CN116402554B - Advertisement click rate prediction method, system, computer and readable storage medium - Google Patents


Info

Publication number
CN116402554B
CN116402554B (application CN202310667085.8A)
Authority
CN
China
Prior art keywords
feature
matrix
features
training
test
Prior art date
Legal status
Active
Application number
CN202310667085.8A
Other languages
Chinese (zh)
Other versions
CN116402554A (en)
Inventor
姚尧之
黄亚雄
廖常训
Current Assignee
Jiangxi Moment Interactive Technology Co ltd
Original Assignee
Jiangxi Moment Interactive Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangxi Moment Interactive Technology Co ltd filed Critical Jiangxi Moment Interactive Technology Co ltd
Priority to CN202310667085.8A priority Critical patent/CN116402554B/en
Publication of CN116402554A publication Critical patent/CN116402554A/en
Application granted granted Critical
Publication of CN116402554B publication Critical patent/CN116402554B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0242Determining effectiveness of advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/086Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Strategic Management (AREA)
  • Computing Systems (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physiology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an advertisement click rate prediction method, system, computer and readable storage medium. The method comprises: obtaining running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion; performing migration processing on the test sample set to obtain a processing test feature set; performing feature vector conversion on the processing test feature set to obtain test feature vectors; sequentially performing migration processing and feature vector conversion on the training sample set to obtain training feature vectors; inputting the training feature vectors into a preset neural network prediction model for training to obtain a trained neural network prediction model; and inputting the test feature vectors into the trained model to output a prediction result of the advertisement click rate. The method extracts the links and information among features, thereby improving the accuracy and speed of model prediction.

Description

Advertisement click rate prediction method, system, computer and readable storage medium
Technical Field
The invention belongs to the technical field of advertisement click rate prediction, and particularly relates to an advertisement click rate prediction method, an advertisement click rate prediction system, a computer and a readable storage medium.
Background
The advertisement click rate is an important evaluation basis for online advertisement marketing. However, the randomness of advertisement clicks and environmental factors make advertisement click data very sparse and unbalanced. When features are extracted from such data, the randomness means the extracted features are not closely related, so the relations between features and the corresponding feature information cannot be obtained accurately; the sparsity means the extracted features are high-dimensional sparse features. As a result, when a model predicts the click rate from these features, the error is large and the advertisement click rate cannot be predicted accurately from the advertisement data.
Disclosure of Invention
In order to solve the technical problems, the invention provides an advertisement click rate prediction method, an advertisement click rate prediction system, a computer and a readable storage medium, which are used for solving the technical problems in the prior art.
In a first aspect, the present invention provides the following technical solutions, and an advertisement click rate prediction method, where the method includes:
acquiring running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
performing migration processing on the test sample set to obtain a processing test feature set;
performing feature vector conversion on the processing test feature set to obtain a test feature vector;
sequentially performing migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of advertisement click rate;
the step of performing migration processing on the test sample set to obtain a processed test feature set includes:
calculating the feature values and attribute values of all feature data in the test sample set, selecting feature data whose feature value is null or whose attribute value is identical across samples as common features, and eliminating the common features from the test sample set to obtain key features;
based on the key features, constructing a similarity matrix A according to a K-nearest neighbor algorithm:
where K indicates the number of neighbors, d_i indicates the Euclidean distance of the i-th key feature, and d_j indicates the Euclidean distance of the j-th key feature;
and constructing an adjacency graph according to the similarity matrix, solving selected features based on the adjacency graph, and performing migration mapping processing on the selected features to obtain processing test features.
Compared with the prior art, the application has the beneficial effects that: firstly, acquiring running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion; performing migration treatment on the test sample set to obtain a treated test feature set; then, carrying out feature vector conversion on the processing test feature set to obtain a test feature vector; and finally, sequentially carrying out migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, inputting the test feature vector into the training neural network prediction model for outputting a prediction result of the advertisement click rate.
Preferably, the step of constructing an adjacency graph according to the similarity matrix and solving a selection feature based on the adjacency graph, and performing migration mapping processing on the selection feature to obtain a processing test feature includes:
constructing an adjacency graph based on the key features and determining a diagonal matrix C corresponding to the adjacency graph:
where A_i indicates the similarity matrix entry corresponding to the i-th key feature, and n represents the number of key features;
determining the Laplacian matrix L of the adjacency graph based on the diagonal matrix C, and solving according to the Laplacian matrix L to obtain a plurality of selection features:
where λ is an eigenvalue, y is a key feature, and y^T is the transposed matrix of the key feature;
migration mapping the selected features into feature views and calculating mapping errors of the selected features
In the method, in the process of the invention,for selecting the number of features +.>For mapping the previous selection feature +.>For the selection feature after mapping, +.>A mapping matrix for feature migration;
selecting the selection features whose migration error is smaller than the error threshold as the processing test feature set.
Preferably, the step of performing feature vector conversion on the processing test feature set to obtain a test feature vector includes:
extracting a positive sample data set of the processing test feature set, selecting grouping features in the positive sample data set, segmenting the positive sample data set based on the grouping features to obtain a plurality of sub-data sets, and selecting sequence features in each sub-data set;
calculating the difference degree and the coverage rate based on the grouping features and the sequence features, and grouping the features based on the difference degree and the coverage rate to obtain a plurality of feature combinations consisting of grouping features and sequence features;
a context sequence in each of the feature combinations is extracted based on the sequence features, and a number of test feature vectors are generated based on the context sequences.
Preferably, the step of calculating the degree of difference and the coverage based on the grouping feature and the sequence feature includes:
calculating the distribution area of the sequence features corresponding to each sub-data set, calculating the non-overlapping area proportion of each two sub-data sets, and taking the average value of all the non-overlapping area proportions as the difference degree;
and calculating the Lorenz curves of the processing test feature set and the positive sample data set corresponding to the sequence features, and calculating the ratio between the areas under the two Lorenz curves to obtain the coverage rate.
Preferably, the step of extracting a context sequence in each of the feature combinations based on the sequence features, and generating a number of test feature vectors based on the context sequence includes:
extracting sequence features corresponding to each sub-data set in the feature combination as a context sequence;
calculating the co-occurrence times of each sub-data set under the sequence characteristics in the context sequence, and generating an association matrix based on the co-occurrence times;
for the association matrix M there is an eigendecomposition into a diagonal matrix; the eigenvalues of the diagonal matrix are arranged sequentially from top left to bottom right in descending order, and the largest top G eigenvalues are selected to obtain a conversion matrix T; the conversion matrix T is split and folded into a first matrix P and the transposed matrix of a second matrix Q, where:
where P is the first folding matrix and Q is the second folding matrix;
defining a loss function, iteratively solving the first folding matrix and the second folding matrix according to the loss function, and decomposing the first folding matrix and the second folding matrix to obtain a plurality of test feature vectors, where the expression of the loss function L is:
where M_ij is the element in row i, column j of the association matrix, P_ig is the element in row i, column g of the first folding matrix, Q_gj is the element in row g, column j of the second folding matrix, P_i denotes all elements of row i of the first folding matrix, Q_j denotes all elements of column j of the second folding matrix, and λ1 and λ2 are regularization coefficients.
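One hedged way to realise the iterative solving of the two folding matrices is stochastic gradient descent on the regularised squared-error loss above; the rank G, learning rate, regularisation strength and toy association matrix below are illustrative assumptions, not values from the patent.

```python
import random

# Hedged sketch: approximate the association matrix M by the product of the
# first folding matrix P (n x G) and second folding matrix Q (G x m),
# minimising squared reconstruction error plus L2 penalties via SGD.
def factorise(M, G=2, lr=0.02, reg=0.001, epochs=4000, seed=0):
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    P = [[rng.uniform(-0.1, 0.1) for _ in range(G)] for _ in range(n)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(m)] for _ in range(G)]
    for _ in range(epochs):
        for i in range(n):
            for j in range(m):
                pred = sum(P[i][g] * Q[g][j] for g in range(G))
                e = M[i][j] - pred
                for g in range(G):
                    # gradient step on the regularised squared error
                    P[i][g] += lr * (e * Q[g][j] - reg * P[i][g])
                    Q[g][j] += lr * (e * P[i][g] - reg * Q[g][j])
    return P, Q

M = [[4.0, 0.0], [0.0, 4.0]]   # toy association matrix, illustrative only
P, Q = factorise(M)
recon = [[sum(P[i][g] * Q[g][j] for g in range(len(Q))) for j in range(2)]
         for i in range(2)]
```

After training, the rows of P (or columns of Q) can serve as the low-dimensional test feature vectors the step describes.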
Preferably, the step of inputting the training feature vector into a preset neural network prediction model to perform training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate includes:
initializing model weights of the preset neural network prediction model through genetic operators, inputting training feature vectors into the preset neural network prediction model for training, and calculating corresponding training errors;
calculating the fitness of each genetic operator according to the training error, screening the genetic operators according to the fitness, and selecting the excellent genetic operators;
sequentially applying replication, crossover selection and mutation to the excellent genetic operators, so as to iteratively optimize the preset neural network prediction model several times and obtain several iteration operators;
Selecting an iteration operator with the minimum training error as a final operator, mapping the final operator to the preset neural network prediction model to obtain a final weight, and inputting the final weight into the preset neural network prediction model to obtain a training neural network prediction model;
and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate.
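The genetic-operator training loop of the step above can be sketched as follows; the toy one-weight model, the fitness definition (lower training error ranks higher) and all GA hyper-parameters are illustrative assumptions standing in for the preset neural network prediction model.

```python
import random

def train_error(w, data):
    # mean squared error of a one-weight linear model y = w * x
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

# Hedged sketch: candidate weights are scored by training error, the fitter
# half is kept (selection), then copied, crossed over and mutated.
def genetic_search(data, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: train_error(w, data))   # fitness ranking
        elite = pop[: pop_size // 2]                   # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2                        # crossover (averaging)
            child += rng.gauss(0, 0.1)                 # mutation
            children.append(child)
        pop = elite + children
    # the final operator: the iteration operator with minimum training error
    return min(pop, key=lambda w: train_error(w, data))

data = [(x, 3.0 * x) for x in range(1, 6)]   # ideal weight is 3.0
best = genetic_search(data)
```

In the patent's setting the scalar weight would be a full weight vector mapped back into the neural network before the final training run.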
In a second aspect, the present invention provides a system for predicting an advertisement click rate, the system comprising:
the sample dividing module is used for obtaining running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
the processing module is used for carrying out migration processing on the test sample set to obtain a processed test feature set;
the conversion module is used for carrying out feature vector conversion on the processing test feature set so as to obtain a test feature vector;
the prediction module is used for sequentially carrying out migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate;
The processing module comprises:
the feature screening sub-module is used for calculating the feature values and attribute values of all feature data in the test sample set, selecting feature data whose feature value is null or whose attribute value is identical across samples, taking such feature data as common features, and eliminating the common features from the test sample set to obtain the key features;
the similarity matrix construction submodule is used for constructing a similarity matrix A based on the key features and according to a K-nearest neighbor algorithm:
where K indicates the number of neighbors, d_i indicates the Euclidean distance of the i-th key feature, and d_j indicates the Euclidean distance of the j-th key feature;
and the migration submodule is used for constructing an adjacency graph according to the similarity matrix, solving the selection features based on the adjacency graph, and performing migration mapping processing on the selection features to obtain the processing test features.
In a third aspect, the present invention provides a computer, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for predicting an advertisement click rate when executing the computer program.
In a fourth aspect, the present invention provides a readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements the advertisement click rate prediction method described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an advertisement click rate prediction method according to a first embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S2 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S23 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 4 is a detailed flowchart of step S3 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 5 is a detailed flowchart of step S32 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 6 is a detailed flowchart of step S33 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 7 is a detailed flowchart of step S4 in the advertisement click rate prediction method according to the first embodiment of the present invention;
FIG. 8 is a block diagram illustrating a second embodiment of an advertisement click rate prediction system according to the present invention;
fig. 9 is a block diagram of a hardware structure of a computer according to another embodiment of the present invention.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Detailed Description
In order that the invention may be readily understood, a more complete description of the invention will be rendered by reference to the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Example 1
As shown in fig. 1, in a first embodiment of the present invention, the present invention provides a method for predicting an advertisement click rate, where the method includes:
S1, acquiring running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
specifically, the running log data of the advertisements comprises advertisement information, data on records clicked by users and data on records not clicked. The proportion of clicked records in the whole running log is small, and because of this imbalance and sparsity the advertisement data requires the subsequent processing;
meanwhile, the whole running log is a piece of time-series data, so it can be divided proportionally into a test sample set and a training sample set. To ensure the accuracy of model training, the preset proportion is 2:8, with the test sample set accounting for 20% of the total and the training samples for 80%.
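As a hedged illustration of step S1, the chronological 2:8 split can be sketched as follows; the record layout and the choice of which slice serves as the test set are assumptions, since the text only fixes the 20%/80% proportion.

```python
# Hedged sketch: chronological 80/20 split of time-ordered ad log records.
def split_log(records, test_ratio=0.2):
    """Split time-ordered log records into (test, train) sets.

    The earlier 20% is taken as the test sample set here; which slice
    plays which role is an assumption, not stated in the patent.
    """
    records = sorted(records, key=lambda r: r["timestamp"])
    cut = int(len(records) * test_ratio)
    return records[:cut], records[cut:]

# toy log: 100 time-stamped records with a click flag
logs = [{"timestamp": t, "clicked": t % 7 == 0} for t in range(100)]
test_set, train_set = split_log(logs)
```

Splitting chronologically (rather than randomly) respects the time-series nature of the log noted above.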
S2, performing migration treatment on the test sample set to obtain a treated test feature set;
specifically, migration processing of the test sample set takes the sparsity and imbalance of the data fully into account. Feature migration and mapping into feature views increase the amount of predictive information fed to the model, establish relations between sparse features, and guarantee the effectiveness of the feature information, so the accuracy of model prediction can be improved.
As shown in fig. 2, the step S2 includes:
s21, calculating the characteristic values and the attribute values of all the characteristic data in the test sample set, selecting the characteristic data with the null characteristic value or the same attribute value in the test sample set as common characteristics, and eliminating the common characteristics in the test sample set to obtain key characteristics;
specifically, in the advertisement data the proportion of records clicked by users is small, so this step searches for the data features of clicked records to improve the prediction accuracy of the model. In the test sample set, features whose values are all identical or all null do not help click-rate prediction; they increase the difficulty of the feature migration processing and slow down model prediction. The common features are therefore removed and the key features retained, since the key features reflect the characteristics of advertisements that users click.
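The common-feature screening of step S21 can be sketched as below; the column-oriented data layout and the exact null/constant tests are illustrative assumptions.

```python
# Hedged sketch of S21: drop "common features" -- columns whose values are
# all null or all identical -- keeping only the key features.
def key_features(columns):
    """Return the columns that are neither all-null nor constant."""
    kept = {}
    for name, values in columns.items():
        non_null = [v for v in values if v is not None]
        if not non_null:                     # feature value is null everywhere
            continue
        if len(set(non_null)) == 1 and len(non_null) == len(values):
            continue                         # same attribute value everywhere
        kept[name] = values
    return kept

cols = {
    "ad_id":   [1, 2, 3, 4],
    "country": ["CN", "CN", "CN", "CN"],     # constant -> common feature
    "bid":     [None, None, None, None],     # null -> common feature
}
kept = key_features(cols)
```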
S22, constructing a similarity matrix A based on the key features and according to a K-nearest neighbor algorithm:
where K indicates the number of neighbors, d_i indicates the Euclidean distance of the i-th key feature, and d_j indicates the Euclidean distance of the j-th key feature.
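A hedged sketch of the K-nearest-neighbour construction in S22 follows; the exact entries of the similarity matrix are not legible in the text, so a symmetric binary KNN graph over Euclidean distances stands in for A.

```python
import math

# Hedged sketch of S22: build a K-nearest-neighbour similarity matrix over
# the key features, connecting each feature to its k closest neighbours.
def knn_similarity(points, k):
    n = len(points)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # indices of the k nearest neighbours of point i (excluding itself)
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: dist(points[i], points[j]))
        for j in order[:k]:
            A[i][j] = A[j][i] = 1.0   # symmetrise: undirected neighbour graph
    return A

pts = [(0, 0), (0, 1), (5, 5), (5, 6)]   # two tight clusters, illustrative
A = knn_similarity(pts, k=1)
```

A weighted variant (e.g. Gaussian of the distance) would fit the same slot if the patent's matrix carries weights rather than indicators.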
S23, constructing an adjacency graph according to the similarity matrix, solving selected features based on the adjacency graph, and performing migration mapping processing on the selected features to obtain processing test features;
as shown in fig. 3, the step S23 includes:
s231, constructing an adjacent graph based on the key features and determining a diagonal matrix C corresponding to the adjacent graph:
where A_i indicates the similarity matrix entry corresponding to the i-th key feature, and n represents the number of key features;
when the adjacency graph is constructed, a diagonal matrix is generated correspondingly; this diagonal matrix is formed from the connection weights of each data node among the key features, and can be used to calculate the Laplacian matrix of the adjacency graph.
S232, determining the Laplacian matrix L of the adjacency graph based on the diagonal matrix C, and solving according to the Laplacian matrix L to obtain several selection features:
where λ is an eigenvalue, y is a key feature, and y^T is the transposed matrix of the key feature;
specifically, in this step the selection features are solved from the above formula. The selection features are the first several features chosen from the key features, the number of eigenvalues equals the number of selection features, and the eigenvalues are arranged sequentially from small to large.
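The degree matrix and Laplacian of S231-S232 can be sketched as follows; the eigen-solving step that yields the selection features is described only in comments, since the patent does not name a concrete solver.

```python
# Hedged sketch of S231-S232: build the diagonal degree matrix C from the
# similarity matrix A and form the graph Laplacian L = C - A.  The selection
# features then correspond to eigenvectors of L with the smallest eigenvalues
# (a Laplacian-eigenmaps-style reading); the eigendecomposition itself would
# be delegated to a numerical library in practice.
def laplacian(A):
    n = len(A)
    # C is diagonal: each entry is the total connection weight of one node
    C = [[sum(A[i]) if i == j else 0.0 for j in range(n)] for i in range(n)]
    L = [[C[i][j] - A[i][j] for j in range(n)] for i in range(n)]
    return C, L

# toy path graph 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
C, L = laplacian(A)
```

Every row of an unnormalized Laplacian sums to zero, which is a quick sanity check on the construction.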
S233, migration mapping the selected features into feature views, and calculating mapping errors of the selected features
where K is the number of selection features, x_i is the selection feature before mapping, x_i' is the selection feature after mapping, and W is the mapping matrix for feature migration;
specifically, when there is one feature view, K views are generated after the selection features are mapped into it; when there are two feature views, K(K-1)/2 views are generated. Mapping the selection features into feature views therefore reveals the relations between important features, so more prediction information can be obtained.
S234, selecting the selected characteristic corresponding to the migration error smaller than the error threshold as a processing test characteristic set;
in particular, in the ideal case the migration error approaches 0, but in practice too few selection features have an error near 0; with too few features the data cannot be expressed accurately and the accuracy of model prediction suffers. Setting an error threshold therefore increases the number of selection features retained, enlarging the processing test feature set and improving prediction accuracy. It is worth mentioning that the size of the error threshold can be determined according to the number of selection-feature samples.
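A minimal sketch of the error-threshold selection in S233-S234 follows, assuming a squared-distance mapping error between a feature and its mapped image; the mapped values and the threshold are illustrative.

```python
# Hedged sketch of S233-S234: score each selection feature by its mapping
# error and keep those below the error threshold.
def filter_by_mapping_error(features, mapped, threshold):
    """Keep features whose squared reconstruction error is below threshold."""
    kept = []
    for x, x_mapped in zip(features, mapped):
        err = sum((a - b) ** 2 for a, b in zip(x, x_mapped))
        if err < threshold:
            kept.append(x)
    return kept

feats  = [(1.0, 2.0), (3.0, 4.0)]
mapped = [(1.1, 2.0), (0.0, 0.0)]   # second feature maps badly
kept = filter_by_mapping_error(feats, mapped, threshold=0.5)
```

Raising the threshold admits more features, matching the trade-off the paragraph above describes.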
S3, carrying out feature vector conversion on the processing test feature set to obtain a test feature vector;
specifically, the intra-domain relationship of the high-dimensional sparse feature can be learned by performing feature vector conversion on the processing test feature set, so that feature information contained in the feature is fully acquired, and the feature is fully expressed.
As shown in fig. 4, the step S3 includes:
s31, extracting a positive sample data set of the processing test feature set, selecting grouping features in the positive sample data set, segmenting the positive sample data set based on the grouping features to obtain a plurality of sub data sets, and selecting sequence features in each sub data set;
specifically, since the common features have already been removed in the step above, some mixed features may remain in the processing test feature set, that is, clicked data mixed with non-clicked data. In this step the features corresponding to clicked data are therefore extracted from the processing test feature set to obtain a positive sample data set, which fully reflects the feature information. The grouping features are arbitrarily selected features: in the actual segmentation, each segmentation arbitrarily selects one grouping feature, so the number of grouping features obtained equals the number of elements in the positive sample data set;
Correspondingly, the sequence features are the features that generate the sequence, one corresponding to each of the remaining sub-data sets, so the number of grouping features is the same as the number of sequence features; in the subsequent feature combination, each combination comprises one grouping feature and one sequence feature.
S32, calculating the difference degree and the coverage rate based on the grouping features and the sequence features, and grouping the features based on the difference degree and the coverage rate to obtain a plurality of feature combinations consisting of grouping features and sequence features;
as shown in fig. 5, the step S32 includes:
s321, calculating the distribution area of the sequence features corresponding to each sub-data set, calculating the non-overlapping area proportion of each two sub-data sets, and taking the average value of all the non-overlapping area proportions as the difference degree;
specifically, since there are several sequence features, several groups of difference degrees are calculated. The difference degree measures how different the sub-data sets are from one another: the larger it is, the more distinctive the sequence within a sub-data set, and different feature combinations can be judged from the difference degree according to the specific situation.
S322, calculating the Lorenz curves of the sequence features over the processing test feature set and over the positive sample data set, and calculating the ratio between the lower areas of the two Lorenz curves to obtain the coverage rate;
Specifically, the lower area in this step refers to the area enclosed between the Lorenz curve and the X-axis. As in step S321, the values of the elements in the positive sample data set may be only a subset of all possible values, and their distribution may differ somewhat from the original distribution; in this step, therefore, the feature combination is determined by calculating the intra-feature coverage between the original processing test feature set and the extracted positive sample data set;
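A minimal sketch of this coverage computation, assuming the "lower area" is the trapezoidal-rule area under the Lorenz curve formed by the sorted cumulative value shares; the value sets are illustrative and values are assumed positive:

```python
def lorenz_area(values):
    """Area under the Lorenz curve of a set of positive values."""
    xs = sorted(values)
    total = sum(xs)
    n = len(xs)
    cum, area, prev = 0.0, 0.0, 0.0
    for x in xs:
        cum += x / total                 # cumulative share of the total
        area += (prev + cum) / (2 * n)   # trapezoid of width 1/n
        prev = cum
    return area

full_set = [1, 2, 3, 4]        # stand-in for the processing test feature set
positive_subset = [2, 3]       # stand-in for the positive sample data set
coverage = lorenz_area(positive_subset) / lorenz_area(full_set)
```

A coverage near 1 indicates the positive subset's distribution shape closely tracks the full feature set's.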
it should be noted that steps S321 and S322 above provide two determination indexes for the feature combinations, namely the difference degree and the coverage rate; based on these two indexes, the suitability of the various feature combinations for representing the distribution can be judged, and the specific determination process refers to the subsequent experimental results.
S33, extracting a context sequence in each feature combination based on the sequence features, and generating a plurality of test feature vectors based on the context sequence;
specifically, the context sequence carries context information that expresses the meaning of the features, and hence the meaning of the target; the elements within a context sequence are similar to one another, and in the statistical scenario the context sequence is used to express the feature information shared among the several feature combinations.
As shown in fig. 6, the step S33 includes:
s331, extracting sequence features corresponding to each sub-data set in the feature combination as a context sequence;
specifically, when extracting a context sequence, the extraction process needs to be optimized: over-long sequences are split up or deleted, so that the feature expression capability of the context sequence is ensured.
S332, calculating the co-occurrence times of each sub-data set under the sequence characteristics in the context sequence, and generating an association matrix based on the co-occurrence times;
specifically, each element in the context sequence may be regarded as a marker, and the co-occurrence count represents a combination of consecutive markers appearing in the same sequence; accordingly, in the association matrix, the element in row i and column j is the number of co-occurrences of the marker of row i with marker j, so each row of the association matrix can serve as the representation of one marker.
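The co-occurrence counting of step S332 can be sketched as below, counting consecutive marker pairs within each context sequence and filling a symmetric association matrix; the sequences and vocabulary are illustrative:

```python
def association_matrix(sequences, vocab):
    """Count co-occurrences of consecutive markers within the same sequence."""
    idx = {tok: i for i, tok in enumerate(vocab)}
    n = len(vocab)
    mat = [[0] * n for _ in range(n)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive marker pairs
            mat[idx[a]][idx[b]] += 1
            mat[idx[b]][idx[a]] += 1     # symmetric representation
    return mat

seqs = [["ad1", "ad2", "ad3"], ["ad1", "ad2"]]
vocab = ["ad1", "ad2", "ad3"]
M = association_matrix(seqs, vocab)
```

Row i of `M` then serves as the raw representation of marker i, to be compressed by the folding step that follows.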
S333, for the association matrix M there is M = U·Σ·V^T; the eigenvalues of the N-order diagonal matrix Σ are arranged from large to small from top left to bottom right, and the largest first G eigenvalues are selected to obtain the conversion matrix Σ_G; the conversion matrix Σ_G is then split and folded into the matrix U and into the transposed matrix V^T:
P = U·Σ_G^(1/2), Q^T = Σ_G^(1/2)·V^T
where P is the first folding matrix and Q^T is the second folding matrix;
specifically, M = U·Σ·V^T is the conventional singular value decomposition process; however, the conventional singular value decomposition applies to orthogonal matrices, and the association matrix in this embodiment is not necessarily orthogonal. Therefore, in this embodiment the square root of the N-order diagonal matrix Σ_G is taken, splitting it into the product of two factors Σ_G^(1/2); the two factors Σ_G^(1/2) are folded into the matrix U and into the transposed matrix V^T respectively, thereby forming the first folding matrix and the second folding matrix;
it should be noted that, in order to reduce the difficulty and complexity of the operation, the elements of the diagonal matrix are arranged from large to small in this embodiment and the first G eigenvalues are selected; meanwhile, since only the first G eigenvalues are retained, the last N−G columns of U and the last N−G rows of V^T take null values, so the first folding matrix and the second folding matrix can be generated.
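Under this standard truncated-SVD reading of step S333, the split-folding can be sketched with NumPy; the truncation rank G and the toy matrix are illustrative:

```python
import numpy as np

def split_fold(M, G):
    """Truncated SVD M ~ U_G S_G V_G^T, folding sqrt(S_G) into both factors."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    root = np.sqrt(np.diag(s[:G]))       # S_G^(1/2)
    P = U[:, :G] @ root                  # first folding matrix
    Qt = root @ Vt[:G, :]                # second folding matrix (transposed)
    return P, Qt

M = np.array([[4.0, 0.0],
              [0.0, 1.0]])              # toy association matrix
P, Qt = split_fold(M, G=2)
```

Because G here equals the full rank, `P @ Qt` reconstructs `M` exactly; with G smaller than the rank it yields the best rank-G approximation.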
S334, defining a loss function, iteratively solving the first folding matrix and the second folding matrix according to the loss function, and decomposing the first folding matrix and the second folding matrix to obtain a plurality of test feature vectors, where the loss function Loss is expressed as:
Loss = Σ_(i,j) ( m_ij − Σ_k p_ik·q_kj )² + λ1·‖p_i‖² + λ2·‖q_j‖²
where m_ij is the element in row i and column j of the association matrix, p_ik is the element in row i and column k of the first folding matrix, q_kj is the element in row k and column j of the second folding matrix, p_i denotes all elements of row i of the first folding matrix, q_j denotes all elements of column j of the second folding matrix, and λ1 and λ2 are regularization coefficients;
specifically, by iteratively solving the matrix, the operation speed can be improved and the matrix decomposition time can be controlled.
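One way to solve the factorization iteratively, as S334 describes, is plain gradient descent on the stated regularized squared-error loss; the learning rate, step count, regularization weight, and toy matrix below are all assumptions for illustration:

```python
import numpy as np

def factorize(M, G, steps=500, lr=0.05, lam=0.01, seed=0):
    """Gradient descent on  sum (m_ij - p_i.q_j)^2 + lam(|p_i|^2 + |q_j|^2)."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    P = rng.normal(scale=0.1, size=(n, G))   # first folding matrix
    Q = rng.normal(scale=0.1, size=(G, m))   # second folding matrix
    for _ in range(steps):
        E = M - P @ Q                        # residual matrix
        P_next = P + lr * (E @ Q.T - lam * P)  # descent step on P
        Q = Q + lr * (P.T @ E - lam * Q)       # descent step on Q
        P = P_next
    return P, Q

M = np.array([[5.0, 3.0],
              [4.0, 2.0]])
P, Q = factorize(M, G=2)
err = float(np.abs(M - P @ Q).max())
```

The rows of `P` serve as the compressed marker representations, i.e. the test feature vectors.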
S4, sequentially performing migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate;
specifically, in this step, the training sample set needs to undergo the same processing steps as the test sample set, that is, the processing steps disclosed in the above step S2 and the above step S3, so that the consistency of the test sample and the training sample is ensured.
As shown in fig. 7, the step S4 includes:
S41, initializing model weights of the preset neural network prediction model through a genetic operator, inputting training feature vectors into the preset neural network prediction model for training, and calculating corresponding training errors.
S42, calculating the fitness of the genetic operators according to the training errors, testing the genetic operators according to the fitness, and selecting excellent genetic operators.
Specifically, the test in this step is a quality test, with the aim of selecting and inheriting competitive operators to the next generation.
S43, sequentially copying, cross-selecting and mutating the excellent genetic operators to perform iterative optimization on the preset neural network prediction model for a plurality of times and obtain a plurality of iterative operators;
specifically, the cross-selection process selects a number of individuals from the previous generation of genetic operators as parent operators according to a preset probability and copies them to the next generation; the mutation treatment randomly selects any one of the selected operators and applies an inversion operation to it to obtain a new mutated operator. Repeating this process evolves the population from generation to generation, producing increasingly accurate approximate solutions.
S44, selecting an iteration operator with the minimum training error as a final operator, mapping the final operator to the preset neural network prediction model to obtain a final weight, and inputting the final weight into the preset neural network prediction model to obtain a training neural network prediction model;
Specifically, the final operator, determined as the iteration operator with the smallest training error, is the accurate optimal operator; it is mapped into the neural network, namely the model of this embodiment, and the model weights of the model are trained to obtain the final weights.
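Steps S41–S44 can be sketched as a minimal genetic algorithm. The stand-in fitness (squared distance of a 2-d weight vector from a known optimum) replaces the network training error purely for illustration, Gaussian perturbation stands in for the patent's inversion-based mutation, and all hyperparameters are assumptions:

```python
import random

def evolve(fitness, dim, pop_size=30, generations=60, p_mut=0.5, seed=1):
    """Minimal GA: elitist selection, one-point crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # smaller training error = fitter
        parents = pop[: pop_size // 2]           # keep the better half (S42)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]            # one-point crossover (S43)
            if rng.random() < p_mut:             # mutation: perturb one gene
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)                 # final operator (S44)

# Stand-in fitness: squared distance of the weight vector from a known optimum.
target = [0.3, -0.7]
def training_error(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

best = evolve(training_error, dim=2)
```

Keeping the parents unchanged in each generation (elitism) guarantees the best training error never worsens across iterations.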
S45, inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate.
The beneficial effects of this embodiment are as follows: firstly, running log data of advertisements are acquired to construct an original sample set, and the original sample set is divided into a test sample set and a training sample set according to a preset proportion; migration processing is then performed on the test sample set to obtain a processing test feature set; next, feature vector conversion is performed on the processing test feature set to obtain test feature vectors; finally, migration processing and feature vector conversion are performed sequentially on the training sample set to obtain training feature vectors, the training feature vectors are input into a preset neural network prediction model for training to obtain a training neural network prediction model, and the test feature vectors are input into the training neural network prediction model to output the prediction result of the advertisement click rate.
Example two
As shown in fig. 8, in a second embodiment of the present invention, there is provided an advertisement click rate prediction system including:
the sample dividing module 1 is used for obtaining running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
the processing module 2 is used for performing migration processing on the test sample set to obtain a processed test feature set;
the conversion module 3 is used for carrying out feature vector conversion on the processing test feature set so as to obtain a test feature vector;
and the prediction module 4 is used for sequentially carrying out migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate.
Wherein the processing module 2 comprises:
the feature screening sub-module is used for calculating the feature values and attribute values of all feature data in the test sample set, selecting the feature data whose feature value is null or whose attribute values are all identical as common features, and eliminating the common features from the test sample set to obtain the key features;
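A minimal sketch of this screening rule, dropping all-null and single-valued columns; the column names are illustrative:

```python
def screen_features(table):
    """Drop columns that are entirely null or single-valued (common features)."""
    keep = {}
    for name, column in table.items():
        non_null = [v for v in column if v is not None]
        if non_null and len(set(non_null)) > 1:   # value varies -> key feature
            keep[name] = column
    return keep

table = {
    "os":       ["ios", "android", "ios"],
    "country":  ["cn", "cn", "cn"],       # constant attribute -> removed
    "crash_id": [None, None, None],       # all null -> removed
}
key_features = screen_features(table)
```

Only columns that actually vary survive as key features for the similarity-matrix stage.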
the similarity matrix construction sub-module is used for constructing a similarity matrix A based on the key features according to the K-nearest neighbor algorithm, where K denotes the number of neighbors, d_i denotes the Euclidean distance of the i-th key feature, and d_j denotes the Euclidean distance of the j-th key feature;
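Since the patent's exact similarity expression is not reproduced here, the sketch below assumes a common Gaussian-kernel form restricted to each point's K nearest Euclidean neighbors; `sigma` and the sample points are illustrative:

```python
import numpy as np

def knn_similarity(X, k, sigma=1.0):
    """Gaussian-kernel similarity restricted to each point's k nearest neighbors.
    (Assumed form; the patent's exact expression is not reproduced here.)"""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # Euclidean distances
    A = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(D[i])[1 : k + 1]                # skip self at index 0
        A[i, nn] = np.exp(-D[i, nn] ** 2 / (2 * sigma**2))
    return np.maximum(A, A.T)                            # symmetrize

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
A = knn_similarity(X, k=1)
```

Non-neighbors keep similarity 0, so the matrix doubles as the adjacency structure of the graph built in the next sub-module.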
and the migration submodule is used for constructing an adjacency graph according to the similar matrix, solving the selection feature based on the adjacency graph, and carrying out migration mapping processing on the selection feature to obtain the processing test feature.
The migration submodule includes:
the adjacency unit is used for constructing an adjacency graph based on the key features and determining the diagonal matrix C corresponding to the adjacency graph:
C_ii = Σ_(j=1)^(n) A_ij
where A_ij denotes the similarity corresponding to the j-th key feature and n represents the number of key features;
the feature determination unit is used for determining the Laplacian matrix L = C − A of the adjacency graph based on the diagonal matrix C, and solving the generalized eigenvalue problem L·X = λ·C·X according to the Laplacian matrix L to obtain a plurality of selection features;
where λ is the eigenvalue, X is a key feature, and X^T is the transposed matrix of the key features;
the error calculation unit is used for migration mapping the selection features into the feature view and calculating the mapping error E of the selection features:
E = Σ_(i=1)^(m) ‖ y_i − W·x_i ‖²
where m is the number of selection features, x_i is the selection feature before mapping, y_i is the selection feature after mapping, and W is the mapping matrix for feature migration;
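The chain from similarity matrix to diagonal matrix, Laplacian, spectral selection features, and mapping error can be sketched as follows; the toy similarity matrix, and solving the generalized eigenproblem via C^(-1)·L, are assumptions made for illustration:

```python
import numpy as np

A = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.2],
              [0.1, 0.2, 0.0]])          # toy similarity matrix
C = np.diag(A.sum(axis=1))               # diagonal (degree) matrix
L = C - A                                # graph Laplacian

# Generalized problem L x = lambda C x, solved here via C^(-1) L
# (C is invertible for this connected toy graph).
vals, vecs = np.linalg.eig(np.linalg.inv(C) @ L)
order = np.argsort(vals.real)
selection = vecs.real[:, order]          # selection features, ordered by eigenvalue

def mapping_error(X, Y, W):
    """Sum of squared residuals ||y_i - W x_i||^2 over the selection features."""
    return float(((Y - X @ W.T) ** 2).sum())
```

Selection features whose mapping error stays below the threshold form the processing test feature set, as the error comparison unit describes.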
and the error comparison unit is used for selecting the selection characteristic corresponding to the migration error smaller than the error threshold value as the processing test characteristic set.
The conversion module 3 comprises:
the extraction sub-module is used for extracting a positive sample data set of the processing test feature set, selecting grouping features in the positive sample data set, segmenting the positive sample data set based on the grouping features to obtain a plurality of sub-data sets, and selecting sequence features in each sub-data set;
the combination determination submodule is used for calculating the difference degree and the coverage rate based on the grouping feature and the sequence feature, and carrying out feature grouping based on the difference degree and the coverage rate so as to obtain a plurality of feature combinations consisting of grouping features and sequence features;
and the vector generation sub-module is used for extracting a context sequence in each feature combination based on the sequence features and generating a plurality of test feature vectors based on the context sequence.
The combination determination submodule includes:
the difference degree calculation unit is used for calculating the distribution area of the sequence features corresponding to each sub-data set, calculating the non-overlapping area proportion of each two sub-data sets, and taking the average value of all the non-overlapping area proportions as the difference degree;
and the coverage rate calculation unit is used for calculating the Lorenz curves of the sequence features over the processing test feature set and over the positive sample data set, and calculating the proportion between the lower areas of the two Lorenz curves to obtain the coverage rate.
The vector generation submodule includes:
a sequence extracting unit, configured to extract a sequence feature corresponding to each sub-dataset in the feature combination as a context sequence;
the incidence matrix generation unit is used for calculating the co-occurrence times of each sub-data set under the sequence characteristics in the context sequence, and generating an incidence matrix based on the co-occurrence times;
the folding unit is used for, for the association matrix M with M = U·Σ·V^T, arranging the eigenvalues of the N-order diagonal matrix Σ from large to small from top left to bottom right, selecting the largest first G eigenvalues to obtain the conversion matrix Σ_G, and split-folding the conversion matrix Σ_G into the matrix U and into the transposed matrix V^T:
P = U·Σ_G^(1/2), Q^T = Σ_G^(1/2)·V^T
where P is the first folding matrix and Q^T is the second folding matrix;
the decomposition unit is configured to define a loss function, iteratively solve the first folding matrix and the second folding matrix according to the loss function, and decompose the first folding matrix and the second folding matrix to obtain a plurality of test feature vectors, where the loss function Loss is expressed as:
Loss = Σ_(i,j) ( m_ij − Σ_k p_ik·q_kj )² + λ1·‖p_i‖² + λ2·‖q_j‖²
where m_ij is the element in row i and column j of the association matrix, p_ik is the element in row i and column k of the first folding matrix, q_kj is the element in row k and column j of the second folding matrix, p_i denotes all elements of row i of the first folding matrix, q_j denotes all elements of column j of the second folding matrix, and λ1 and λ2 are regularization coefficients.
The prediction module 4 includes:
the initialization sub-module is used for initializing the model weight of the preset neural network prediction model through a genetic operator, inputting training feature vectors into the preset neural network prediction model for training, and calculating corresponding training errors;
the fitness calculation sub-module is used for calculating the fitness of the genetic operator according to the training error, testing the genetic operator according to the fitness and selecting an excellent genetic operator;
the operator processing sub-module is used for sequentially copying, cross selecting and mutating the excellent genetic operators so as to perform iterative optimization on the preset neural network prediction model for a plurality of times and obtain a plurality of iterative operators;
the mapping sub-module is used for selecting an iteration operator with the minimum training error as a final operator, mapping the final operator to the preset neural network prediction model to obtain a final weight, and inputting the final weight into the preset neural network prediction model to obtain a training neural network prediction model;
And the prediction sub-module is used for inputting the test feature vector into the training neural network prediction model so as to output a prediction result of the advertisement click rate.
In other embodiments of the present application, a computer is provided in the embodiments of the present application, including a memory 102, a processor 101, and a computer program stored in the memory 102 and executable on the processor 101, where the processor 101 implements the advertisement click rate prediction method as described above when executing the computer program.
In particular, the processor 101 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 102 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 102 may comprise a Hard Disk Drive (HDD), floppy Disk Drive, solid state Drive (Solid State Drive, SSD), flash memory, optical Disk, magneto-optical Disk, tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the foregoing. Memory 102 may include removable or non-removable (or fixed) media, where appropriate. The memory 102 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 102 is a Non-Volatile (Non-Volatile) memory. In a particular embodiment, the Memory 102 includes Read-Only Memory (ROM) and random access Memory (Random Access Memory, RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (Programmable Read-Only Memory, abbreviated PROM), an erasable PROM (Erasable Programmable Read-Only Memory, abbreviated EPROM), an electrically erasable PROM (Electrically Erasable Programmable Read-Only Memory, abbreviated EEPROM), an electrically rewritable ROM (Electrically Alterable Read-Only Memory, abbreviated EAROM), or a FLASH Memory (FLASH), or a combination of two or more of these. The RAM may be Static Random-Access Memory (SRAM) or dynamic Random-Access Memory (Dynamic Random Access Memory DRAM), where the DRAM may be a fast page mode dynamic Random-Access Memory (Fast Page Mode Dynamic Random Access Memory FPMDRAM), extended data output dynamic Random-Access Memory (Extended Date Out Dynamic Random Access Memory EDODRAM), synchronous dynamic Random-Access Memory (Synchronous Dynamic Random-Access Memory SDRAM), or the like, as appropriate.
Memory 102 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 101.
The processor 101 implements the above-described advertisement click rate prediction method by reading and executing computer program instructions stored in the memory 102.
In some of these embodiments, the computer may also include a communication interface 103 and a bus 100. As shown in fig. 9, the processor 101, the memory 102, and the communication interface 103 are connected to each other via the bus 100 and perform communication with each other.
The communication interface 103 is used to implement communications between modules, devices, units, and/or units in embodiments of the application. The communication interface 103 may also enable communication with other components such as: and the external equipment, the image/data acquisition equipment, the database, the external storage, the image/data processing workstation and the like are used for data communication.
Bus 100 includes hardware, software, or both, coupling components of a computer to each other. Bus 100 includes, but is not limited to, at least one of: data Bus (Data Bus), address Bus (Address Bus), control Bus (Control Bus), expansion Bus (Expansion Bus), local Bus (Local Bus). By way of example, and not limitation, bus 100 may include a graphics acceleration interface (Accelerated Graphics Port), abbreviated AGP, or other graphics Bus, an enhanced industry standard architecture (Extended Industry Standard Architecture, abbreviated EISA) Bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an industry standard architecture (Industry Standard Architecture, ISA) Bus, a wireless bandwidth (InfiniBand) interconnect, a Low Pin Count (LPC) Bus, a memory Bus, a micro channel architecture (Micro Channel Architecture, abbreviated MCa) Bus, a peripheral component interconnect (Peripheral Component Interconnect, abbreviated PCI) Bus, a PCI-Express (PCI-X) Bus, a serial advanced technology attachment (Serial Advanced Technology Attachment, abbreviated SATA) Bus, a video electronics standards association local (Video Electronics Standards Association Local Bus, abbreviated VLB) Bus, or other suitable Bus, or a combination of two or more of the foregoing. Bus 100 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
The computer can execute the advertisement click rate prediction method based on the obtained advertisement click rate prediction system, thereby realizing the prediction of the advertisement click rate.
In still other embodiments of the present application, in combination with the above-mentioned advertisement click rate prediction method, embodiments of the present application provide a technical solution, a readable storage medium storing a computer program thereon, where the computer program when executed by a processor implements the above-mentioned advertisement click rate prediction method.
Those of skill in the art will appreciate that the logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
The technical features of the above-described embodiments may be combined arbitrarily; for brevity of description, not all possible combinations of these technical features are enumerated. Nevertheless, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (5)

1. An advertisement click rate prediction method, comprising:
acquiring running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
performing migration treatment on the test sample set to obtain a treated test feature set;
performing feature vector conversion on the processing test feature set to obtain a test feature vector;
Sequentially performing migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of advertisement click rate;
the step of performing migration processing on the test sample set to obtain a processed test feature set includes:
calculating the feature values and attribute values of all the feature data in the test sample set, selecting the feature data whose feature value is null or whose attribute values are all identical as common features, and eliminating the common features from the test sample set to obtain key features;
based on the key features, constructing a similarity matrix A according to a K-nearest neighbor algorithm, where K denotes the number of neighbors, d_i denotes the Euclidean distance of the i-th key feature, and d_j denotes the Euclidean distance of the j-th key feature;
constructing an adjacency graph according to the similarity matrix, solving selected features based on the adjacency graph, and performing migration mapping processing on the selected features to obtain processing test features;
The step of constructing an adjacency graph according to the similarity matrix and solving selected features based on the adjacency graph, and performing migration mapping processing on the selected features to obtain processed test features comprises the following steps:
constructing an adjacency graph based on the key features and determining a diagonal matrix C corresponding to the adjacency graph:
C_ii = Σ_(j=1)^(n) A_ij
where A_ij denotes the similarity corresponding to the j-th key feature and n represents the number of key features;
determining the Laplacian matrix L = C − A of the adjacency graph based on the diagonal matrix C, and solving the generalized eigenvalue problem L·X = λ·C·X according to the Laplacian matrix L to obtain a plurality of selection features;
where λ is the eigenvalue, X is a key feature, and X^T is the transposed matrix of the key features;
migration mapping the selection features into the feature view and calculating the mapping error E of the selection features:
E = Σ_(i=1)^(m) ‖ y_i − W·x_i ‖²
where m is the number of selection features, x_i is the selection feature before mapping, y_i is the selection feature after mapping, and W is the mapping matrix for feature migration;
selecting a selection feature corresponding to the migration error smaller than the error threshold as a processing test feature set;
the step of performing feature vector conversion on the processing test feature set to obtain a test feature vector includes:
Extracting a positive sample data set of the processing test feature set, selecting grouping features in the positive sample data set, segmenting the positive sample data set based on the grouping features to obtain a plurality of sub-data sets, and selecting sequence features in each sub-data set;
calculating the difference degree and the coverage rate based on the grouping features and the sequence features, and grouping the features based on the difference degree and the coverage rate to obtain a plurality of feature combinations consisting of grouping features and sequence features;
extracting a context sequence in each feature combination based on the sequence features, and generating a plurality of test feature vectors based on the context sequence;
the step of calculating the degree of difference and the coverage based on the grouping feature and the sequence feature includes:
calculating the distribution area of the sequence features corresponding to each sub-data set, calculating the non-overlapping area proportion of each two sub-data sets, and taking the average value of all the non-overlapping area proportions as the difference degree;
calculating the Lorenz curves of the sequence features over the processing test feature set and over the positive sample data set, and calculating the proportion between the lower areas of the two Lorenz curves to obtain the coverage rate;
The step of extracting a context sequence in each of the feature combinations based on the sequence features and generating a number of test feature vectors based on the context sequence comprises:
extracting sequence features corresponding to each sub-data set in the feature combination as a context sequence;
calculating the co-occurrence times of each sub-data set under the sequence characteristics in the context sequence, and generating an association matrix based on the co-occurrence times;
for the association matrix M there is M = U·Σ·V^T; arranging the eigenvalues of the N-order diagonal matrix Σ from large to small from top left to bottom right, and selecting the largest first G eigenvalues to obtain the conversion matrix Σ_G; split-folding the conversion matrix Σ_G into the matrix U and into the transposed matrix V^T:
P = U·Σ_G^(1/2), Q^T = Σ_G^(1/2)·V^T
where P is the first folding matrix and Q^T is the second folding matrix;
defining a loss function, iteratively solving the first folding matrix and the second folding matrix according to the loss function, and decomposing the first folding matrix and the second folding matrix to obtain a plurality of test feature vectors, where the loss function Loss is expressed as:
Loss = Σ_(i,j) ( m_ij − Σ_k p_ik·q_kj )² + λ1·‖p_i‖² + λ2·‖q_j‖²
where m_ij is the element in row i and column j of the association matrix, p_ik is the element in row i and column k of the first folding matrix, q_kj is the element in row k and column j of the second folding matrix, p_i denotes all elements of row i of the first folding matrix, q_j denotes all elements of column j of the second folding matrix, and λ1 and λ2 are regularization coefficients.
2. The advertisement click rate prediction method according to claim 1, wherein the step of inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate comprises:
initializing the model weights of the preset neural network prediction model through genetic operators, inputting the training feature vectors into the preset neural network prediction model for training, and calculating the corresponding training errors;
calculating the fitness of each genetic operator according to its training error, evaluating the genetic operators according to the fitness, and selecting the superior genetic operators;
sequentially applying replication, crossover and mutation to the superior genetic operators, so as to iteratively optimize the preset neural network prediction model a plurality of times and obtain a plurality of iteration operators;
selecting the iteration operator with the minimum training error as the final operator, mapping the final operator to the preset neural network prediction model to obtain the final weights, and loading the final weights into the preset neural network prediction model to obtain the training neural network prediction model;
and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate.
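The genetic initialization of model weights in claim 2 can be sketched as a toy genetic algorithm over weight vectors; the population size, mutation scale, selection scheme and fitness definition are assumptions, not details fixed by the claim:

```python
import random

def evolve_weights(loss, dim, pop_size=30, gens=40, mut_rate=0.2, seed=1):
    """Toy genetic search for initial model weights: fitness is the (inverse)
    training error, the fitter half is kept, and offspring are produced by
    one-point crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)               # lower training error = higher fitness
        elite = pop[: pop_size // 2]     # the 'superior genetic operators'
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(dim)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:  # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.1)
            children.append(child)
        pop = elite + children
    return min(pop, key=loss)            # the 'final operator'
```

The returned weight vector would then be mapped onto the network as its initial weights before gradient training.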
3. An advertisement click-through rate prediction system, the system comprising:
the sample dividing module is used for obtaining running log data of advertisements to construct an original sample set, and dividing the original sample set into a test sample set and a training sample set according to a preset proportion;
the processing module is used for carrying out migration processing on the test sample set to obtain a processed test feature set;
the conversion module is used for carrying out feature vector conversion on the processing test feature set so as to obtain a test feature vector;
the prediction module is used for sequentially carrying out migration processing and feature vector conversion on the training sample set to obtain a training feature vector, inputting the training feature vector into a preset neural network prediction model for training to obtain a training neural network prediction model, and inputting the test feature vector into the training neural network prediction model to output a prediction result of the advertisement click rate;
The processing module comprises:
the feature screening sub-module is used for calculating the feature values and attribute values of all feature data in the test sample set, selecting the feature data whose feature value is null or whose attribute values are all identical as common features, and eliminating the common features from the test sample set to obtain key features;
the similarity matrix construction submodule is used for constructing a similarity matrix A based on the key features according to a K-nearest-neighbor algorithm:

A_ij = exp( −‖x_i − x_j‖² / (d_i · d_j) )

where K denotes the number of neighbors, d_i denotes the Euclidean distance from the i-th key feature to its K-th nearest neighbor, and d_j denotes the Euclidean distance from the j-th key feature to its K-th nearest neighbor;
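A sketch of a K-nearest-neighbor Gaussian similarity matrix of the kind described above, assuming a self-tuning kernel in which each scale is the distance to the K-th nearest neighbor; since the rendered formula is lost from the source, this specific kernel form is an assumption:

```python
import math

def similarity_matrix(points, K):
    """Self-tuning Gaussian similarity over key features: sigma_i is the
    Euclidean distance from point i to its K-th nearest neighbour."""
    n = len(points)

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    sigma = []
    for i in range(n):
        ds = sorted(dist(points[i], points[j]) for j in range(n) if j != i)
        sigma.append(ds[K - 1])       # distance to the K-th nearest neighbour
    return [[1.0 if i == j else
             math.exp(-dist(points[i], points[j]) ** 2 / (sigma[i] * sigma[j]))
             for j in range(n)] for i in range(n)]
```

Nearby key features receive similarities close to 1, distant ones close to 0, and the matrix is symmetric, as required for the adjacency graph built next.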
the migration submodule is used for constructing an adjacency graph according to the similarity matrix, solving selection features based on the adjacency graph, and carrying out migration mapping processing on the selection features to obtain processing test features;
the migration submodule includes:
the adjacency unit is used for constructing an adjacency graph based on the key features and determining the diagonal matrix C corresponding to the adjacency graph:

C_ii = Σ_{j=1}^{n} A_ij

where A_ij denotes the similarity matrix entry corresponding to the i-th and j-th key features, and n denotes the number of key features;
a feature determination unit for determining the Laplacian matrix L = C − A of the adjacency graph based on the diagonal matrix C, and solving according to the Laplacian matrix to obtain a plurality of selection features f:

L f = λ C f

where λ is an eigenvalue, x denotes the key features, and x^T is the transposed matrix of the key features;
an error calculation unit for migration-mapping the selection features into a feature view and calculating the mapping error e of the selection features:

e = Σ_{i=1}^{m} ‖ y_i − P x_i ‖²

where m is the number of selection features, x_i is the i-th selection feature before mapping, y_i is the i-th selection feature after mapping, and P is the mapping matrix for feature migration;
the error comparison unit is used for selecting the selection features whose mapping error is smaller than the error threshold to form the processing test feature set;
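The degree-matrix, Laplacian and mapping-error steps of the migration submodule can be sketched as follows; solving the generalized problem via C⁻¹L and the exact form of the error are assumptions filled in where the source formulas are lost:

```python
import numpy as np

def selection_features(A):
    """Degree matrix C, graph Laplacian L = C - A, then the generalized
    eigenproblem L v = lambda C v, solved here via inv(C) @ L."""
    C = np.diag(A.sum(axis=1))                 # diagonal degree matrix
    L = C - A                                  # Laplacian of the adjacency graph
    vals, vecs = np.linalg.eig(np.linalg.inv(C) @ L)
    order = np.argsort(vals.real)              # smallest eigenvalues first
    return vals.real[order], vecs.real[:, order]

def mapping_error(X_before, X_after, P):
    """Summed squared error between migrated features P @ x and their images y."""
    return sum(float(np.linalg.norm(y - P @ x)) ** 2
               for x, y in zip(X_before.T, X_after.T))
```

Eigenvectors with small eigenvalues vary smoothly over the similarity graph; features whose mapping error stays below the threshold would be kept as the processing test feature set.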
the conversion module comprises:
the extraction sub-module is used for extracting a positive sample data set of the processing test feature set, selecting grouping features in the positive sample data set, segmenting the positive sample data set based on the grouping features to obtain a plurality of sub-data sets, and selecting sequence features in each sub-data set;
the combination determination submodule is used for calculating the difference degree and the coverage rate based on the grouping feature and the sequence feature, and carrying out feature grouping based on the difference degree and the coverage rate so as to obtain a plurality of feature combinations consisting of grouping features and sequence features;
A vector generation sub-module for extracting a context sequence in each of the feature combinations based on the sequence features and generating a number of test feature vectors based on the context sequence;
the combination determination submodule includes:
the difference degree calculation unit is used for calculating the distribution area of the sequence features corresponding to each sub-data set, calculating, for each pair of sub-data sets, the proportion of the non-overlapping area, and taking the average of all the non-overlapping area proportions as the difference degree;
the coverage rate calculation unit is used for calculating the Lorenz curves of the sequence features for the processing test feature set and for the positive sample data set, and taking the ratio between the areas under the two Lorenz curves as the coverage rate;
the vector generation submodule includes:
a sequence extracting unit, configured to extract a sequence feature corresponding to each sub-dataset in the feature combination as a context sequence;
the association matrix generation unit is used for counting the number of co-occurrences of each sub-data set under the sequence features in the context sequence, and generating an association matrix based on the co-occurrence counts;
a folding unit, configured to diagonalize the association matrix M of order n as M = Q Λ Q^T, arrange the eigenvalues of the diagonal matrix Λ in descending order from top left to bottom right, select the largest top G eigenvalues to obtain a conversion matrix T, and split and fold the conversion matrix T into an n×G matrix W and the transposed matrix H^T of an n×G matrix H, such that T = W H^T, where W is the first folding matrix and H is the second folding matrix;
a decomposition unit, configured to define a loss function, iteratively solve the first folding matrix and the second folding matrix according to the loss function, and decompose the first folding matrix and the second folding matrix to obtain a plurality of test feature vectors, wherein the expression of the loss function L is:

L = Σ_i Σ_j ( M_ij − Σ_k W_ik H_jk )² + λ Σ_i ‖W_i‖² + μ Σ_j ‖H_j‖²

where M_ij is the element in row i, column j of the association matrix, W_ik is the element in row i, column k of the first folding matrix, H_jk is the element in row j, column k of the second folding matrix, W_i is the i-th row of the first folding matrix, H_j is the j-th row of the second folding matrix, and λ and μ are regularization coefficients.
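The co-occurrence counting behind the association matrix of the vector generation submodule can be sketched as pair counting over context sequences; representing the sparse matrix as a dictionary of unordered pairs is an implementation assumption:

```python
from collections import defaultdict
from itertools import combinations

def association_matrix(context_sequences):
    """Count how often every unordered pair of items co-occurs within a
    context sequence; the result doubles as a sparse association matrix."""
    counts = defaultdict(int)
    for seq in context_sequences:
        for a, b in combinations(sorted(set(seq)), 2):  # each pair once per sequence
            counts[(a, b)] += 1
    return dict(counts)
```

The dense n×n form of this matrix is what the folding unit above would then diagonalize and factor.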
4. A computer comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the advertisement click rate prediction method of any one of claims 1 to 2 when the computer program is executed.
5. A readable storage medium, wherein a computer program is stored on the readable storage medium, which when executed by a processor, implements the advertisement click rate prediction method according to any one of claims 1 to 2.
CN202310667085.8A 2023-06-07 2023-06-07 Advertisement click rate prediction method, system, computer and readable storage medium Active CN116402554B (en)

Priority application: CN202310667085.8A, filed 2023-06-07 (priority date 2023-06-07).

Publications: CN116402554A (en), published 2023-07-07; CN116402554B (en), granted 2023-08-11.

Family ID: 87018381; family application CN202310667085.8A (Active, CN).

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440512A (en) * 2013-09-17 2013-12-11 西安电子科技大学 Identifying method of brain cognitive states based on tensor locality preserving projection
CN103514443A (en) * 2013-10-15 2014-01-15 中国矿业大学 Single sample face identification transfer learning method based on LPP feature extraction
CN106682089A (en) * 2016-11-26 2017-05-17 山东大学 RNNs-based method for automatic safety checking of short message
CN108874914A (en) * 2018-05-29 2018-11-23 吉林大学 A kind of information recommendation method based on the long-pending and neural collaborative filtering of picture scroll
CN109858972A (en) * 2019-02-13 2019-06-07 重庆金窝窝网络科技有限公司 The prediction technique and device of ad click rate
CN111612243A (en) * 2020-05-18 2020-09-01 湖南大学 Traffic speed prediction method, system and storage medium
CN112464638A (en) * 2020-12-14 2021-03-09 上海爱数信息技术股份有限公司 Text clustering method based on improved spectral clustering algorithm
CN113409090A (en) * 2021-07-05 2021-09-17 中国工商银行股份有限公司 Training method, prediction method and device of advertisement click rate prediction model
CN113705772A (en) * 2021-07-21 2021-11-26 浪潮(北京)电子信息产业有限公司 Model training method, device and equipment and readable storage medium
CN114037931A (en) * 2021-10-19 2022-02-11 仲恺农业工程学院 Multi-view discrimination method of self-adaptive weight
CN115018814A (en) * 2022-06-29 2022-09-06 浙江理工大学 Textile flaw image classification method based on intra-class difference suppression discrimination dictionary learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11762730B2 (en) * 2021-01-15 2023-09-19 Adobe Inc. Selection of outlier-detection programs specific to dataset meta-features


Non-Patent Citations (1)

Title
Research on Text Sentiment Analysis for Recommender Systems; Wei Min; China Masters' Theses Full-text Database, Information Science and Technology (No. 3); page 26, paragraph 2 to page 29, paragraph 2 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant