CN113098916A - Information pushing method and device based on network behavior data - Google Patents

Information pushing method and device based on network behavior data

Info

Publication number
CN113098916A
Authority
CN
China
Prior art keywords
data
training
model
network behavior
sub
Prior art date
Legal status
Granted
Application number
CN201911338660.XA
Other languages
Chinese (zh)
Other versions
CN113098916B (en)
Inventor
马超
金常佳
赵贵新
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Liaoning Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Liaoning Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Liaoning Co Ltd
Priority to CN201911338660.XA
Publication of CN113098916A
Application granted
Publication of CN113098916B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/55: Push-based network services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/147: Network analysis or design for predicting network behaviour


Abstract

The invention discloses an information pushing method and device based on network behavior data. The method comprises: acquiring network behavior data of a user and selecting training data from it; dividing the training data into n parts to obtain n parts of sub-training data; and setting initial values for the model variables of a neural network model. In each training round, n-1 parts of the sub-training data serve as the training set and the remaining part serves as the test set; the training and test sets are selected cyclically and the test errors are accumulated. If the accumulated error of the n cycles does not satisfy the termination condition, the model variable values are updated and the next round of cyclic training is performed; otherwise the target model variable values and the target neural network model constructed from them are determined, the user traffic characteristics are predicted with that model, and information is pushed according to the prediction result. By setting different model variable values and cyclically selecting the test and training sets for model training, the accuracy of the training result can be improved, which facilitates accurate information pushing.

Description

Information pushing method and device based on network behavior data
Technical Field
The invention relates to the technical field of internet, in particular to an information pushing method and device based on network behavior data.
Background
With the advance of global informatization, information technology has developed rapidly, and the computerization of society has produced large amounts of data in every corner of our lives. Operators accumulate large volumes of user internet behavior data; lacking powerful tools, people can only look on helplessly at data of such scale, because the rich information hidden behind it cannot be explored by manual analysis alone. To extract information from such massive data, a reasonable processing solution is needed. It is therefore important to mine the large amounts of data generated by users, complete user label setting and user portraits, and then push information according to the users' behavior characteristics.
In the prior art, the common practice is to train a prediction model to analyze data and obtain a prediction result. However, existing data mining methods often rely on a large number of training samples with known calibration results to improve the accuracy of model training, which increases the difficulty of training. If such samples are not available, an accurate prediction model cannot be obtained through training, and the accuracy of information pushing based on the trained prediction model is also reduced.
Disclosure of Invention
In view of the above, the present invention is proposed to provide an information pushing method and apparatus based on network behavior data, which overcomes or at least partially solves the above problems.
According to one aspect of the invention, an information pushing method based on network behavior data is provided, which includes:
step S1, acquiring network behavior data of a user, and selecting training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
step S2, setting the initial value of the model variable of the neural network model;
step S3, setting a training counter i with an initial value of 1;
step S4, selecting 1 unselected part of sub-training data from the n parts of sub-training data as a test data set, and using the rest n-1 parts of sub-training data as a training data set;
step S5, taking the network behavior characteristics of the n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling results of the n-1 parts of sub-training data as target output data; training the neural network model with the training input data, the target output data and the current model variable values to obtain a trained neural network model;
step S6, inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into the trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of that 1 part of sub-training data; assigning i+1 to i and judging whether i is smaller than n: if so, jumping to execute step S4; if not, executing step S7;
step S7, accumulating the errors of n times of training to obtain the accumulation result corresponding to the current model variable value; judging whether the termination condition is met or not according to the accumulation result, if not, updating the current model variable value to obtain an updated model variable value, and skipping to execute the step S3; if yes, go to step S8;
step S8, inquiring the variable value of the target model and the target neural network model constructed by the variable value; and predicting the user traffic characteristics of the network behavior data by using the target neural network model, and pushing information of the user according to a predicted output result of the traffic characteristics.
According to another aspect of the present invention, there is provided an information pushing apparatus based on network behavior data, including:
the acquisition module is suitable for acquiring network behavior data of a user and selecting training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
a first setting module adapted to set initial values of model variables of the neural network model;
the second setting module is suitable for setting the training counter i with an initial value of 1;
a training module, adapted to select 1 part of unselected sub-training data from the n parts of sub-training data as a test data set, and use the remaining n-1 parts of sub-training data as a training data set; take the network behavior characteristics of the n-1 parts of sub-training data in the training data set as training input data, and take the network behavior labeling results of the n-1 parts of sub-training data as target output data; and train the neural network model with the training input data, the target output data and the current model variable values to obtain a trained neural network model;
the test module is suitable for inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of 1 part of sub-training data;
the first judgment module is suitable for assigning i+1 to i and judging whether i is smaller than n: if so, triggering the training module to execute; if not, triggering the second judgment module to execute;
the second judgment module is suitable for accumulating the errors of n times of training to obtain an accumulation result corresponding to the current model variable value; judging whether a termination condition is met or not according to the accumulation result, and if not, triggering an updating module to execute; if yes, triggering the query module to execute;
the updating module is suitable for updating the current model variable value to obtain an updated model variable value and triggering the second setting module to execute the updated model variable value;
the query module is suitable for querying the variable value of the target model and the target neural network model constructed by the variable value;
and the pushing module is suitable for predicting the user traffic characteristics of the network behavior data by using the target neural network model and pushing information to the user according to the predicted output result of the traffic characteristics.
According to yet another aspect of the present invention, there is provided a computing device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the information pushing method based on the network behavior data.
According to still another aspect of the present invention, a computer storage medium is provided, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform an operation corresponding to the information pushing method based on network behavior data as described above.
According to the information pushing method and device based on network behavior data, for each set of model variable values one part of data is cyclically selected as the test set and the remaining n-1 parts serve as the training set, so that accuracy can be effectively improved when learning from limited sample data. As long as the termination condition is not met, the model variable values are continually updated and the updated values are trained; once the termination condition is reached, the model variable values that minimize the error (or bring it close to 0) are selected from the multiple candidate sets as the final model variable values of the neural network model. This improves the accuracy of the training result and facilitates accurate information pushing.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of an embodiment of the information pushing method based on network behavior data according to the invention;
FIG. 2 is a schematic structural diagram of an embodiment of an information pushing apparatus based on network behavior data according to the present invention;
FIG. 3 shows a schematic diagram of a computing device of an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flowchart of an embodiment of the information pushing method based on network behavior data according to the present invention. As shown in fig. 1, the method comprises the steps of:
step S110, acquiring network behavior data of a user, and selecting training data from the network behavior data; the training data is divided into n parts to obtain n parts of sub-training data.
In the invention, the network behavior data of a user is mined to obtain the traffic characteristics of the user. The network behavior data comprises data reflecting the user's internet surfing information and/or information reflecting the user's location track. The traffic characteristics refer to information reflecting the user's traffic preferences; for example, if the user consumes a large amount of traffic watching videos, the user's traffic characteristics include a preference for video, and may further include the traffic value consumed on video.
Specifically, after the user network behavior data is obtained, a part of it is selected as training data for training the prediction model; this part may account for only a very small proportion of the network behavior data, for example 10%. By dividing the training data, n parts of sub-training data are obtained, which can serve as n training samples for subsequent training. The training data may be divided into n parts evenly or randomly; this is not limited in the present invention. In practice the network behavior data is disordered, and partitioning yields individual samples, which benefits training.
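The even or random division into n parts can be sketched as follows (the function name and the choice of near-equal part sizes are assumptions):

```python
import random

def split_into_n(training_data, n, shuffle=False, seed=None):
    """Divide the selected training data into n near-equal sub-training sets.
    shuffle=False gives the even (in-order) division; shuffle=True the random one."""
    data = list(training_data)
    if shuffle:
        random.Random(seed).shuffle(data)
    k, r = divmod(len(data), n)
    parts, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)      # spread the remainder evenly
        parts.append(data[start:start + size])
        start += size
    return parts
```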
In some optional embodiments of the present invention, the behavior field in the network user behavior data includes a behavior occurrence time field, a behavior occurrence location field, an access object field and/or a behavior bitstream field, where a field value in the behavior occurrence time field is time information of an internet behavior, a field value in the behavior occurrence location field is location information of the internet behavior, a field value in the access object field is information of a website and/or a page visited by the internet behavior, and the behavior bitstream field is a traffic value consumed by the internet behavior; and the traffic characteristics comprise a traffic preference type, a traffic value of the preference type and/or a traffic preference place, wherein the traffic preference type is a behavior type of which the traffic usage ratio exceeds a preset ratio, and the behavior type can be, for example, watching video, playing games, reading books, shopping and the like.
In some optional embodiments of the present invention, after the network behavior data of the user is obtained, data-level conversion processing is performed on each behavior field value in the network behavior data to obtain converted network behavior data, in which each behavior field value lies within a preset magnitude range. In the raw network behavior data there are often order-of-magnitude differences between field values of different behavior fields, or of the same behavior field across different behaviors; for example, some traffic values are at the kB level and some at the GB level. Data-level conversion processing converts each behavior field value into a preset magnitude range, so as to normalize the data and effectively improve the effect and accuracy of the subsequent data mining process. The preset magnitude range can be set according to actual requirements. For example, if the preset magnitude range is [0, 10], then when a behavior field value is judged to lie outside this range it is multiplied by 10^n, where the value of n is determined by the difference in orders of magnitude (e.g. converting thousands to [0, 10] gives n = -3; converting tens of thousands to [0, 10] gives n = -4), yielding the converted field value. The training data set is then selected from the converted network behavior data.
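A minimal sketch of the data-level conversion described above, assuming power-of-ten scaling into the preset range [0, 10); the function names are illustrative:

```python
def to_magnitude_range(value, high=10.0):
    """Return (converted, n) with converted = value * 10**n inside [0, high);
    n records the scaling so the prediction can later be restored."""
    v, n = float(value), 0
    while v >= high:
        v /= 10.0
        n -= 1
    return v, n

def from_magnitude_range(converted, n):
    """Inverse operation applied to a prediction result: undo the 10**n scaling."""
    return converted * 10 ** (-n)
```

For example, a thousands-level value such as 5000 is converted with n = -3, matching the rule stated in the text.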
It should be noted that, in these alternative embodiments, the network behavior data is subjected to data-level conversion processing, and the subsequent training and prediction are both performed on the converted network behavior data. Accordingly, after a prediction result is obtained, the inverse of the data-level conversion (i.e. multiplying by 10^-n) needs to be performed to obtain the true prediction value.
Or, after the data-level conversion processing, the converted network behavior data is preprocessed, including normalization and factorization, cleaning and filling of missing values, smoothing of noisy data, and identification and deletion of outliers.
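The cleaning steps above (filling missing values, removing outliers) might look like the following sketch; the mean-fill and 3-sigma rules are illustrative assumptions, since the text does not fix specific rules:

```python
def clean_field(values, sigma=3.0):
    """Clean one behavior field: fill None entries with the field mean,
    then drop values further than sigma standard deviations from the mean."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    filled = [mean if v is None else v for v in values]
    var = sum((v - mean) ** 2 for v in known) / len(known)
    std = var ** 0.5
    return [v for v in filled if abs(v - mean) <= sigma * std]
```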
Step S120, setting initial values of model variables of the neural network model.
Before training the neural network model, it is necessary to perform network initialization and set variable values of respective model variables required for the neural network model.
In step S130, the training counter i is set, with an initial value of 1.
In the invention, through batch planning, the training data is divided evenly into n parts of sub-training data; in each training round, n-1 parts serve as the training data set and the remaining part as the test data set. Whether the n training rounds are finished is decided by the training counter i, whose initial value is set to 1 before the first round.
In step S140, 1 unselected sub-training data is selected from the n sub-training data as a test data set, and the remaining n-1 sub-training data is used as a training data set.
Specifically, in each cycle 1 part of sub-training data is selected from the n parts of sub-training data as the test data set, and the remaining n-1 parts serve as the training data set, so that n rounds of training can be performed cyclically for the currently set model variable values and an accurate training result can be obtained with limited samples.
For example, with 10 sets of sub-training data, the 1st sub-training set may be selected as the test data set in the 1st round; in the 2nd round the 2nd sub-training set is selected as the test data set from the not-yet-selected sets (the 2nd to 10th); in the 3rd round the 3rd sub-training set is selected from the not-yet-selected sets (the 3rd to 10th); and so on, so that the test data sets of the n rounds are selected cyclically and the corresponding training data sets are obtained.
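The cyclic selection just described can be illustrated as follows (the helper name is assumed): in round i, the i-th sub-training set is the test set and the other n-1 sets form the training set.

```python
def fold_splits(sub_sets):
    """Yield (test_set, training_sets) pairs for the n training rounds."""
    for i, test_set in enumerate(sub_sets):
        training_sets = sub_sets[:i] + sub_sets[i + 1:]
        yield test_set, training_sets
```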
Step S150, taking the network behavior characteristics of the n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling result of the n-1 parts of sub-training data as target output data; and training the neural network model by using the training input data, the training output data and the current model variable value to obtain the trained neural network model.
The network behavior feature of each sub-training data is the feature reflected by the field value of each behavior field, and the network behavior labeling result of each sub-training data is the labeling result of the traffic feature of the user, including the labeling of the traffic preference type, the traffic value of the preference type and/or the traffic preference location.
Specifically, after the neural network model is initialized and the model variable values are set, the network behavior characteristics of the n-1 parts of sub-training data in the training data set are used as training input data, the network behavior labeling results of the n-1 parts of sub-training data are used as target output data, and both are input into the neural network model for training, obtaining the trained neural network model.
Step S160, inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of the 1 part of sub-training data; assigning i +1 to i, judging whether i is smaller than n, and if so, skipping to execute the step S140; if not, go to step S170.
Specifically, for each selected test data set (i.e., 1 part of sub-training data), the test data set is input into the trained neural network model for testing to obtain the model's test output data (the test result), and the error between the test result and the network behavior labeling result of that part of sub-training data is calculated; corresponding to the n selected test data sets, n error values are obtained. Each time a round of training and testing is completed, the counter is incremented and the result of i+1 is assigned to i. If the new i is smaller than n, the n rounds of training are not yet finished, and execution jumps to step S140 for the next round of training and testing; if i is greater than or equal to n, the n rounds of cyclic training are finished, and step S170 is executed.
Furthermore, after the test data sets are selected cyclically and the n errors are obtained through training, the gradient is computed from the average value of the n errors, so that the gradient follows the local improvement direction more accurately and the probability of falling into a local minimum is reduced. In the data mining process an LMS-BP neural network (least-mean-square BP neural network) is adopted, and a momentum factor η_m is added to accelerate the convergence rate of the algorithm. In the adaptive adjustment of the network weights, the previous weight increments ΔV(t-1) and ΔW(t-1) are reused, and the LMS algorithm together with the momentum term adjusts the network weights V and W according to the following rule:

ΔV(t) = -η·(∂E/∂V) + η_m·ΔV(t-1)

ΔW(t) = -η·(∂E/∂W) + η_m·ΔW(t-1)

where -η·(∂E/∂V) is the correction weight of V, -η·(∂E/∂W) is the correction weight of W, and η_m·ΔV(t-1) and η_m·ΔW(t-1) update the weights of V and W using the momentum factor, respectively.
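Applied to a single weight vector, the momentum-term update above can be sketched as follows; the values of eta and eta_m and the flat-list weight representation are illustrative assumptions (the same rule applies to both V and W):

```python
def momentum_step(W, grad, prev_delta, eta=0.1, eta_m=0.9):
    """One momentum update: delta_W(t) = -eta*grad + eta_m*delta_W(t-1).
    Returns the updated weights and the new increment to reuse next step."""
    delta = [-eta * g + eta_m * d for g, d in zip(grad, prev_delta)]
    W_new = [w + dw for w, dw in zip(W, delta)]
    return W_new, delta
```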
Step S170, accumulating the errors of n times of training to obtain an accumulation result corresponding to the current model variable value; judging whether a termination condition is met according to the accumulation result, if not, executing a step S180; if yes, go to step S190.
After n rounds of cyclic training, the reasonableness of the set model variable values is judged by calculating the accumulation of the n test errors: the larger the accumulated error, the less reasonable the setting of the model variable values, and vice versa.
Specifically, after each set of n tests is completed, whether the termination condition is met is judged. If the termination condition is not met, the model variable values need to be updated and the next n rounds of cyclic training are performed with the updated values; if the termination condition is met, the model variable values no longer need to be updated and the cyclic training stops.
In some optional embodiments of the present invention, judging whether the termination condition is satisfied according to the accumulation result may be: judging whether the accumulation result is smaller than a preset error accumulation value, and if so, judging that the termination condition is met. The preset error accumulation value is generally a value close to 0; when the accumulation result falls below it, the test error of the neural network model trained with the current model variable values is considered extremely small, indicating that suitable model variable values have been found, so the updating of model variable values and the training with updated values can be terminated.
Alternatively, in other alternative embodiments of the present invention, the determining whether the termination condition is satisfied according to the accumulation result may be: and judging whether the time for obtaining the accumulation result reaches the preset timing time, if so, judging that a termination condition is met. In these alternative embodiments, the timing of the termination is defined by a preset timing time, and when the preset timing time is reached, it is assumed that the loop training has been performed for a sufficient number of model variable values from which the target model variable value is selected, at which point the updating of the model variable values may be terminated and the training in accordance with the updated model variable values may be terminated.
It should be noted that, in some other embodiments of the present invention, the above two manners of judging the termination condition may also be used in combination: when either one is satisfied, the updating of the model variable values and the cyclic training are terminated.
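A hedged sketch of this combined termination test; the error threshold and timing duration are illustrative assumptions:

```python
import time

def should_terminate(accumulated_error, start_time, eps=1e-6, time_limit=3600.0):
    """Terminate when the accumulated error drops below the preset value
    OR the preset timing duration has elapsed, whichever comes first."""
    if accumulated_error < eps:
        return True
    return (time.monotonic() - start_time) >= time_limit
```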
Step S180: the current model variable value is updated to obtain an updated model variable value, and step S130 is skipped to execute.
Step S190, inquiring variable values of the target model and the target neural network model constructed by the variable values; and predicting the user traffic characteristics of the network behavior data by using the target neural network model, and pushing information to the user according to the predicted output result of the traffic characteristics.
The target model variable value is a model variable value of the neural network model which can be finally used for prediction, and the target neural network model is the neural network model which can be finally used for prediction.
Specifically, when the termination condition is satisfied, the target model variable values are selected from the multiple sets of model variable values for which cyclic training has been performed. If the termination condition was met by the accumulation result being smaller than the preset error accumulation value, the model variable values corresponding to that accumulation result are queried and determined as the target model variable values; for example, when the model variables are set to the values [k1, k2, k3] and the accumulated error after n rounds of cyclic training is 0, which is smaller than the preset accumulated error value 10^-6, the model variable values [k1, k2, k3] corresponding to the error result 0 are determined as the target model variable values. Alternatively, if the termination condition was met by reaching the preset timing time, the accumulation results corresponding to the multiple sets of model variable values obtained before the preset timing time are compared to find the minimum accumulation result, and the model variable values corresponding to this minimum are queried as the target model variable values. Selecting the model variable values with the minimum accumulation result as the target values reduces errors in subsequent prediction as much as possible and improves the accuracy of data mining.
After the target model variable value is queried from the multiple sets of model variable values, it is used as the set value of the model variables of the neural network model, and the neural network model finally used for prediction is obtained through training, namely the target neural network model.
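The two selection rules above (threshold termination and timer termination) can be sketched as follows. This is an illustrative sketch only; the names `choose_target`, `candidates`, and `EPS` are hypothetical, and each candidate is assumed to be a pair of (model variable values, accumulated error).

```python
EPS = 1e-6  # preset error accumulation value (illustrative)

def choose_target(candidates, timed_out=False):
    """candidates: list of (model_variable_values, accumulated_error) pairs."""
    if not timed_out:
        # Threshold termination: return the variable set whose accumulated
        # error fell below the preset error accumulation value.
        for variables, err in candidates:
            if err < EPS:
                return variables
        return None
    # Timer termination: among all sets trained before the preset timing
    # time, pick the one with the minimum accumulated error.
    return min(candidates, key=lambda c: c[1])[0]

# e.g. three candidate sets [k1, k2, k3] with their accumulated errors
cands = [([0.2, 0.5, 0.1], 0.03), ([0.3, 0.4, 0.2], 0.0), ([0.1, 0.1, 0.1], 0.07)]
print(choose_target(cands))                  # the set whose error is below EPS
print(choose_target(cands, timed_out=True))  # the set with minimum error
```

In both cases the chosen variable values then parameterize the target neural network model used for prediction.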
Further, after the target model variable value and the target neural network model are obtained, the target neural network model is used to predict the user traffic characteristics of the network behavior data, so as to obtain the traffic preference type, the traffic value of the preference type and/or the traffic preference location, and information can then be pushed to the user according to the prediction. For example, if the user is predicted to like watching game videos, game-related information (such as game traffic) can be pushed to the user. In addition, the traffic preference location reflects the user's internet access location, so information pushing can be performed in a targeted manner, for example pushing campus traffic, subway traffic and the like.
In addition, in some embodiments in which the pushed information is flow information, before the flow information is pushed, it is determined whether the current push time falls within a preset period; if so, the flow compensation data for the current push time is estimated from a plurality of historical flow data within that preset period. The preset period includes, but is not limited to, holidays or public opinion hotspot events, such as the World Cup, the National Day holiday, and the like. If the current push time falls within the preset period, the flow compensation data for the current push time is estimated from the plurality of flow data within the preset period that are closest to the current push time. Correspondingly, when the flow information is pushed, information is pushed to the user according to both the predicted output result of the flow characteristics and the flow compensation data at the current push time.
For example, the 2020 National Day week flow compensation data is f(X4, X5, X6), where X4 is the National Day week flow data of 2019, X5 is the National Day week flow data of 2018, and X6 is the National Day week flow data of 2017. After the prediction output result is obtained and the user's flow preference type is found to be video watching, video flow can be pushed to the user according to the National Day week flow compensation data when the National Day week flow is pushed.
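A minimal sketch of the compensation step described above. The patent only states that f combines the historical values X4, X5, X6; the simple mean used here, and the names `flow_compensation`, `x4`, `x5`, `x6`, are assumptions for illustration.

```python
def flow_compensation(history):
    """Estimate flow compensation for the current push time from the flow
    data of the same preset period (e.g. National Day week) in recent years.
    history: most recent values first; a plain average is assumed for f."""
    return sum(history) / len(history)

# x4: 2019 National Day week flow, x5: 2018, x6: 2017 (units arbitrary)
x4, x5, x6 = 12.0, 10.0, 8.0
print(flow_compensation([x4, x5, x6]))  # 10.0
```

The result would then be combined with the predicted flow-characteristic output when pushing flow information to the user.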
Therefore, the scheme of the invention can achieve at least the following technical effects:
firstly, before data training is started, the magnitude of the original data is normalized so that all data values fall within the same order of magnitude, and relevant processing operations are then performed on the obtained data, which improves the effect of data preprocessing and effectively improves the effect and accuracy of the subsequent data mining process;
secondly, for the case of limited sample data, a K-fold cross-validation cycle is introduced: the data are divided randomly or evenly into K parts, with K-1 parts serving as the training set and the remaining part as the test set, and the cycle is repeated K times with the test set selected in rotation; the average of the K backward errors is thus obtained and the gradient is solved from this average, so that the local improvement direction is more accurate and precision is effectively improved when learning from limited sample data;
thirdly, prediction is performed by combining the data reflecting the user's internet surfing behavior with the internet access location, so as to obtain the user's flow preference type, the flow value of the preference type and/or the flow preference location, which facilitates accurate flow pushing;
fourthly, high-flow trend prediction compensation is introduced, and compensation prediction is performed on the flow during holidays and public opinion hotspot events for use in flow pushing, which improves the accuracy of flow information pushing.
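The K-fold cycle of the second effect can be sketched as follows. This is a toy illustration under stated assumptions: the linear "model", the squared-error test routine, and all names (`kfold_accumulated_error`, `mse`) are stand-ins for the neural network described above, and the per-fold training step is elided with a comment.

```python
import random

def kfold_accumulated_error(folds, k_vars, test_fn):
    """Accumulate the test error over the n folds for one set of model
    variables: each fold serves once as the test set, the rest as training."""
    total = 0.0
    for i in range(len(folds)):
        test_set = folds[i]
        train_set = [s for j, f in enumerate(folds) if j != i for s in f]
        # (a real implementation would train the model on train_set here)
        total += test_fn(k_vars, test_set)
    return total

def mse(k_vars, samples):
    # samples: (features, label); the "model" is a dot product of
    # variables and features, standing in for the neural network
    return sum((sum(k * x for k, x in zip(k_vars, f)) - y) ** 2
               for f, y in samples) / len(samples)

random.seed(0)
# toy data generated by y = 1*x0 + 2*x1, split into n = 3 folds
data = [((a, b), a + 2 * b) for a, b in
        [(random.random(), random.random()) for _ in range(9)]]
folds = [data[0:3], data[3:6], data[6:9]]
err = kfold_accumulated_error(folds, [1.0, 2.0], mse)
print(err < 1e-6)  # True: the true variables give zero accumulated error
```

In the scheme of the invention, the accumulated (or averaged) error from such a cycle is what drives the gradient update of the model variables.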
Fig. 2 is a schematic structural diagram illustrating an embodiment of the information pushing apparatus based on network behavior data according to the present invention. As shown in fig. 2, the apparatus includes:
an obtaining module 200, adapted to obtain network behavior data of a user, and select training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
a first setting module 210 adapted to set initial values of model variables of the neural network model;
a second setting module 220, adapted to set the training times to i, where an initial value i is 1;
a training module 230 adapted to select 1 unselected sub-training data from the n sub-training data as a test data set, and to use the remaining n-1 sub-training data as a training data set; taking the network behavior characteristics of the n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling result of the n-1 parts of sub-training data as target output data; training a neural network model by using the training input data, the target output data and the current model variable value to obtain a trained neural network model;
the test module 240 is adapted to input the network behavior features of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculate an error between a test result and a network behavior labeling result of the 1 part of sub-training data;
the first judgment module 250 is suitable for assigning i +1 to i, judging whether i is smaller than n, and if so, triggering the training module to execute; if not, triggering a second judgment module to execute;
the second judgment module 260 is adapted to accumulate the errors of the n times of training to obtain an accumulation result corresponding to the current model variable value; judging whether a termination condition is met or not according to the accumulation result, and if not, triggering an updating module to execute; if yes, triggering the query module to execute;
the updating module 270 is adapted to update the current model variable value to obtain an updated model variable value, and to trigger the second setting module to execute with the updated model variable value;
a query module 280 adapted to query the target model variable values and the target neural network model constructed therewith;
the pushing module 290 is adapted to predict the traffic characteristics of the user on the network behavior data by using the target neural network model, and push information to the user according to a predicted output result of the traffic characteristics.
In an optional manner, the apparatus further comprises:
the conversion module is suitable for performing data level conversion processing on each behavior field value in the network behavior data to obtain converted network behavior data; wherein, each behavior field value in the converted network behavior data is in a preset magnitude range;
the acquisition module is further adapted to: and selecting a training data set from the converted network behavior data.
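One possible form of the data level conversion performed by the conversion module is min-max scaling of each behavior field into a preset magnitude range. The patent does not fix the exact transform, so the [0, 1] range and the name `normalize_field` are assumptions for illustration.

```python
def normalize_field(values):
    """Scale one behavior field's values into the preset range [0, 1]
    so that all fields end up in the same order of magnitude."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # a constant field carries no scale information; map it to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# e.g. a behavior field spanning several orders of magnitude
print(normalize_field([10, 1000, 100000]))
```

After conversion, training data selected from the converted network behavior data all lie in the same magnitude range, which is the precondition for the training described above.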
In an alternative, the second determination module is further adapted to: judging whether the accumulation result is smaller than a preset error accumulation value or not, and if so, judging that a termination condition is met; and/or,
and judging whether the time for obtaining the accumulation result reaches the preset timing time, if so, judging that a termination condition is met.
In an alternative approach, the query module is further adapted to: and if the accumulation result is smaller than the preset error accumulation value, inquiring the model variable value corresponding to the accumulation result and determining the model variable value as the target model variable value.
In an alternative approach, the query module is further adapted to: if the time for obtaining the accumulation result reaches the preset timing time, comparing a plurality of accumulation results of the plurality of groups of model variable values obtained before the time reaches the preset timing time to obtain the minimum accumulation result; and inquiring the model variable value corresponding to the minimum accumulation result as a target model variable value.
In an alternative manner, the push information is traffic information, and the push module is further adapted to:
judging whether the current pushing moment is within a preset period, if so, estimating the flow compensation data of the current pushing moment according to a plurality of historical flow data within the preset period;
and carrying out flow information pushing on the user according to the predicted output result of the flow characteristics and the flow compensation data at the current pushing moment.
In an optional mode, the behavior field in the network user behavior data comprises a behavior occurrence time field, a behavior occurrence place field, an access object field and/or a behavior bit stream field;
the traffic characteristics include a traffic preference type, a traffic value for the preference type, and/or a traffic preference location.
An embodiment of the invention provides a non-volatile computer storage medium in which at least one executable instruction is stored, the executable instruction causing a processor to execute the information pushing method based on network behavior data in any of the above method embodiments.
Fig. 3 is a schematic structural diagram of an embodiment of the computing device of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 3, the computing device may include: a processor (processor)302, a communication Interface 304, a memory 306, and a communication bus 308.
Wherein: the processor 302, the communication interface 304 and the memory 306 communicate with each other via the communication bus 308. The communication interface 304 is used for communicating with network elements of other devices, such as clients or other servers. The processor 302 is configured to execute the program 310, and may specifically execute the relevant steps in the above embodiments of the information pushing method based on network behavior data for a computing device.
In particular, program 310 may include program code comprising computer operating instructions.
The processor 302 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The computing device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
And a memory 306 for storing a program 310. Memory 306 may comprise high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 310 may specifically be configured to cause the processor 302 to perform the following operations:
step S1, acquiring network behavior data of a user, and selecting training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
step S2, setting the initial value of the model variable of the neural network model;
step S3, setting the training times to i, and setting the initial value i to 1;
step S4, selecting 1 unselected part of sub-training data from the n parts of sub-training data as a test data set, and using the rest n-1 parts of sub-training data as a training data set;
step S5, taking the network behavior characteristics of n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling result of the n-1 parts of sub-training data as target output data; training a neural network model by using the training input data, the target output data and the current model variable value to obtain a trained neural network model;
step S6, inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of the 1 part of sub-training data; assigning i +1 to i, judging whether i is smaller than n, if so, jumping to execute step S4; if not, go to step S7;
step S7, accumulating the errors of n times of training to obtain the accumulation result corresponding to the current model variable value; judging whether the termination condition is met or not according to the accumulation result, if not, updating the current model variable value to obtain an updated model variable value, and skipping to execute the step S3; if yes, go to step S8;
step S8, inquiring the variable value of the target model and the target neural network model constructed by the variable value; and predicting the user traffic characteristics of the network behavior data by using the target neural network model, and pushing information of the user according to a predicted output result of the traffic characteristics.
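The outer loop of steps S3 through S8 can be sketched as follows. This is an illustrative sketch under stated assumptions: the single scalar "model variable", the collapsed error function, and the gradient step stand in for the neural network and its n-fold cycle, and the names `fit` and `accumulated_error` are hypothetical.

```python
def accumulated_error(k, samples):
    # stands in for the error accumulated over the n-fold cycle (step S7)
    return sum((k * x - y) ** 2 for x, y in samples)

def fit(samples, k=0.0, lr=0.05, eps=1e-6, max_rounds=1000):
    """Repeat the training cycle, updating the model variable from the
    accumulated error until the termination condition is met."""
    for _ in range(max_rounds):
        err = accumulated_error(k, samples)
        if err < eps:            # termination condition met (step S7)
            return k             # target model variable value (step S8)
        grad = sum(2 * x * (k * x - y) for x, y in samples)
        k -= lr * grad           # update the current model variable value
    return k

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data: y = 2x
print(round(fit(samples), 3))  # converges near 2.0
```

The converged variable value plays the role of the target model variable value queried in step S8; prediction and information pushing then use the model it parameterizes.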
In an alternative, the program 310 causes the processor to:
performing data level conversion processing on each behavior field value in the network behavior data to obtain converted network behavior data; wherein, each behavior field value in the converted network behavior data is in a preset magnitude range;
and selecting a training data set from the converted network behavior data.
In an alternative, the program 310 causes the processor to:
judging whether the accumulation result is smaller than a preset error accumulation value or not, and if so, judging that a termination condition is met; and/or,
and judging whether the time for obtaining the accumulation result reaches the preset timing time, if so, judging that a termination condition is met.
In an alternative, the program 310 causes the processor to:
and if the accumulation result is smaller than the preset error accumulation value, inquiring the model variable value corresponding to the accumulation result and determining the model variable value as the target model variable value.
In an alternative, the program 310 causes the processor to:
if the time for obtaining the accumulation result reaches the preset timing time, comparing a plurality of accumulation results of the plurality of groups of model variable values obtained before the time reaches the preset timing time to obtain the minimum accumulation result; and inquiring the model variable value corresponding to the minimum accumulation result as a target model variable value.
In an alternative, where the push information is traffic information, the program 310 causes the processor to:
judging whether the current pushing moment is within a preset period, if so, estimating the flow compensation data of the current pushing moment according to a plurality of historical flow data within the preset period;
and carrying out flow information pushing on the user according to the predicted output result of the flow characteristics and the flow compensation data at the current pushing moment.
In an optional mode, the behavior field in the network user behavior data comprises a behavior occurrence time field, a behavior occurrence place field, an access object field and/or a behavior bit stream field;
the traffic characteristics include a traffic preference type, a traffic value for the preference type, and/or a traffic preference location.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (10)

1. An information pushing method based on network behavior data comprises the following steps:
step S1, acquiring network behavior data of a user, and selecting training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
step S2, setting the initial value of the model variable of the neural network model;
step S3, setting the training times to i, and setting the initial value i to 1;
step S4, selecting 1 unselected part of sub-training data from the n parts of sub-training data as a test data set, and using the rest n-1 parts of sub-training data as a training data set;
step S5, taking the network behavior characteristics of n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling result of the n-1 parts of sub-training data as target output data; training a neural network model by using the training input data, the target output data and the current model variable value to obtain a trained neural network model;
step S6, inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of the 1 part of sub-training data; assigning i +1 to i, judging whether i is smaller than n, if so, jumping to execute step S4; if not, go to step S7;
step S7, accumulating the errors of n times of training to obtain the accumulation result corresponding to the current model variable value; judging whether the termination condition is met or not according to the accumulation result, if not, updating the current model variable value to obtain an updated model variable value, and skipping to execute the step S3; if yes, go to step S8;
step S8, inquiring the variable value of the target model and the target neural network model constructed by the variable value; and predicting the user traffic characteristics of the network behavior data by using the target neural network model, and pushing information of the user according to a predicted output result of the traffic characteristics.
2. The method of claim 1, wherein after the obtaining network behavior data for the user, the method further comprises:
performing data level conversion processing on each behavior field value in the network behavior data to obtain converted network behavior data; wherein, each behavior field value in the converted network behavior data is in a preset magnitude range;
the selecting of the training data from the network behavior data specifically includes: and selecting a training data set from the converted network behavior data.
3. The method of claim 1 or 2, wherein said determining whether a termination condition is met based on said accumulated result further comprises:
judging whether the accumulation result is smaller than a preset error accumulation value or not, and if so, judging that a termination condition is met; and/or,
and judging whether the time for obtaining the accumulation result reaches the preset timing time, if so, judging that a termination condition is met.
4. The method of claim 3, wherein said querying target model variable values and the target neural network model it constructs further comprises:
and if the accumulation result is smaller than the preset error accumulation value, inquiring the model variable value corresponding to the accumulation result and determining the model variable value as the target model variable value.
5. The method of claim 3, wherein said querying target model variable values and the target neural network model it constructs further comprises:
if the time for obtaining the accumulation result reaches the preset timing time, comparing a plurality of accumulation results of the plurality of groups of model variable values obtained before the time reaches the preset timing time to obtain the minimum accumulation result; and inquiring the model variable value corresponding to the minimum accumulation result as a target model variable value.
6. The method of claim 1, wherein the push information is traffic information, and after the predicting of the user traffic characteristics of the network behavior data using the target neural network model, the method further comprises:
judging whether the current pushing moment is within a preset period, if so, estimating the flow compensation data of the current pushing moment according to a plurality of historical flow data within the preset period;
the information pushing of the user according to the predicted output result of the flow characteristics further comprises: and carrying out flow information pushing on the user according to the predicted output result of the flow characteristics and the flow compensation data at the current pushing moment.
7. The method of claim 1, wherein the behavior field in the network user behavior data comprises a behavior occurrence time field, a behavior occurrence place field, an access object field, and/or a behavior bitstream field;
the traffic characteristics include a traffic preference type, a traffic value for the preference type, and/or a traffic preference location.
8. An information pushing device based on network behavior data comprises:
the acquisition module is suitable for acquiring network behavior data of a user and selecting training data from the network behavior data; dividing the training data into n parts to obtain n parts of sub-training data;
a first setting module adapted to set initial values of model variables of the neural network model;
the second setting module is suitable for setting the training times to be i, and the initial value i is 1;
a training module, adapted to select 1 part of unselected sub-training data from the n parts of sub-training data as a test data set, and use the remaining n-1 parts of sub-training data as a training data set; taking the network behavior characteristics of the n-1 parts of sub-training data in the training data set as training input data, and taking the network behavior labeling result of the n-1 parts of sub-training data as target output data; training a neural network model by using the training input data, the target output data and the current model variable value to obtain a trained neural network model;
the test module is suitable for inputting the network behavior characteristics of 1 part of sub-training data in the test data set as test input data into a trained neural network model for testing, and calculating the error between the test result and the network behavior labeling result of 1 part of sub-training data;
the first judgment module is suitable for assigning i +1 to i, judging whether i is smaller than n or not, and triggering the training module to execute if i is smaller than n; if not, triggering a second judgment module to execute;
the second judgment module is suitable for accumulating the errors of n times of training to obtain an accumulation result corresponding to the current model variable value; judging whether a termination condition is met or not according to the accumulation result, and if not, triggering an updating module to execute; if yes, triggering the query module to execute;
the updating module is adapted to update the current model variable value to obtain an updated model variable value, and to trigger the second setting module to execute with the updated model variable value;
the query module is suitable for querying the variable value of the target model and the target neural network model constructed by the variable value;
and the pushing module is suitable for predicting the user traffic characteristics of the network behavior data by using the target neural network model and pushing information to the user according to the predicted output result of the traffic characteristics.
9. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the information pushing method based on the network behavior data in any one of claims 1-7.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the network behavior data based information pushing method according to any one of claims 1 to 7.
CN201911338660.XA 2019-12-23 2019-12-23 Information pushing method and device based on network behavior data Active CN113098916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911338660.XA CN113098916B (en) 2019-12-23 2019-12-23 Information pushing method and device based on network behavior data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911338660.XA CN113098916B (en) 2019-12-23 2019-12-23 Information pushing method and device based on network behavior data

Publications (2)

Publication Number Publication Date
CN113098916A true CN113098916A (en) 2021-07-09
CN113098916B CN113098916B (en) 2023-11-14

Family

ID=76662910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911338660.XA Active CN113098916B (en) 2019-12-23 2019-12-23 Information pushing method and device based on network behavior data

Country Status (1)

Country Link
CN (1) CN113098916B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120020216A1 (en) * 2010-01-15 2012-01-26 Telcordia Technologies, Inc. Cognitive network load prediction method and apparatus
CN103840988A (en) * 2014-03-17 2014-06-04 湖州师范学院 Network traffic measurement method based on RBF neural network
CN106408343A (en) * 2016-09-23 2017-02-15 广州李子网络科技有限公司 Modeling method and device for user behavior analysis and prediction based on BP neural network
CN106649774A (en) * 2016-12-27 2017-05-10 北京百度网讯科技有限公司 Artificial intelligence-based object pushing method and apparatus
CN107527091A (en) * 2016-10-14 2017-12-29 腾讯科技(北京)有限公司 Data processing method and device
CN107992530A (en) * 2017-11-14 2018-05-04 北京三快在线科技有限公司 Information recommendation method and electronic equipment
CN108011740A (en) * 2016-10-28 2018-05-08 腾讯科技(深圳)有限公司 Media traffic data processing method and device
CN108737130A (en) * 2017-04-14 2018-11-02 国家电网公司 Neural-network-based network traffic prediction device and method
CN109492808A (en) * 2018-11-07 2019-03-19 浙江科技学院 Parking garage remaining parking space prediction method
CN109978575A (en) * 2017-12-27 2019-07-05 中国移动通信集团广东有限公司 Method and device for mining customer traffic operation scenarios
CN110097170A (en) * 2019-04-25 2019-08-06 深圳市豪斯莱科技有限公司 Information push object prediction model acquisition method, terminal and storage medium
CN110213325A (en) * 2019-04-02 2019-09-06 腾讯科技(深圳)有限公司 Data processing method and data push method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
劳政萍: "Application and Research of Data Mining in Internet Traffic Operation", no. 08 *
王宇飞; 沈红岩: "Network Security Situation Prediction Based on an Improved Generalized Regression Neural Network", no. 03 *
穆桃; 陈伟; 陈松健: "User Classification Method Based on Multi-layer Network Traffic Analysis", no. 03 *
胡洋瑞; 陈兴蜀; 王俊峰; 叶晓鸣: "Abnormal Traffic Detection Based on Traffic Behavior Features", no. 11 *
赵振江: "Network Traffic Prediction and Research Based on a PSO-BP Neural Network", no. 01 *

Also Published As

Publication number Publication date
CN113098916B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN110366734B (en) Optimizing neural network architecture
US10235403B2 (en) Parallel collective matrix factorization framework for big data
WO2019072107A1 (en) Prediction of spending power
CN109313540B (en) Two-stage training of spoken language dialog systems
CN106445954B (en) Business object display method and device
CN109598566B (en) Ordering prediction method, ordering prediction device, computer equipment and computer readable storage medium
CN109189921B (en) Comment evaluation model training method and device
CN112148557B (en) Method for predicting performance index in real time, computer equipment and storage medium
CN111461445B (en) Short-term wind speed prediction method and device, computer equipment and storage medium
CN109726811A (en) Use priority formation neural network
CN114330863A (en) Time series prediction processing method, device, storage medium and electronic device
CN110462638A Training neural networks using posterior sharpening
CN111401940A (en) Feature prediction method, feature prediction device, electronic device, and storage medium
CN110633859A (en) Hydrological sequence prediction method for two-stage decomposition integration
CN111768019A (en) Order processing method and device, computer equipment and storage medium
CN103870563B Method and apparatus for determining the topic distribution of a given text
CN111191722A (en) Method and device for training prediction model through computer
CN110659954A (en) Cheating identification method and device, electronic equipment and readable storage medium
Shestopaloff et al. On Bayesian inference for the M/G/1 queue with efficient MCMC sampling
CN110543699A Shared vehicle travel data simulation and shared vehicle scheduling method, device and equipment
CN112256957B (en) Information ordering method and device, electronic equipment and storage medium
CN111325255B (en) Specific crowd delineating method and device, electronic equipment and storage medium
JP5835802B2 (en) Purchase forecasting apparatus, method, and program
CN111881007B (en) Operation behavior judgment method, device, equipment and computer readable storage medium
CN110348581B (en) User feature optimizing method, device, medium and electronic equipment in user feature group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant