CN111709583A - User retention time generation method and device, electronic equipment and medium


Info

Publication number: CN111709583A
Application number: CN202010561016.5A
Authority: CN (China)
Prior art keywords: feature data, processing result, data, processing, characteristic data
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111709583B (granted publication)
Inventors: 孙振邦, 周杰, 王长虎
Current Assignee: Beijing ByteDance Network Technology Co., Ltd.
Original Assignee: Beijing ByteDance Network Technology Co., Ltd.
Application filed by Beijing ByteDance Network Technology Co., Ltd.; priority to CN202010561016.5A

Classifications

    • G06Q 10/04: Administration; Management; Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N 3/045: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G06Q 30/0201: Commerce; Marketing; Market modelling; Market analysis; Collecting market data


Abstract

Embodiments of the disclosure disclose a user retention time generation method and apparatus, an electronic device, and a medium. One embodiment of the method comprises: processing numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; processing list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; generating a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and generating a user retention time based on the processing result of the day-level feature data. By combining several processing modes to determine how long a user will keep using the application, this embodiment makes the determination of user retention time more accurate and convenient.

Description

User retention time generation method and device, electronic equipment and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a user retention time generation method and apparatus, an electronic device, and a medium.
Background
Currently, enterprises that provide applications are increasingly concerned with how long a user will use an application within a predetermined future time period, i.e., the user retention time. The determined user retention time indirectly reflects the popularity of the application. However, current methods for determining user retention time usually adopt a single algorithm model, so the determined result often deviates considerably from the actual value.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a user retention time generation method, apparatus, electronic device and medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a user retention time generation method, the method comprising: processing numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; processing list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; generating a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and generating a user retention time based on the processing result of the day-level feature data.
In a second aspect, some embodiments of the present disclosure provide a user retention time generation apparatus, the apparatus comprising: a first generating unit configured to process numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; a second generating unit configured to process list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; a third generating unit configured to generate a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and a fourth generating unit configured to generate the user retention time based on the processing result of the day-level feature data.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
The above embodiments of the present disclosure have the following advantageous effect: the numerical feature data and the list feature data of the target user are processed separately, in different ways, and the retention time of the target user is then estimated by integrating the results of these processing modes, so that the obtained user retention time is more accurate. This provides an effective means of predicting how well users will be retained within a predetermined future time.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of one application scenario of a user retention time generation method, according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a user retention time generation method according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of data normalization, according to some embodiments of the present disclosure;
FIG. 4 is a schematic diagram of one application scenario of a rewrite relationship correspondence table, in accordance with some embodiments of the present disclosure;
FIG. 5 is a flow diagram of further embodiments of a user retention time generation method according to the present disclosure;
FIG. 6 is a model training diagram of a deep neural network according to further embodiments of the user retention time generation method of the present disclosure;
FIG. 7 is a block diagram of some embodiments of a user retention time generation apparatus according to the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 is a schematic diagram of one application scenario of a user retention time generation method according to some embodiments of the present disclosure.
As shown in fig. 1, a computing device 101 may process numerical characteristic data 102 according to a first processing method 103, and may obtain a processing result 104 of the numerical characteristic data. The computing device 101 may process the list characteristic data 105 according to the second processing method 106, and may obtain a processing result 107 of the list characteristic data. Then, the computing device 101 may process the processing result 104 of the numerical characteristic data and the processing result 107 of the list characteristic data according to the third processing manner, to obtain a processing result of the day-level characteristic data, as indicated by reference numeral 108. Thereafter, the computing device 101 may generate the user retention time 109 from the processing results of the day-level feature data.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or terminal device. When the computing device is embodied as software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. This is not specifically limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a user retention time generation method in accordance with the present disclosure is shown. The user retention time generation method comprises the following steps:
step 201, processing numerical characteristic data in the day-level characteristic data of the target user within a preset time period based on the first processing mode, and generating a processing result of the numerical characteristic data.
In some embodiments, as an example, an executing agent of the user retention time generation method (e.g., the computing device 101 shown in fig. 1) may input the numerical feature data into a numerical-processing deep neural network to obtain the processing result of the numerical feature data. The processing result of the numerical feature data may be a 512 × 256 two-dimensional matrix. The numerical-processing deep neural network may be a Recurrent Neural Network (RNN) trained with sample numerical feature data as the input and the processing result of the sample numerical feature data as the expected output. Here, the deep neural network may be a network structure containing multiple hidden layers; including multiple hidden layers can greatly improve the network's learning ability. In addition, the deep neural network adopts a many-to-one structure, meaning the network has multiple inputs and one output. The day-level feature data may include list feature data and numerical feature data. Here, the first processing mode may be a feature extraction algorithm, which may include, but is not limited to, at least one of: an ensemble model, a deep neural network, an SVM (Support Vector Machine), a K-nearest-neighbor algorithm, a decision tree, and naive Bayes.
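The following is a minimal sketch of such a many-to-one recurrent structure, assuming PyTorch (the disclosure does not name a framework); the layer sizes are illustrative, chosen to match the 50-day, 9-feature, 512 × 256 dimensions used in the examples of this document:

    import torch
    import torch.nn as nn

    class ManyToOneRNN(nn.Module):
        # Many inputs (one feature vector per day), one output per user.
        def __init__(self, input_size=9, hidden_size=256):
            super().__init__()
            self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)

        def forward(self, x):           # x: (batch, days, features_per_day)
            _, h_n = self.rnn(x)        # h_n: (1, batch, hidden_size)
            return h_n.squeeze(0)       # one 256-wide vector per sequence

    model = ManyToOneRNN()
    daily_features = torch.randn(512, 50, 9)   # 512 users, 50 days, 9 features
    result = model(daily_features)             # (512, 256) processing result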
The numerical feature data includes feature data that can be expressed as numerical values. In practice, examples include: the number of articles displayed in the application on which the target user's dwell time exceeded a predetermined time, the time the target user spent using the application, the number of times the target user opened the application within a target time period, whether the target user logged in to the application with an account, the number of displayed articles the target user clicked, the number of articles the application pushed to the target user, the number of displayed articles the target user read to the end, the number of operations the target user performed in the application, and the number of displayed articles the target user shared. Each of these may serve as numerical feature data.
For example, the executing entity may input the numerical feature data into a pre-trained SVM, and generate the processed numerical feature data.
In some optional implementation manners of some embodiments, the processing the numerical characteristic data in the day-level characteristic data of the target user within a preset time period to generate a processing result of the numerical characteristic data includes: carrying out data standardization on the numerical characteristic data to obtain first numerical characteristic data; and inputting the first numerical characteristic data into a first deep neural network to obtain a processing result of the numerical characteristic data. Here, the data normalization may be processing the numerical characteristic data by a preset algorithm.
As an example, consider the value 50 for the feature "number of articles displayed in the application on which the user's dwell time exceeded a predetermined time". Data standardization proceeds as follows: first, subtract the mean 37 from the value to obtain 13; second, divide 13 by the standard deviation 48 to obtain approximately 0.27; third, use this result as the standardized numerical feature data.
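A one-line sketch of this walk-through (the mean 37 and standard deviation 48 are the example's values over all users):

    def standardize(value, mean=37.0, std=48.0):
        # Z-score standardization: subtract the mean, divide by the standard deviation.
        return (value - mean) / std

    print(round(standardize(50), 2))   # (50 - 37) / 48 -> 0.27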
As an example, in the application scenario of fig. 3, the computing device 101 may perform data normalization on the to-be-processed numerical feature data 301 in the obtained user features to obtain processed numerical feature data 302.
Step 202, processing the list feature data in the day-level feature data based on the second processing mode, and generating a processing result of the list feature data.
In some embodiments, as an example, the executing entity may input the list feature data into the second deep neural network to obtain the processing result of the list feature data. The processing result of the list feature data may be a 512 × 160 two-dimensional matrix. The second deep neural network may be a recurrent neural network trained with sample list feature data as the input and the processing result of the sample list feature data as the expected output. Here, the second processing mode may be a feature extraction algorithm, which may also include, but is not limited to, at least one of: an ensemble model, a deep neural network, an SVM (Support Vector Machine), a K-nearest-neighbor algorithm, a decision tree, and naive Bayes.
In some optional implementation manners of some embodiments, the processing the list feature data in the day-level feature data to generate a processing result of the list feature data includes: carrying out integer rewriting on the list characteristic data to obtain first list characteristic data; and inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data. Wherein the second deep neural network comprises an embedded layer, a third fully connected network and a fourth fully connected network.
The list feature data may include feature data expressed as enumerated values (e.g., city, gender, mobile phone model). In practice, examples include: the gender the target user displays in the application, the province the target user displays in the application, and the city of residence the target user displays in the application. Each of these may serve as list feature data.
It should be emphasized that before the list feature data is used, it is usually necessary to assign a number to each list feature value. For example, for gender, 0 may represent female and 1 male; for the region where the target user is located, 0 may represent Beijing, 1 Shanghai, 2 Guangzhou, and so on.
In some optional implementation manners of some embodiments, the integer rewriting may be a method for processing the text information according to a predetermined rewriting relationship correspondence table. As an example, the rewriting relationship correspondence table may refer to fig. 4.
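The following sketch illustrates integer rewriting with a hypothetical correspondence table; it does not reproduce the actual table of fig. 4, and the mappings follow the gender and region examples above:

    # Hypothetical rewriting-relationship correspondence table (illustrative only).
    REWRITE_TABLE = {
        "gender": {"female": 0, "male": 1},
        "region": {"Beijing": 0, "Shanghai": 1, "Guangzhou": 2},
    }

    def rewrite(feature_name, value):
        # Replace an enumerated list-feature value with its integer id.
        return REWRITE_TABLE[feature_name][value]

    print(rewrite("region", "Shanghai"))   # 1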
Step 203, generating a processing result of the day characteristic data based on a third processing mode, the processing result of the numerical characteristic data and the processing result of the list characteristic data.
In some embodiments, as an example, the third processing mode may be splicing. The execution subject may splice the processing result of the numerical feature data and the processing result of the list feature data to obtain a spliced feature result, and may determine the spliced feature result as the processing result of the day-level feature data.
And step 204, generating user retention time based on the processing result of the day-level feature data.
In some embodiments, the execution subject may input the processing result of the day-level feature data into the third deep neural network to obtain the user retention time. The third deep neural network may be a recurrent neural network trained with the processing result of sample day-level feature data as the input and the retention time of the sample user as the expected output. The third deep neural network may include, but is not limited to, at least one of: a fifth fully connected layer and an activation layer. As an example, the activation layer may apply an activation function (e.g., a Sigmoid activation function). The fully connected layer serves to reduce data dimensionality; the activation layer serves to normalize the data to between 0 and 1. Here, a number between 0 and 1 represents the proportion of days the user will use the application within the predetermined future time.
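A sketch of this splicing-and-output stage, assuming PyTorch and the dimensions given in this document (the 4-wide output follows the training example of fig. 6 described later):

    import torch
    import torch.nn as nn

    numeric_result = torch.randn(512, 256)   # processing result of the numerical branch
    list_result = torch.randn(512, 160)      # processing result of the list branch

    # Third processing mode: splice the two results into the day-level result.
    day_level = torch.cat([numeric_result, list_result], dim=1)   # (512, 416)

    # Third deep neural network: a fifth fully connected layer plus an activation layer.
    head = nn.Sequential(nn.Linear(416, 4), nn.Sigmoid())
    retention = head(day_level)              # (512, 4), each value in (0, 1)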
In some optional implementations of some embodiments, the processing of the day-level feature data can remove the unit restrictions of the data. This facilitates the generation process and reduces the time consumed by the user retention time generation task. A deep learning network containing multiple fully-connected layers can comprehensively learn the user's features over the historical preset time period, avoiding errors caused by features being omitted during generation.
The above embodiments of the present disclosure have the following advantageous effect: the numerical feature data, the list feature data, and the day-level feature data of the target user are processed separately, in different ways, and the retention time of the target user is then estimated by integrating the results of these processing modes, so that the obtained user retention time is more accurate. This provides an effective means of predicting how well users will be retained within a predetermined future time.
With continued reference to FIG. 5, a flow 500 of further embodiments of a user retention time generation method according to the present disclosure is shown. The user retention time generation method comprises the following steps:
step 501, performing data standardization on numerical characteristic data in the day-level characteristic data of the target user in a preset time period to obtain first numerical characteristic data.
In some embodiments, the execution subject of the user retention time generation method may perform data normalization on numerical characteristic data in day-level characteristic data of the target user within a preset time period to obtain first numerical characteristic data.
In some optional implementations of some embodiments, the data distribution after standardization conforms to a standard normal distribution, that is, the mean is 0 and the standard deviation is 1. The standardization function is X' = (X - μ) / σ, where X' is the standardized data, X is the user feature data before standardization, μ is the mean of the user feature data, and σ is its standard deviation. As an example, for the number of articles on which the user's dwell time exceeded the predetermined time, the mean over all users is 37 and the standard deviation is 48. Before this data is input into the network, 37 is subtracted from it and the result is divided by 48; the result is then input into the first deep neural network.
Step 502, inputting the first numerical characteristic data into a first fully-connected network of the first deep neural network to obtain a first output result.
In some embodiments, the execution subject may input the first numerical characteristic data to a first fully-connected network of a first deep neural network to obtain a first output result. The first output result may be a 512 x 512 two-dimensional matrix. The first fully connected network is used for extracting more user characteristic information. Here, the activation function used by the activation layer in the first fully-connected network may be a Linear rectification function (ReLU).
Step 503, inputting the first output result to the second fully-connected network of the first deep neural network to obtain a processing result of the numerical characteristic data.
In some embodiments, the execution subject may input the first output result into the second fully-connected network of the first deep neural network to obtain the processing result of the numerical feature data. This result may be a 512 × 256 two-dimensional matrix. Here, the activation function employed by the activation layer in the second fully-connected network may be a linear rectification function. Random deactivation (dropout) may be employed in the second fully-connected network to prevent overfitting. Specifically, during training, a certain proportion of the neurons in a hidden layer of the second fully-connected network are discarded; during verification, all neurons are kept and none are discarded.
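A sketch of this two-block structure, assuming PyTorch; the 451-wide input matches the 50 × 9 + 1 numerical features of the later training example, and the dropout probability is an assumption, since the text only specifies "a certain proportion" of neurons:

    import torch.nn as nn

    first_dnn = nn.Sequential(
        nn.Linear(451, 512), nn.ReLU(),   # first fully-connected network -> (512, 512)
        nn.Linear(512, 256), nn.ReLU(),   # second fully-connected network -> (512, 256)
        nn.Dropout(p=0.5),                # random deactivation; inactive in eval mode
    )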
Step 504, integer rewriting is performed on the list feature data in the day-level feature data to obtain first list feature data.
In some embodiments, the execution body may perform integer rewriting on the list feature data in the day-level feature data to obtain the first list feature data.
And 505, inputting the first list feature data into an embedding layer of the second deep neural network to obtain an output result of the embedding layer.
In some embodiments, the execution agent may input the first list feature data into the embedding layer of the second deep neural network to obtain the embedding layer output result. The embedding layer output may be a 512 × 5 × 16 three-dimensional matrix. The embedding layer serves to convert the first list feature data into vectors.
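A sketch of this embedding step, assuming PyTorch; the vocabulary size of 100 is an assumption:

    import torch
    import torch.nn as nn

    embedding = nn.Embedding(num_embeddings=100, embedding_dim=16)
    list_ids = torch.randint(0, 100, (512, 5))   # 5 integer-rewritten features per user
    vectors = embedding(list_ids)                # (512, 5, 16) three-dimensional matrix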
Step 506, inputting the output result of the embedding layer into a third full-connection network of the second deep neural network to obtain a third output result.
In some embodiments, the execution agent may input the embedding layer output result to a third fully-connected network of the second deep neural network to obtain a third output result. The third output result may be a 512 by 5 by 24 three-dimensional matrix.
And step 507, inputting the third output result to a fourth full-connection network of the second deep neural network.
In some embodiments, the execution subject may input the third output result to a fourth fully-connected network of the second deep neural network.
And step 508, reorganizing data of the output result of the fourth fully-connected network to obtain a processing result of the list feature data.
In some embodiments, the execution subject may perform data reorganization on the output result of the fourth fully-connected network to obtain the processing result of the list feature data. The processing result of the list feature data may be a 512 × 160 two-dimensional matrix. Data reorganization adjusts the dimensionality of the data; its purpose is to keep the data dimensions consistent from one stage to the next.
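A sketch of this data reorganization, assuming PyTorch; the flattening lines the list branch up with the numerical branch for the later splice:

    import torch

    fourth_output = torch.randn(512, 5, 32)            # output of the fourth network
    list_result = fourth_output.reshape(512, 5 * 32)   # reorganized to (512, 160)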
Step 509, based on the third processing method, the processing result of the numerical characteristic data, and the processing result of the list characteristic data, generates a processing result of the day characteristic data.
Step 510, generating a user retention time based on the processing result of the day-level feature data.
In some embodiments, the specific implementation and technical effects of steps 509-510 can refer to steps 203-204 in those embodiments corresponding to fig. 2, which are not described herein again.
As can be seen from fig. 5, compared with the embodiments corresponding to fig. 2, the flow 500 of the user retention time generation method in the embodiments corresponding to fig. 5 highlights in more detail the structure and concrete implementation steps of the first deep neural network, which comprises two fully-connected networks, and of the second deep neural network, which comprises an embedding layer and multiple fully-connected networks. The solutions described in these embodiments can therefore exploit the multi-layer fully-connected networks of a deep neural network to accurately determine how long the user will use the application in the future.
With continued reference to fig. 6, a model training diagram 600 of the deep neural network according to further embodiments of the user retention time generation method of the present disclosure is shown. Training the network model of the user retention time generation method comprises the following steps:
in the first step, the number of batches (batch _ size) is set to 512.
I.e. 512 user feature data are entered for training at a time. Each user used the 50-day history feature, and the initial feature that could be obtained was 456. As an example, the initial characteristics of the user may include 50 x 9 numerical characteristic data, 1 static numerical characteristic data (e.g., age), and 5 list characteristic data. Thus, the data at the input layer 601 is a 2-dimensional matrix with data dimensions of 512 x 456.
Second, the numerical characteristic data and the list characteristic data in the input layer 601 are input respectively.
Thirdly, the numerical characteristic data 602 is input to the first fully-connected network 603, and a first output result is obtained, where the data dimension of the first output result is 512 × 512.
Fourthly, the first output result is input to the second fully-connected network 604, and a numerical feature output result is obtained, wherein the data dimension of the numerical feature output result is 512 × 256.
Fifthly, the list feature data 605 is input into the embedding layer 606, and the output result of the embedding layer is obtained, wherein the data dimension of the output result of the embedding layer is 512 × 5 × 16.
Sixth, the output result of the embedding layer is input to a third fully-connected network 607, and a third output result with a data dimension of 512 × 5 × 64 is obtained.
Seventh, the third output result is input to a fourth fully-connected network 608 to obtain an output result with dimensions 512 × 5 × 32; this output is then reorganized to obtain a list feature output result with dimensions 512 × 160.
Eighth, the numerical feature output result and the list feature output result are spliced to obtain a spliced feature output result, as indicated by reference numeral 609.
Ninth, the spliced feature output result is input to the fifth fully-connected network to obtain an output result with data dimensions 512 × 4, which represents the user's usage time within a predetermined period.
Here, the optimizer used to train the network model may include, but is not limited to, at least one of: Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD), Mini-Batch Gradient Descent (MBGD), the Momentum optimization algorithm, the Adaptive Gradient Algorithm (AdaGrad), and the Adam adaptive learning rate optimization algorithm. In addition, the learning rate for training the network model may be 0.01. A random deactivation (dropout) layer can be added in the model training stage of the deep neural network and removed in the verification stage.
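The following end-to-end sketch assembles the pieces of fig. 6, assuming PyTorch. Dimensions follow the text; the vocabulary size, dropout probability, loss function, and the choice of plain SGD (one of the optimizers listed above) at the stated 0.01 learning rate are assumptions:

    import torch
    import torch.nn as nn

    class RetentionModel(nn.Module):
        def __init__(self, vocab_size=100):
            super().__init__()
            self.numeric = nn.Sequential(        # first and second fully-connected networks
                nn.Linear(451, 512), nn.ReLU(),
                nn.Linear(512, 256), nn.ReLU(), nn.Dropout(p=0.5),
            )
            self.embedding = nn.Embedding(vocab_size, 16)
            self.list_net = nn.Sequential(       # third and fourth fully-connected networks
                nn.Linear(16, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.5),
            )
            self.head = nn.Sequential(nn.Linear(256 + 5 * 32, 4), nn.Sigmoid())

        def forward(self, numeric_x, list_ids):
            num = self.numeric(numeric_x)                    # (batch, 256)
            lst = self.list_net(self.embedding(list_ids))    # (batch, 5, 32)
            lst = lst.reshape(lst.size(0), -1)               # reorganized to (batch, 160)
            return self.head(torch.cat([num, lst], dim=1))   # (batch, 4), values in (0, 1)

    model = RetentionModel()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # stated learning rate
    loss_fn = nn.MSELoss()

    numeric_x = torch.randn(512, 451)            # 50 x 9 daily + 1 static feature
    list_ids = torch.randint(0, 100, (512, 5))   # 5 integer-rewritten list features
    target = torch.rand(512, 4)                  # retention proportions in [0, 1]

    pred = model(numeric_x, list_ids)
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()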
In some optional implementations of some embodiments, a deep neural network is used to fuse the user's static numerical feature data and list feature data. This allows the time the user will use the application within the predetermined period to be estimated more accurately, improving the accuracy of the task.
With continuing reference to fig. 7, as an implementation of the above-described method for the above-described figures, the present disclosure provides some embodiments of a user retention time generation apparatus, which correspond to those method embodiments described above for fig. 2, and which may be particularly applicable in various electronic devices.
As shown in fig. 7, the user retention time generation apparatus 700 of some embodiments comprises: a first generating unit 701, a second generating unit 702, a third generating unit 703, and a fourth generating unit 704. The first generating unit 701 is configured to process numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode, generating a processing result of the numerical feature data; the second generating unit 702 is configured to process list feature data in the day-level feature data based on a second processing mode, generating a processing result of the list feature data; the third generating unit 703 is configured to generate a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and the fourth generating unit 704 is configured to generate the user retention time based on the processing result of the day-level feature data.
In some optional implementations of some embodiments, the first generating unit 701 of the user retention time generation apparatus 700 is further configured to: perform data standardization on the numerical feature data to obtain first numerical feature data; and input the first numerical feature data into a first deep neural network to obtain the processing result of the numerical feature data, wherein the first deep neural network comprises a first fully-connected network and a second fully-connected network.
In some optional implementations of some embodiments, the second generating unit 702 of the user retention time generating apparatus 700 is further configured to: carrying out integer rewriting on the list characteristic data to obtain first list characteristic data; and inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data, wherein the second deep neural network comprises an embedded layer, a third fully-connected network and a fourth fully-connected network.
In some optional implementations of some embodiments, the inputting the first numerical feature data into a first deep neural network to obtain the processing result of the numerical feature data includes: inputting the first numerical feature data into the first fully-connected network to obtain a first output result; inputting the first output result into the second fully-connected network to obtain a second output result; and determining the second output result as the processing result of the numerical feature data, wherein random deactivation is employed in the second fully-connected network during training of the first deep neural network.
In some optional implementations of some embodiments, the inputting the first list feature data to a second deep neural network to obtain a processing result of the list feature data includes: inputting the first list feature data into the embedding layer to obtain an embedding layer output result; inputting the output result of the embedding layer to the third fully-connected network to obtain a third output result; inputting the third output result to the fourth fully-connected network, wherein random deactivation is employed in the fourth fully-connected network during model training of the second deep neural network; and reorganizing data of the output result of the fourth fully-connected network to obtain a processing result of the list feature data.
In some optional implementations of some embodiments, the third generating unit 703 of the user retention time generating apparatus 700 is further configured to: splicing the processing result of the numerical characteristic data and the processing result of the list characteristic data to obtain a splicing result; and determining the splicing result as the processing result of the day-level feature data.
In some optional implementations of some embodiments, the fourth generating unit 704 of the user retention time generating apparatus 700 is further configured to: and inputting the processing result of the day-level feature data into a third deep neural network to obtain the user retention time, wherein the third deep neural network comprises a fifth fully-connected network.
It will be understood that the elements described in the apparatus 700 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 700 and the units included therein, and will not be described herein again.
Referring now to FIG. 8, a block diagram of an electronic device (e.g., the computing device of FIG. 1) 800 suitable for use in implementing some embodiments of the present disclosure is shown. The server shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored. The processing means 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 8 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through communications device 809, or installed from storage device 808, or installed from ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText transfer protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: process numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; process list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; generate a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and generate a user retention time based on the processing result of the day-level feature data.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a first generation unit, a second generation unit, a third generation unit, and a fourth generation unit. The names of these units do not in some cases limit the units themselves; for example, the first generation unit may also be described as "a unit that generates a processing result of numerical feature data by processing the numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a user retention time generation method, including: processing numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; processing list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; generating a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and generating a user retention time based on the processing result of the day-level feature data.
According to one or more embodiments of the present disclosure, the processing the numerical characteristic data in the day-level characteristic data of the target user within the preset time period to generate the processing result of the numerical characteristic data includes: carrying out data standardization on the numerical characteristic data to obtain first numerical characteristic data; and inputting the first numerical characteristic data into a first deep neural network to obtain a processing result of the numerical characteristic data, wherein the first deep neural network comprises a first fully-connected network and a second fully-connected network.
According to one or more embodiments of the present disclosure, the processing the list feature data in the day-level feature data to generate the processing result of the list feature data includes: carrying out integer rewriting on the list characteristic data to obtain first list characteristic data; and inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data, wherein the second deep neural network comprises an embedded layer, a third fully-connected network and a fourth fully-connected network.
According to one or more embodiments of the present disclosure, the inputting the first numerical feature data into a first deep neural network to obtain a processing result of the numerical feature data includes: inputting the first numerical feature data into the first fully-connected network to obtain a first output result; inputting the first output result into the second fully-connected network to obtain a second output result; and determining the second output result as the processing result of the numerical feature data, wherein random deactivation is employed in the second fully-connected network during training of the first deep neural network.
According to one or more embodiments of the present disclosure, the inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data includes: inputting the first list feature data into the embedding layer to obtain an embedding layer output result; inputting the output result of the embedding layer to the third fully-connected network to obtain a third output result; inputting the third output result to the fourth fully-connected network, wherein random deactivation is employed in the fourth fully-connected network during model training of the second deep neural network; and reorganizing data of the output result of the fourth fully-connected network to obtain a processing result of the list feature data.
According to one or more embodiments of the present disclosure, the generating a processing result of the day characteristic data based on the third processing mode, the processing result of the numerical characteristic data, and the processing result of the list characteristic data includes: splicing the processing result of the numerical characteristic data and the processing result of the list characteristic data to obtain a splicing result; and determining the splicing result as the processing result of the day-level feature data.
According to one or more embodiments of the present disclosure, the generating the user retention time based on the processing result of the day-level feature data includes: and inputting the processing result of the day-level feature data into a third deep neural network to obtain the user retention time, wherein the third deep neural network comprises a fifth fully-connected network.
According to one or more embodiments of the present disclosure, there is provided a user retention time generation apparatus, including: a first generating unit configured to process numerical feature data in the day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data; a second generating unit configured to process list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data; a third generating unit configured to generate a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data; and a fourth generating unit configured to generate the user retention time based on the processing result of the day-level feature data.
According to one or more embodiments of the present disclosure, the first generating unit of the user retention time generating apparatus is further configured to: carrying out data standardization on the numerical characteristic data to obtain first numerical characteristic data; and inputting the first numerical characteristic data into a first deep neural network to obtain a processing result of the numerical characteristic data, wherein the first deep neural network comprises a first fully-connected network and a second fully-connected network.
According to one or more embodiments of the present disclosure, the second generating unit of the user retention time generating apparatus is further configured to: carrying out integer rewriting on the list characteristic data to obtain first list characteristic data; and inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data, wherein the second deep neural network comprises an embedded layer, a third fully-connected network and a fourth fully-connected network.
According to one or more embodiments of the present disclosure, the inputting the first numerical feature data into a first deep neural network to obtain a processing result of the numerical feature data includes: inputting the first numerical feature data into the first fully-connected network to obtain a first output result; inputting the first output result into the second fully-connected network to obtain a second output result; and determining the second output result as the processing result of the numerical feature data, wherein random deactivation is employed in the second fully-connected network during training of the first deep neural network.
According to one or more embodiments of the present disclosure, the inputting the first list feature data into a second deep neural network to obtain a processing result of the list feature data includes: inputting the first list feature data into the embedding layer to obtain an embedding layer output result; inputting the output result of the embedding layer to the third fully-connected network to obtain a third output result; inputting the third output result to the fourth fully-connected network, wherein random deactivation is employed in the fourth fully-connected network during model training of the second deep neural network; and reorganizing data of the output result of the fourth fully-connected network to obtain a processing result of the list feature data.
According to one or more embodiments of the present disclosure, the third generation unit of the user retention time generation apparatus is further configured to: splice the processing result of the numerical feature data and the processing result of the list feature data to obtain a splicing result; and determine the splicing result as the processing result of the day-level feature data.
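"Splicing" here reads as feature-wise concatenation of the two branch outputs. A minimal illustration (batch size and widths are arbitrary placeholders):

    import torch

    numerical_result = torch.randn(8, 64)   # stand-in for the numerical-branch output
    list_result = torch.randn(8, 160)       # stand-in for the list-branch output
    # Concatenate along the feature dimension to form the day-level result.
    day_level_result = torch.cat((numerical_result, list_result), dim=1)  # shape (8, 224)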
According to one or more embodiments of the present disclosure, the fourth generation unit of the user retention time generation apparatus is further configured to: input the processing result of the day-level feature data into a third deep neural network to obtain the user retention time, wherein the third deep neural network includes a fifth fully-connected network.
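Finally, the third deep neural network reduces the spliced day-level result to the retention time. Treating the output as a single scalar regression value is an assumption consistent with, but not stated by, the text:

    import torch
    from torch import nn

    class ThirdDeepNeuralNetwork(nn.Module):
        def __init__(self, in_dim=224):
            super().__init__()
            self.fc5 = nn.Linear(in_dim, 1)  # fifth fully-connected network

        def forward(self, day_level_result: torch.Tensor) -> torch.Tensor:
            return self.fc5(day_level_result)  # predicted user retention time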
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A user retention time generation method, comprising:
processing numerical feature data in day-level feature data of a target user within a preset time period based on a first processing mode to generate a processing result of the numerical feature data;
processing list feature data in the day-level feature data based on a second processing mode to generate a processing result of the list feature data;
generating a processing result of the day-level feature data based on a third processing mode, a processing result of the numerical feature data and a processing result of the list feature data;
and generating a user retention time based on the processing result of the day-level feature data.
2. The method according to claim 1, wherein the processing numerical feature data in the day-level feature data of the target user within the preset time period to generate a processing result of the numerical feature data comprises:
carrying out data standardization on the numerical feature data to obtain first numerical feature data;
and inputting the first numerical feature data into a first deep neural network to obtain the processing result of the numerical feature data, wherein the first deep neural network comprises a first fully-connected network and a second fully-connected network.
3. The method according to claim 1, wherein the processing the list feature data in the day-level feature data to generate a processing result of the list feature data comprises:
carrying out integer rewriting on the list feature data to obtain first list feature data;
and inputting the first list feature data into a second deep neural network to obtain the processing result of the list feature data, wherein the second deep neural network comprises an embedding layer, a third fully-connected network and a fourth fully-connected network.
4. The method according to claim 2, wherein the inputting the first numerical feature data into the first deep neural network to obtain the processing result of the numerical feature data comprises:
inputting the first numerical feature data into the first fully-connected network to obtain a first output result;
and inputting the first output result into the second fully-connected network to obtain a second output result, and determining the second output result as the processing result of the numerical feature data, wherein random inactivation is adopted in the second fully-connected network during training of the first deep neural network.
5. The method according to claim 3, wherein the inputting the first list feature data into the second deep neural network to obtain the processing result of the list feature data comprises:
inputting the first list feature data into the embedding layer to obtain an embedding layer output result;
inputting the embedding layer output result into the third fully-connected network to obtain a third output result;
inputting the third output result into the fourth fully-connected network, wherein random inactivation is adopted in the fourth fully-connected network during training of the second deep neural network;
and performing data reorganization on the output result of the fourth fully-connected network to obtain the processing result of the list feature data.
6. The method according to any one of claims 1 to 5, wherein the generating the processing result of the day-level feature data based on the third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data comprises:
splicing the processing result of the numerical characteristic data and the processing result of the list characteristic data to obtain a splicing result;
and determining the splicing result as a processing result of the day-level feature data.
7. The method according to claim 6, wherein the generating the user retention time based on the processing result of the day-level feature data comprises:
inputting the processing result of the day-level feature data into a third deep neural network to obtain the user retention time, wherein the third deep neural network comprises a fifth fully-connected network.
8. A user retention time generation apparatus, comprising:
a first generation unit configured to process numerical feature data in day-level feature data of a target user within a preset time period based on a first processing mode, to generate a processing result of the numerical feature data;
a second generation unit configured to process list feature data in the day-level feature data based on a second processing mode, to generate a processing result of the list feature data;
a third generation unit configured to generate a processing result of the day-level feature data based on a third processing mode, the processing result of the numerical feature data, and the processing result of the list feature data;
and a fourth generation unit configured to generate a user retention time based on the processing result of the day-level feature data.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202010561016.5A 2020-06-18 2020-06-18 User retention time generation method, device, electronic equipment and medium Active CN111709583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010561016.5A CN111709583B (en) 2020-06-18 2020-06-18 User retention time generation method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111709583A true CN111709583A (en) 2020-09-25
CN111709583B CN111709583B (en) 2023-05-23

Family

ID=72541247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010561016.5A Active CN111709583B (en) 2020-06-18 2020-06-18 User retention time generation method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111709583B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034861A (en) * 2018-06-04 2018-12-18 挖财网络技术有限公司 Customer churn prediction technique and device based on mobile terminal log behavioral data
CN109635923A (en) * 2018-11-20 2019-04-16 北京字节跳动网络技术有限公司 Method and apparatus for handling data
CN110087280A (en) * 2019-05-14 2019-08-02 重庆邮电大学 A kind of traffic density evaluation method based on beacon message
CN111091182A (en) * 2019-12-16 2020-05-01 北京澎思科技有限公司 Data processing method, electronic device and storage medium

Also Published As

Publication number Publication date
CN111709583B (en) 2023-05-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant after: Tiktok vision (Beijing) Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant after: Douyin Vision Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Applicant before: Tiktok vision (Beijing) Co.,Ltd.
GR01 Patent grant