WO2019001185A1 - Information processing and model training method, apparatus, electronic device, and storage medium - Google Patents

Information processing and model training method, apparatus, electronic device, and storage medium

Info

Publication number
WO2019001185A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
pop
popup
user
training set
Prior art date
Application number
PCT/CN2018/088249
Other languages
English (en)
French (fr)
Inventor
黄献德
Original Assignee
北京金山安全软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京金山安全软件有限公司
Priority to US16/480,925 (published as US20200167645A1)
Publication of WO2019001185A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F9/451 Execution arrangements for user interfaces
    • G06F9/542 Event management; Broadcasting; Multicasting; Notifications
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/55 Push-based network services
    • H04L67/75 Indicating network or usage conditions on the user display

Definitions

  • the present application relates to the field of Internet technologies, and in particular, to an information processing and model training method, apparatus, electronic device, and storage medium.
  • pop-up functions are set on electronic devices such as smart phones, tablets, notebooks, and the like.
  • when the electronic device receives information, the information is displayed through the pop-up window function.
  • the information that needs to be displayed through the pop-up window function may be referred to as pop-up information.
  • An object of the embodiments of the present application is to provide an information processing and model training method, apparatus, electronic device, and storage medium, so as to solve the problem that an electronic device displays a large amount of popup information that is not of interest to the user.
  • the specific technical solutions are as follows:
  • an embodiment of the present application discloses an information processing method, where the method includes:
  • the pop-up window management model is: a model constructed based on a deep neural network for determining whether the input pop-up window information is pop-up information of interest to the user;
  • the pop-up information that is of interest to the user is information whose degree of attention is greater than a threshold;
  • if the output result of the popup management model is information indicating that the to-be-processed popup information is popup information that is of interest to the user, sending the to-be-processed popup information to the target electronic device, so that the target electronic device displays the to-be-processed pop-up window information through a pop-up window function.
  • the method further includes:
  • refusing to send the to-be-processed popup information to the target electronic device if the output result of the popup management model is information indicating that the to-be-processed popup information is popup information that is not of interest to the user.
  • the pop-up management model is obtained by training in the following manner:
  • constructing the pop-up window management model based on a deep neural network, where the modeling unit of the pop-up window management model is: information indicating whether the pop-up information is of interest to the user;
  • obtaining a training set, converting the pop-up information in the training set into a feature vector, and marking, for the pop-up information in the training set, a label indicating pop-up information that is of interest to the user or pop-up information that is not of interest to the user;
  • the pop-up management model is trained using the feature vector and the tag.
  • the pop-up management model is obtained by training in the following manner:
  • the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, where the label information is information indicating that the popup information is popup information that is of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • if the deep neural network does not converge, the parameters of the deep neural network are adjusted, the adjusted parameters are used as the target parameters, and execution returns to the step of inputting the feature vector of each popup information included in the training set into the deep neural network to obtain the output result of each popup information;
  • if the deep neural network converges, the deep neural network using the target parameters is used as the pop-up management model.
  • the step of converting the popup information in the training set into a feature vector includes:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector;
  • the activation function of the abstraction layer is a ReLu (Rectified Linear Units) function;
  • the activation function of the output layer is a sigmoid (S-type) function.
  • the method further includes:
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the training set is determined by:
  • the training set is determined according to the received correspondence.
  • an embodiment of the present application discloses a model training method, where the method includes:
  • constructing a pop-up window management model based on a deep neural network, where the modeling unit of the pop-up window management model is: information indicating whether pop-up information is of interest to the user; and the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
  • obtaining a training set, converting the pop-up information in the training set into a feature vector, and marking, for the pop-up information in the training set, a label indicating pop-up information that is of interest to the user or pop-up information that is not of interest to the user;
  • the pop-up management model is trained using the feature vector and the tag.
  • the step of converting the popup information in the training set into a feature vector includes:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the training set is determined by:
  • the training set is determined according to the received correspondence.
  • an embodiment of the present application discloses a model training method, where the method includes:
  • the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, where the label information is information indicating that the popup information is popup information that is of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • if the deep neural network does not converge, the parameters of the deep neural network are adjusted, the adjusted parameters are used as the target parameters, and execution returns to the step of inputting the feature vector of each popup information included in the training set into the deep neural network to obtain the output result of each popup information;
  • if the deep neural network converges, the deep neural network using the target parameters is used as the pop-up management model.
  • the step of converting the popup information in the training set into a feature vector includes:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the training set is determined by:
  • the training set is determined according to the received correspondence.
  • an embodiment of the present application discloses an information processing apparatus, where the apparatus includes:
  • An input module configured to input the to-be-processed pop-up information into a pop-up management model;
  • the pop-up window management model is: a model constructed based on a deep neural network for determining whether the input pop-up information is pop-up information of interest to the user;
  • the pop-up information that the user is interested in is information with a degree of attention greater than a threshold;
  • a sending module configured to: if the output of the popup management model is information indicating that the to-be-processed pop-up information is pop-up information that is of interest to the user, send the to-be-processed pop-up information to the target electronic device, And causing the target electronic device to display the to-be-processed pop-up information through a pop-up window function.
  • the device further includes:
  • a rejecting module configured to refuse to send the to-be-processed pop-up information to the target electronic device if the output of the pop-up management model is information indicating that the to-be-processed pop-up information is pop-up information that is not of interest to the user.
  • the device further includes: a training module, configured to obtain the pop-up management model; the training module includes:
  • a building submodule configured to construct the pop-up window management model based on a deep neural network, where the modeling unit of the pop-up window management model is: information indicating whether pop-up information is of interest to the user;
  • a conversion sub-module configured to acquire a training set, convert the pop-up information in the training set into a feature vector, and mark, for the pop-up information in the training set, a label indicating pop-up information that is of interest to the user or pop-up information that is not of interest to the user;
  • a training submodule configured to train the popup management model using the feature vector and the tag.
  • the device further includes: a training module, configured to obtain the pop-up management model; the training module includes:
  • a first acquisition sub-module configured to acquire a training set, where the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, and the label information is information indicating that the popup information is popup information of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • a conversion sub-module configured to convert the pop-up information of the training set into a feature vector, and mark, for the pop-up information in the training set, a tag of the tag information corresponding to the pop-up information
  • a second obtaining submodule configured to acquire a preset deep neural network, and initialize the parameters of the deep neural network as the target parameters;
  • an input submodule configured to input the feature vector of each popup information included in the training set into the deep neural network to obtain an output result of each popup information, where the output result of each popup information is information indicating that the popup information is popup information of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • a calculation submodule configured to calculate a popup information loss value according to an output result of each popup information and label information corresponding to the popup information included in the training set;
  • a determining submodule configured to determine, according to the pop-up information loss value, whether the deep neural network adopting the target parameter converges
  • a processing submodule configured to: if the judgment result of the determining submodule is negative, adjust a parameter of the deep neural network and use the adjusted parameter as the target parameter; if the judgment result of the determining submodule is yes, use the deep neural network with the target parameters as the pop-up management model.
  • the conversion submodule is specifically configured to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the device further includes:
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the device further includes: a determining module, configured to determine a training set; the determining module includes:
  • a sending submodule configured to send the obtained multiple popup information to the plurality of electronic devices; so that the plurality of electronic devices display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving submodule configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
  • a joining submodule configured to determine the training set according to the received correspondence.
  • an embodiment of the present application discloses a model training device, where the device includes:
  • a building module configured to build a pop-up window management model based on a deep neural network;
  • the modeling unit of the pop-up window management model is: information indicating whether pop-up information is of interest to the user; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
  • a conversion module configured to acquire a training set, convert the popup information in the training set into a feature vector, and mark, for the popup information in the training set, a label indicating popup information that is of interest to the user or popup information that is not of interest to the user;
  • a training module for training the pop-up management model using the feature vector and the tag.
  • the conversion module is specifically configured to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the device further includes: a determining module, configured to determine a training set; the determining module includes:
  • a sending submodule configured to send the obtained multiple popup information to the plurality of electronic devices; so that the plurality of electronic devices display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving submodule configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
  • a joining submodule configured to determine the training set according to the received correspondence.
  • the embodiment of the present application discloses a model training device, where the device includes:
  • a first acquiring module configured to acquire a training set, where the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, and the label information is information indicating that the popup information is popup information of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • a conversion module configured to convert the pop-up information of the training set into a feature vector, and mark, for the pop-up information in the training set, a tag of the tag information corresponding to the pop-up information
  • a second acquiring module configured to acquire a preset deep neural network, and initialize the parameters of the deep neural network as the target parameters;
  • an input module configured to input the feature vector of each popup information included in the training set into the deep neural network to obtain an output result of each popup information, where the output result of each popup information is information indicating that the popup information is popup information of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • a calculation module configured to calculate a pop-up information loss value according to an output result of each pop-up information and tag information corresponding to the pop-up information included in the training set;
  • a determining module configured to determine, according to the pop-up information loss value, whether the deep neural network adopting the target parameter converges
  • a processing module configured to: if the judgment result of the determining module is negative, adjust a parameter of the deep neural network and use the adjusted parameter as the target parameter; if the judgment result of the determining module is yes, use the deep neural network with the target parameters as the pop-up management model.
  • the conversion module is specifically configured to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that is of interest to the user is popup information viewed by the user.
  • the device further includes: a determining module, configured to determine a training set; the determining module includes:
  • a sending submodule configured to send the obtained multiple popup information to the plurality of electronic devices; so that the plurality of electronic devices display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving submodule configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
  • a joining submodule configured to determine the training set according to the received correspondence.
  • an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
  • the memory is configured to store a computer program
  • the processor is configured to execute a program stored on the memory, and implement any of the information processing method steps disclosed in the foregoing first aspect.
  • an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
  • the memory is configured to store a computer program
  • the processor is configured to execute a program stored on the memory, and implement any of the model training method steps disclosed in the second aspect.
  • an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
  • the memory is configured to store a computer program
  • the processor is configured to execute a program stored on the memory, and implement any of the model training method steps disclosed in the third aspect.
  • the embodiment of the present application discloses a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, any of the information processing method steps disclosed in the above first aspect is implemented.
  • an embodiment of the present application discloses a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, any of the model training method steps disclosed in the second aspect is implemented.
  • the embodiment of the present application discloses a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, any of the model training method steps disclosed in the third aspect is implemented.
  • the embodiment of the present application discloses a computer program, where the computer program is executed by a processor to implement any of the information processing method steps disclosed in the above first aspect.
  • the embodiment of the present application discloses a computer program, where the computer program is executed by a processor to implement any of the model training method steps disclosed in the above second aspect.
  • the embodiment of the present application discloses a computer program, where the computer program is executed by a processor to implement any of the model training method steps disclosed in the above third aspect.
  • In the solutions provided by the embodiments of the present application, a pop-up window management model is constructed based on a deep neural network, and the pop-up window management model is used to determine whether input pop-up information is pop-up information that is of interest to the user. The acquired to-be-processed pop-up information is input into the pop-up window management model; if the output result of the pop-up window management model indicates that the to-be-processed pop-up information is pop-up information of interest to the user, the to-be-processed pop-up information is sent to the electronic device, and the electronic device displays the pop-up information through the pop-up window function.
  • FIG. 1 is a schematic flowchart of a model training method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a pop-up window management model used in the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for determining a training set according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a first process of an information processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a second process of an information processing method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a first structure of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a second structure of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a third structure of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a first structure of an electronic device according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a second structure of an electronic device according to an embodiment of the present application.
  • Pop-up information: information displayed by the electronic device through the pop-up window function, for example, push information sent by a server running an application, an incoming call from another electronic device, or a short message sent by another electronic device to the electronic device.
  • Pop-up information that is of interest to the user: pop-up information related to the user's behavioral habits, that is, information whose degree of attention is greater than a threshold.
  • the degree of attention can be determined by the click frequency. For example, if the user frequently clicks through shopping webpages and the click frequency is greater than the threshold, pop-up information related to shopping may be determined as pop-up information of interest to the user; if the frequency at which the user clicks to view the push information of an application is greater than the threshold, the push information of that application may be determined as pop-up information of interest to the user; and if the frequency of unanswered calls is greater than the threshold, incoming calls may be determined as pop-up information of interest to the user, and so on. A minimal sketch of this thresholding idea is shown below.
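  • The following snippet is a sketch only: it derives a degree of attention from click counts and compares it with the threshold, using the 0.8/0.9 values mentioned later in the description. The normalization by display count is a hypothetical choice, not something specified in the publication.

```python
# Sketch only: decide whether pop-up information counts as "of interest"
# by comparing a degree of attention with a threshold. How the degree of
# attention is normalized from click frequency is an assumption here.
def is_of_interest(click_count: int, display_count: int, threshold: float = 0.8) -> bool:
    if display_count == 0:
        return False
    degree_of_attention = click_count / display_count  # e.g. 9 clicks / 10 displays = 0.9
    return degree_of_attention > threshold

print(is_of_interest(click_count=9, display_count=10))  # True, since 0.9 > 0.8
```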
  • There is a large amount of pop-up information in the network, some of which users are interested in and some of which they are not. If all of this pop-up information is sent to the electronic device, the direct result is that the electronic device displays, through the pop-up window function, a large amount of pop-up information that the user is not interested in.
  • This pop-up information affects the user's normal use of the electronic device, and the user experience is poor.
  • the embodiment of the present application provides an information processing and model training method, device, electronic device, and storage medium.
  • the information processing and model training method and apparatus can be applied to a cloud server.
  • FIG. 1 is a schematic flowchart of a model training method according to an embodiment of the present application, where the method includes the following steps.
  • S101 Construct a pop-up window management model based on a deep neural network.
  • the modeling unit of the popup management model is: information indicating whether the popup information is of interest to the user, that is, information indicating popup information that is of interest to the user and information indicating popup information that the user is not interested in;
  • the pop-up information that the user is interested in is information with a degree of attention greater than a threshold. For example, if the threshold is 0.8 and the degree of attention of a piece of pop-up information is 0.9, since 0.9 > 0.8, it is determined that the pop-up information is pop-up information of interest to the user.
  • the modeling unit is the type information of the output result obtained after the pop-up window management model processes the input pop-up information.
  • the initial pop-up management model is constructed based on the deep neural network.
  • In one implementation, the pop-up information that is of interest to the user is information with a degree of attention of 1: after the electronic device displays the pop-up information through the pop-up window function, the user views the pop-up information. That is, the pop-up information that the user is interested in is the pop-up information that the user views. The pop-up information that the user is not interested in is information with a degree of attention of 0: after the electronic device displays it through the pop-up window function, the user does not view the pop-up information; that is, the pop-up information that the user is not interested in is the pop-up information that the user does not view.
  • Specifically, the user can view the pop-up information by clicking it, that is, the pop-up information that the user is interested in is the pop-up information clicked by the user.
  • the user can also view the pop-up information in other manners, which is not limited by the embodiment of the present application.
  • a deep neural network is composed of multiple neurons and belongs to a type of feedforward neural network.
  • the deep neural network that constructs the pop-up management model consists of an input layer, an abstraction layer, and an output layer. As shown in FIG. 2, each layer is composed of neurons with inputs and outputs, and the input of a neuron is the output of the neurons in the previous layer.
  • the abstraction layer is used to parse the feature vector of the input information.
  • In one implementation, the number of neurons deployed in the input layer, the abstraction layer, and the output layer of the deep neural network constructing the popup management model may be: the input layer includes 90 neurons; the abstraction layer includes 5 layers, where the first layer comprises 45 neurons, the second layer comprises 30 neurons, the third layer comprises 20 neurons, the fourth layer comprises 10 neurons, and the fifth layer comprises 5 neurons.
  • the activation function of the abstraction layer is a ReLu function
  • the activation function of the output layer is a sigmoid function, that is, the output layer adopts a sigmoid binary classifier
  • the output result is information indicating pop-up information that the user is interested in or information indicating pop-up information that the user is not interested in.
  • the deep neural network can be constructed by using different types of neural networks, for example, CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory), and RNN (Simple Recurrent Neural Network); the deep neural network can be constructed using one or more of them.
  • CNN Convolutional Neural Network
  • LSTM Long Short-Term Memory
  • RNN Simple Recurrent Neural Network
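  • As an illustration only, the architecture described above (a 90-neuron input layer, five abstraction layers of 45/30/20/10/5 neurons with ReLu activations, and a sigmoid output) could be sketched as follows; PyTorch and the single-neuron output layer are assumptions made for this sketch, not details taken from the publication.

```python
# Sketch only: one possible realization of the described pop-up window
# management network (90-dim input, abstraction layers of 45/30/20/10/5
# neurons with ReLU, sigmoid output). PyTorch and the 1-unit output layer
# are illustrative assumptions.
import torch
import torch.nn as nn

popup_model = nn.Sequential(
    nn.Linear(90, 45), nn.ReLU(),   # abstraction layer 1
    nn.Linear(45, 30), nn.ReLU(),   # abstraction layer 2
    nn.Linear(30, 20), nn.ReLU(),   # abstraction layer 3
    nn.Linear(20, 10), nn.ReLU(),   # abstraction layer 4
    nn.Linear(10, 5),  nn.ReLU(),   # abstraction layer 5
    nn.Linear(5, 1),   nn.Sigmoid() # sigmoid binary classifier output
)

# A 90-dimensional feature vector of one piece of pop-up information
# produces a score in (0, 1): close to 1 means "of interest to the user".
score = popup_model(torch.rand(1, 90))
```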
  • S102 Acquire a training set, convert the pop-up information in the training set into a feature vector, and mark the pop-up information that is of interest to the user or the pop-up information that is not of interest to the user for the pop-up information in the training set.
  • the training set includes a large amount of pop-up information and a correspondence indicating whether the pop-up information is information of pop-up information that is of interest to the user.
  • In the training set, the information indicating whether the popup information is popup information that the user is interested in serves as the tag information corresponding to the popup information.
  • the input layer of the popup management model includes the same number of neurons as the dimension of the feature vector. For example, if the input layer includes 90 neurons, the dimension of the feature vector is 90.
  • Likewise, for the deep neural network that constructs the popup management model, the number of neurons included in the input layer is the same as the dimension of the feature vector.
  • the pop-up information included in the training set may be pre-configured by the user, or may be acquired from an electronic device having a pop-up function.
  • FIG. 3 is a schematic flowchart of a method for determining a training set according to an embodiment of the present disclosure, where the method includes:
  • S301 Send the obtained popup information to the electronic device.
  • Step S301 is to send the acquired multiple pop-up information to a plurality of electronic devices, so as to ensure that sufficient pop-up information and tag information corresponding to the pop-up information are obtained.
  • an electronic device can receive one piece of popup information, or multiple pieces of popup information.
  • This embodiment of the present application does not limit this.
  • Each electronic device displays the received popup information through a pop-up window function.
  • If the user views the popup information, the electronic device records that the user views the popup information; if the user does not view the popup information, the electronic device records that the user does not view the popup information.
  • the pop-up information viewed by the user can be understood as the pop-up information that the user is interested in.
  • S302 Receive a corresponding relationship between pop-up information fed back by the electronic device and whether the user views the pop-up information.
  • Step S302 is to receive the correspondence between the pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information.
  • After recording whether the user views the pop-up information, the electronic device sends the pop-up information and the record of whether the user views the pop-up information to the device that constructs the training set.
  • Step S303 is to determine a training set according to the received correspondence.
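  • A minimal sketch of how the received correspondence (pop-up information paired with whether the user viewed it) might be turned into a labelled training set is shown below; the field names are illustrative assumptions, while the 1 = viewed / 0 = not viewed labelling follows the marking described later.

```python
# Sketch only: build a labelled training set from the correspondence
# returned by electronic devices (S301-S303). Field names and the data
# layout are illustrative assumptions.
from typing import Dict, List, Tuple

def build_training_set(feedback: List[Dict]) -> List[Tuple[Dict, int]]:
    training_set = []
    for record in feedback:
        popup_info = record["popup_info"]     # the pop-up information itself
        label = 1 if record["viewed"] else 0  # viewed -> of interest, else not
        training_set.append((popup_info, label))
    return training_set

feedback = [
    {"popup_info": {"app": "shopping", "display_time": "evening"}, "viewed": True},
    {"popup_info": {"app": "weather",  "display_time": "morning"}, "viewed": False},
]
print(build_training_set(feedback))
```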
  • In one implementation, training sets for different countries may be determined, for example, determining a training set for China according to the pop-up information obtained for China, and determining a training set for the United Kingdom according to the pop-up information obtained for the United Kingdom. In this way, pop-up window management models for different countries are obtained based on the training sets for the different countries, which can more accurately identify whether received pop-up information is pop-up information that the user is interested in.
  • the pop-up information for a certain country can be determined by the location of the electronic device displaying the pop-up information; for example, if the location of the electronic device displaying the pop-up information is in China, the pop-up information is determined to be pop-up information for China.
  • the pop-up information for a certain country can also be determined by the display language of the pop-up information. For example, if the pop-up information is displayed in Chinese, the pop-up information is determined to be pop-up information for China.
  • the pop-up information for a certain country may be determined by other methods according to actual needs, which is not limited in the embodiment of the present application.
  • When performing model training, after the training set is acquired, data mining is performed on the pop-up information in the training set with respect to features such as display time and electronic device specifications, and a multi-dimensional feature vector is obtained.
  • the pop-up information in the training set may be converted into a multi-dimensional feature vector according to the display time, the display delay duration, the display location, and the specification of the electronic device used by the user. The display time is the time at which the pop-up information is displayed, and can be divided into working time or rest time, or into morning, afternoon, or evening time; the display delay duration is the length of time for which the pop-up information remains displayed, and can be the average delay duration for which the electronic device displays each piece of pop-up information.
  • If the pop-up information is pop-up information for an application, a one-dimensional feature of the pop-up information can also be obtained based on the frequency with which the user uses the application, and this feature, together with the features obtained according to the display time, the display delay duration, the display location, and the specification of the electronic device used by the user, constitutes the multi-dimensional feature vector, roughly as in the sketch below.
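  • The conversion described above might look roughly like the following sketch; the concrete encodings (time-of-day buckets, scaling, and zero-padding to the 90-dimensional input) are assumptions chosen only to illustrate the idea, not values taken from the publication.

```python
# Sketch only: convert one piece of pop-up information into a numeric
# feature vector from display time, display delay duration, display
# location, device specification and (optionally) app-usage frequency.
# The encodings and the 90-dimension padding are illustrative assumptions.
def popup_to_feature_vector(popup: dict, dim: int = 90) -> list:
    time_buckets = {"morning": 0.0, "afternoon": 0.5, "evening": 1.0}
    features = [
        time_buckets.get(popup["display_time"], 0.0),  # display time bucket
        popup["display_delay_s"] / 60.0,               # display delay duration, scaled
        popup["display_location_code"],                # e.g. numeric region code
        popup["screen_inches"] / 10.0,                 # one device-specification feature
        popup.get("app_usage_per_day", 0) / 24.0,      # optional app-usage frequency
    ]
    features += [0.0] * (dim - len(features))          # pad to the input-layer dimension
    return features

vec = popup_to_feature_vector(
    {"display_time": "evening", "display_delay_s": 30,
     "display_location_code": 86, "screen_inches": 6.1, "app_usage_per_day": 5}
)
print(len(vec))  # 90
```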
  • When performing the model training, after the training set is acquired, the pop-up information in the training set can be marked according to the pop-up information included in the training set and the information indicating whether the pop-up information is pop-up information that the user is interested in.
  • the label 1 indicates that the pop-up information is pop-up information that is of interest to the user, and the label 0 indicates that the pop-up information is pop-up information that is not of interest to the user.
  • the pop-up management model is trained by the back propagation algorithm, and the parameters of the pop-up management model are repeatedly adjusted until the correct rate of the output of the pop-up management model reaches the threshold.
  • Before training the pop-up management model, the parameters of the pop-up management model may be randomly initialized, or the initial parameters of the pop-up management model may be set according to experience.
  • the initial parameters of the pop-up management model may also be initialized by other means, which is not limited in this application.
  • By training the pop-up management model in this way, the pop-up management model obtained by the training can more accurately identify whether pop-up information is pop-up information that the user is interested in.
  • Based on the above model training method, the embodiment of the present application further provides another model training method.
  • the model training method can include the following steps.
  • Step 01: A training set is obtained.
  • the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, wherein the label information is information indicating that the popup information is popup information of interest to the user or the popup information indicating that the popup information is not of interest to the user. information.
  • Step 02: The pop-up information in the training set is converted into a feature vector, and the tag information corresponding to the pop-up information is marked for the pop-up information in the training set.
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • Step 03: Acquire a preset deep neural network, and initialize the parameters of the deep neural network as the target parameters.
  • the structure of the deep neural network can be referred to the description in step S101.
  • the parameters of the deep neural network constitute a set of parameters, which can be represented by θ_i.
  • the parameters of the initialization can be set according to actual needs and experience.
  • the training-related hyperparameters, such as the learning rate, the gradient descent algorithm, and the back-propagation algorithm, may be appropriately set; various manners in the related art may be used, and detailed descriptions are not provided herein.
  • The execution order of step 01 and step 03 is not limited in the embodiment of the present application.
  • Step 04: The feature vector of each popup information included in the training set is input into the deep neural network to obtain an output result of each popup information.
  • the output result of each popup information is information indicating popup information that the popup information is of interest to the user or information indicating that the popup information is not popup information of interest to the user.
  • Specifically, for each piece of input popup information, a first probability and a second probability are obtained, where the first probability is the probability that the input popup information is popup information that the user is interested in, and the second probability is the probability that the input popup information is not popup information that the user is interested in. If the first probability is greater than the second probability, it is determined that the output result corresponding to the popup information is information indicating that the input popup information is popup information of interest to the user; otherwise, it is determined that the output result corresponding to the popup information is information indicating that the input popup information is not popup information of interest to the user.
  • The first time step 04 is performed, the current parameter set is θ_1; in later iterations, the current parameter set θ_i is obtained by adjusting the parameter set θ_{i-1} used last time, as described below.
  • Step 05: Calculate the pop-up information loss value according to the output result of each pop-up information and the tag information corresponding to the pop-up information included in the training set.
  • the Mean Squared Error (MSE) formula can be used as the loss function to obtain the loss value L(θ_i), as shown in the following formula:
  • L(θ_i) = (1/H) · Σ_{j=1..H} ( f(I_j; θ_i) - X_j )^2
  • where H represents the number of pop-up information selected from the preset training set in a single training, I_j represents the feature vector of the j-th pop-up information, f(I_j; θ_i) represents the output result of the deep neural network for the j-th pop-up information under the parameter set θ_i, X_j represents the label of the j-th pop-up information, and i is the count of the number of times step 04 has currently been performed.
  • Step 06: According to the pop-up information loss value, determine whether the deep neural network adopting the target parameters converges; if not, perform step 07; if it converges, perform step 08.
  • In one implementation, convergence may be determined when the loss value is less than a preset loss-value threshold.
  • In another implementation, convergence may be determined when the difference between the current loss value and the previously calculated loss value is less than a preset change threshold. This is not limited here.
  • Step 07: The parameters of the deep neural network are adjusted, the adjusted parameters are used as the target parameters, and the process returns to step 04.
  • the back propagation algorithm can be used to adjust the parameters in the current parameter set θ_i to obtain an adjusted parameter set.
  • Step 08: The deep neural network using the target parameters is used as the popup management model.
  • That is, the current parameter set θ_i is taken as the final output parameter set θ_final, and the deep neural network using the final parameter set θ_final is used as the trained pop-up management model.
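  • Steps 03 to 08 could be realized roughly as in the sketch below; PyTorch, stochastic gradient descent, and the concrete thresholds are assumptions, and popup_model refers to the illustrative network sketched earlier.

```python
# Sketch only: training loop for steps 03-08 (initialize parameters,
# forward pass, MSE loss L(theta_i), convergence test, back-propagation).
# PyTorch, SGD and the concrete thresholds are illustrative assumptions;
# `popup_model` is the network sketched earlier.
import torch
import torch.nn as nn

def train_popup_model(popup_model, feature_vectors, labels,
                      loss_threshold=0.05, max_iters=1000, lr=0.01):
    X = torch.tensor(feature_vectors, dtype=torch.float32)        # H x 90 feature vectors I_j
    y = torch.tensor(labels, dtype=torch.float32).unsqueeze(1)    # labels X_j in {0, 1}
    criterion = nn.MSELoss()                                      # L(theta_i) = mean squared error
    optimizer = torch.optim.SGD(popup_model.parameters(), lr=lr)  # step 03: target parameters

    for i in range(max_iters):
        outputs = popup_model(X)          # step 04: output result of each popup information
        loss = criterion(outputs, y)      # step 05: pop-up information loss value
        if loss.item() < loss_threshold:  # step 06: convergence test
            break                         # step 08: keep the current target parameters
        optimizer.zero_grad()
        loss.backward()                   # step 07: back-propagation ...
        optimizer.step()                  # ... adjust parameters, use them as target parameters
    return popup_model                    # the trained pop-up window management model
```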
  • By training the pop-up management model in this way, the pop-up management model obtained by the training can more accurately identify whether pop-up information is pop-up information that the user is interested in.
  • the embodiment of the present application provides an information processing method according to the pop-up management model obtained by the above training.
  • FIG. 4 is a schematic diagram of a first process of an information processing method according to an embodiment of the present disclosure, where the method includes the following steps.
  • S401: Acquire the to-be-processed pop-up information to be sent to the electronic device.
  • Step S401 is to obtain the popup information to be processed.
  • The to-be-processed pop-up information sent to the electronic device is the to-be-processed pop-up information to be sent to the target electronic device.
  • the pop-up information to be processed may be determined by intercepting the pop-up information sent by other devices to the electronic device.
  • For example, the pop-up information that the server of an application sends to the electronic device for that application can be obtained, and the obtained pop-up information is used as the pop-up information to be processed.
  • For another example, if the electronic device is a mobile phone and another mobile phone makes a call to it, the unfamiliar incoming call can be obtained, and the obtained unfamiliar incoming call is used as the pop-up information to be processed.
  • For another example, if the electronic device is a mobile phone and another mobile phone sends a short message to it, the short message can be obtained, and the obtained short message is used as the pop-up information to be processed.
  • S402: Input the popup information to be processed into the popup management model.
  • the pop-up window management model is: a model constructed based on a deep neural network for determining whether input pop-up information is pop-up information that is of interest to the user, and the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold.
  • the pop-up management model described above can be obtained by training in the following manner:
  • the pop-up window management model is constructed based on a deep neural network; the modeling unit of the pop-up window management model is: information indicating whether the pop-up information is of interest to the user;
  • obtaining a training set, converting the pop-up information in the training set into a feature vector, and marking, for the pop-up information in the training set, a label indicating pop-up information that the user is interested in or pop-up information that the user is not interested in;
  • the pop-up management model is trained using the obtained feature vectors and labels.
  • the pop-up management model can be obtained by training in the following manner:
  • acquiring a training set, where the training set includes a plurality of popup information and label information corresponding to the plurality of popup information, and the label information is information indicating that the popup information is popup information that is of interest to the user or information indicating that the popup information is not popup information of interest to the user;
  • the feature vector of each popup information included in the training set is input into the depth neural network to obtain an output result of each popup information;
  • the output result of each popup information is information indicating that the popup information is popup information of interest to the user or information indicating that the popup information is not popup information of interest to the user.
  • the deep neural network with the target parameters will be used as the pop-up management model.
  • the step of converting popup information in the training set into a feature vector includes:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network includes an input layer, an abstraction layer, and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the dimension of the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information of interest to the user is pop-up information viewed by the user.
  • the training set is determined by:
  • the training set is determined according to the received correspondence.
  • S403: If the output result of the popup management model is information indicating that the popup information to be processed is popup information that is of interest to the user, send the popup information to be processed to the target electronic device.
  • the target electronic device displays the information of the pop-up window to be processed through the pop-up window function.
  • In this case, the to-be-processed pop-up information is pop-up information that is of interest to the user, and displaying it is convenient for the user to view and improves the user experience.
  • FIG. 5 is a second schematic flowchart of an information processing method according to an embodiment of the present application. Based on FIG. 4, the method may further include:
  • S404: If the output result of the popup management model is information indicating that the popup information to be processed is popup information that is not of interest to the user, refuse to send the popup information to be processed to the target electronic device.
  • the refusal to send the to-be-processed pop-up information to the target electronic device may be: discarding the pending pop-up information to avoid occupying excessive storage space.
  • Alternatively, refusing to send the to-be-processed pop-up information to the target electronic device may be: intercepting the pending pop-up information, not transmitting it to the target electronic device, and recording the pending pop-up information.
  • In this case, prompt information can be periodically sent to the target electronic device to inform the user of the pop-up information that has been intercepted, so that the user can process the recorded pop-up information in a timely manner.
  • the feature information of the popup window information to be processed may also be recorded, such as an incoming call of a certain number, popup information for an application, and Weather SMS messages, etc.
  • the prompt information may be periodically sent to the electronic device, where the prompt information carries the feature information of the recorded pop-up information, and based on the feature information, the user can determine whether the pop-up information to be processed is intercepted in time.
  • After the to-be-processed pop-up information is input into the pop-up window management model, an output result of the popup management model is obtained, and the correspondence between the output result and the popup information to be processed may be obtained. A minimal sketch of this decision flow is given below.
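  • The following is a sketch only of the S402-S404 decision flow; the 0.5 decision threshold, the send helper, and the interception log are assumptions, and popup_model and popup_to_feature_vector refer to the earlier illustrative sketches.

```python
# Sketch only: S402-S404 decision flow. The 0.5 decision threshold and the
# send/record helpers are illustrative assumptions; `popup_model` and
# `popup_to_feature_vector` refer to the earlier sketches.
import torch

intercepted_log = []  # recorded pending pop-up information (see S404)

def process_popup(popup_info, popup_model, send_to_device):
    vec = torch.tensor([popup_to_feature_vector(popup_info)], dtype=torch.float32)
    score = popup_model(vec).item()          # S402: input into the pop-up management model
    if score > 0.5:                          # output indicates pop-up information of interest
        send_to_device(popup_info)           # S403: send to the target electronic device
    else:
        intercepted_log.append(popup_info)   # S404: refuse to send and record it
```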
  • In the solutions provided by the embodiments of the present application, a pop-up window management model is constructed based on a deep neural network, and the pop-up window management model is used to determine whether input pop-up information is pop-up information that is of interest to the user. The acquired to-be-processed pop-up information is input into the pop-up window management model; if the output result of the pop-up window management model indicates that the to-be-processed pop-up information is pop-up information of interest to the user, the to-be-processed pop-up information is sent to the electronic device, and the electronic device displays the pop-up information through the pop-up window function. In this way, the amount of pop-up information that is not of interest to the user and is sent to the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved.
  • the embodiment of the present application further provides an information processing device and a model training device.
  • FIG. 6 is a schematic structural diagram of a model training apparatus according to an embodiment of the present disclosure, where the apparatus includes:
  • the building unit 601 is configured to construct a pop-up window management model based on the deep neural network;
• the modeling unit of the pop-up window management model is: information indicating whether pop-up information is of interest to the user; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• the converting unit 602 is configured to acquire a training set, convert the pop-up information in the training set into feature vectors, and label the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
  • a training unit 603 is configured to train the pop-up management model using feature vectors and tags.
  • the above construction unit 601 is a construction module
  • the conversion unit is a conversion module
  • the training unit is a training module.
  • the converting unit 602 is specifically configured to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
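• As a small, hedged sketch of one possible encoding of the attributes listed above (display time, display delay duration, display location, and device specifications) into a fixed-length numeric feature vector; the bucketing, vocabularies, and field names are illustrative assumptions only.

```python
import numpy as np

# Assumed categorical vocabularies; a real system would derive these from data.
LOCATIONS = ["home", "work", "other"]
DEVICE_SPECS = ["low_end", "mid_range", "high_end"]

def popup_to_feature_vector(display_hour, delay_seconds, location, device_spec):
    """Convert one piece of pop-up information into a numeric feature vector."""
    # Display time: one-hot over morning / afternoon / evening buckets.
    time_bucket = [1.0 if lo <= display_hour < hi else 0.0
                   for lo, hi in [(6, 12), (12, 18), (18, 24)]]
    # Display delay duration: normalised delay before the user views the popup.
    delay = [min(delay_seconds, 3600.0) / 3600.0]
    # Display location and device specification: simple one-hot encodings.
    loc = [1.0 if location == l else 0.0 for l in LOCATIONS]
    spec = [1.0 if device_spec == s else 0.0 for s in DEVICE_SPECS]
    return np.array(time_bucket + delay + loc + spec, dtype=np.float32)

# Example: an evening popup viewed after 90 s on a mid-range phone at home.
vec = popup_to_feature_vector(display_hour=20, delay_seconds=90,
                              location="home", device_spec="mid_range")
```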
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
  • the model training apparatus may further include: a determining unit, configured to determine a training set;
  • the determining unit may include:
  • a sending subunit configured to send the obtained plurality of popup information to the plurality of electronic devices; so that the plurality of electronic devices display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving subunit configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
• an adding subunit configured to determine a training set according to the received correspondence.
  • the determining unit is a determining module
  • the sending subunit is a sending submodule
  • the receiving subunit is a receiving submodule
  • the joining subunit is a joining submodule
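• As a rough, hedged illustration of how the determining unit described above could assemble a training set from device feedback, the Python sketch below sends pop-up information to devices and records whether each user viewed it; the record format and the `show_popup` interface are assumptions for illustration.

```python
def build_training_set(popups, devices):
    """Collect (popup, label) pairs by sending popups to devices and
    recording whether each user viewed the displayed popup.

    devices: assumed iterable of objects exposing
             show_popup(popup) -> bool  (True if the user viewed it).
    """
    training_set = []
    for device in devices:
        for popup in popups:
            viewed = device.show_popup(popup)   # displayed via the pop-up function
            label = 1 if viewed else 0          # 1: interested (viewed), 0: not interested
            training_set.append((popup, label)) # correspondence returned by the device
    return training_set
```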
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • the embodiment of the present application further provides a model training device according to the above model training method embodiment.
  • the device includes:
• a first acquiring module configured to acquire a training set, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, and the label information is information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that the pop-up information is not pop-up information that the user is interested in;
  • a conversion module configured to convert the popup information in the training set into a feature vector, and mark the label information corresponding to the popup information for the popup information in the training set;
• a second acquiring module configured to acquire a preset deep neural network, and initialize the parameters of the deep neural network as target parameters;
• an input module configured to input the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, where the output result for each piece of pop-up information is information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
  • a calculation module configured to calculate a pop-up information loss value according to the output result of each pop-up information and the label information corresponding to the pop-up information included in the training set;
  • a judging module configured to determine whether the deep neural network adopting the target parameter converges according to the pop-up information loss value
• a processing module configured to: if the judgment result of the judging module is no, adjust the parameters of the deep neural network and take the adjusted parameters as the target parameters; if the judgment result of the judging module is yes, use the deep neural network with the target parameters as the pop-up window management model.
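• The input, calculation, judging, and processing modules above map naturally onto a standard supervised training loop. The PyTorch sketch below is a minimal, hedged illustration of that loop, using mean squared error as the pop-up information loss and a simple loss-threshold convergence test; the threshold, optimizer, and learning rate are assumptions rather than values taken from the application.

```python
import torch
import torch.nn as nn

def train_popup_model(model, features, labels, max_epochs=200, loss_threshold=1e-3):
    """features: (N, D) float tensor; labels: (N, 1) float tensor of 0/1 labels."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # assumed optimizer settings
    criterion = nn.MSELoss()                                  # pop-up information loss value
    for _ in range(max_epochs):
        optimizer.zero_grad()
        outputs = model(features)            # output result for each piece of pop-up information
        loss = criterion(outputs, labels)    # compare with the corresponding label information
        if loss.item() < loss_threshold:     # judge convergence from the loss value
            break                            # converged: keep the current (target) parameters
        loss.backward()                      # otherwise adjust the parameters by back-propagation
        optimizer.step()
    return model                             # deep neural network with the target parameters
```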
  • the conversion module can be specifically used to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
  • the foregoing model training apparatus may further include: a determining module, configured to determine a training set; and the determining module may include:
  • a sending submodule configured to send the obtained multiple popup information to the plurality of electronic devices; so that the plurality of electronic devices display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving submodule configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
• an adding submodule configured to determine a training set according to the received correspondence.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • FIG. 7 is a schematic diagram of a first structure of an information processing apparatus according to an embodiment of the present disclosure, where the apparatus includes:
  • An obtaining unit 701 configured to acquire popup information to be processed
  • the input unit 702 is configured to input the to-be-processed pop-up information into the pop-up management model;
  • the pop-up management model is: a model based on the deep neural network and configured to determine whether the input pop-up information is pop-up information of interest to the user
  • the pop-up information that the user is interested in is information with a degree of attention greater than a threshold;
• the sending unit 703 is configured to: if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, send the to-be-processed pop-up information to the target electronic device, so that the target electronic device displays the to-be-processed pop-up information through its pop-up window function.
  • the obtaining unit 701 is an acquiring module
  • the input unit 702 is an input module
  • the sending unit 703 is a sending module.
  • the apparatus may further include:
  • the rejecting unit 704 is configured to: if the output result of the popup management model is information indicating that the popup information to be processed is popup information that is not of interest to the user, refuse to send the to-be-processed popup information to the target electronic device.
  • the rejection unit 704 is a rejection module.
  • the information processing apparatus may further include: a training unit, configured to acquire a popup management model; in this case, the training unit may include:
• a construction subunit configured to construct the pop-up window management model based on a deep neural network, where the modeling unit of the pop-up window management model is: information indicating whether pop-up information is of interest to the user;
• a conversion subunit configured to acquire a training set, convert the pop-up information in the training set into feature vectors, and label the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
• a training subunit configured to train the pop-up window management model using the feature vectors and the labels.
  • the training unit is a training module
  • the construction sub-unit is a construction sub-module
  • the conversion sub-unit is a conversion sub-module
  • the training sub-unit is a training sub-module.
  • the information processing apparatus may further include: a training module, configured to obtain a popup management model; in this case, the training module may include:
• a first acquiring submodule configured to acquire a training set, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, and the label information is information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that the pop-up information is not pop-up information that the user is interested in;
  • a conversion submodule configured to convert the popup information in the training set into a feature vector, and mark the label information corresponding to the popup information for the popup information in the training set;
• a second acquiring submodule configured to acquire a preset deep neural network, and initialize the parameters of the deep neural network as target parameters;
• an input submodule configured to input the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, where the output result for each piece of pop-up information is information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
  • a calculation submodule configured to calculate a popup information loss value according to the output result of each popup information and the label information corresponding to the popup information included in the training set;
  • a determining sub-module configured to determine whether the deep neural network using the target parameter converges according to the pop-up information loss value
• a processing submodule configured to: if the judgment result of the judging submodule is no, adjust the parameters of the deep neural network and take the adjusted parameters as the target parameters; if the judgment result of the judging submodule is yes, use the deep neural network with the target parameters as the pop-up window management model.
  • the conversion subunit can be specifically used to:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the apparatus may further include:
• the adding unit 905 is configured to add the correspondence between the output result of the pop-up window management model and the to-be-processed pop-up information to the training set after the to-be-processed pop-up information is input into the pop-up window management model.
• the above-mentioned adding unit may also be referred to as an adding module.
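• A minimal, hedged sketch of the adding unit's behaviour, under the assumption that the training set is kept as a simple in-memory list of (features, output) pairs:

```python
def add_model_feedback(training_set, popup_features, model_output):
    """Append the correspondence between the model's output result and the
    to-be-processed pop-up information to the training set, so it can be
    used when the pop-up window management model is retrained later."""
    training_set.append((popup_features, model_output))
    return training_set
```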
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
  • the information processing apparatus may further include: a determining unit, configured to determine a training set; and the determining unit may include:
  • a sending subunit configured to send the obtained plurality of popup information to the plurality of electronic devices; so that the plurality of electronic devices respectively display the received popup information through the popup function, and record whether the user views the received popup information;
  • a receiving subunit configured to receive a correspondence between pop-up information returned by the plurality of electronic devices and whether the user views the pop-up information
• an adding subunit configured to determine a training set according to the received correspondence.
  • the determining unit is a determining module
  • the sending subunit is a sending submodule
  • the receiving subunit is a receiving submodule
  • the joining subunit is a joining submodule
• in the embodiment of the present application, a pop-up window management model is constructed based on a deep neural network and is used to determine whether input pop-up information is pop-up information that the user is interested in; the acquired to-be-processed pop-up information is input into the pop-up window management model, and if the output result of the model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the electronic device, which displays it through its pop-up window function. In this way, the amount of pop-up information that the user is not interested in received by the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved.
• the embodiment of the present application further provides an electronic device, as shown in FIG. 10, including a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with each other through the communication bus 1004.
  • the processor 1001 is configured to implement a model training method when executing a program stored on the memory 1003.
  • the model training methods include:
• the pop-up window management model is constructed based on the deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • the embodiment of the present application further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus;
  • a memory for storing a computer program
  • a processor is used to implement a model training method when executing a program stored on a memory.
  • the model training methods include:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
• the embodiment of the present application further provides an electronic device, as shown in FIG. 11, including a processor 1101, a communication interface 1102, a memory 1103, and a communication bus 1104, where the processor 1101, the communication interface 1102, and the memory 1103 communicate with each other through the communication bus 1104.
  • the processor 1101 is configured to implement an information processing method when executing a program stored in the memory 1103.
  • Information processing methods include:
• to-be-processed pop-up information is acquired and input into the pop-up window management model;
• the pop-up window management model is: a model constructed based on a deep neural network and used to determine whether input pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the target electronic device, so that the target electronic device displays the to-be-processed pop-up information through its pop-up window function.
  • the information processing method may further include:
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is not interested in, the to-be-processed pop-up information is refused to be sent to the target electronic device.
  • the pop-up management model can be trained in the following ways:
• the pop-up window management model is constructed based on a deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the pop-up management model can be trained in the following ways:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the method further includes:
  • the corresponding relationship between the output of the popup management model and the popup information to be processed is added to the training set.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• in the embodiment of the present application, a pop-up window management model is constructed based on a deep neural network and is used to determine whether input pop-up information is pop-up information that the user is interested in; the acquired to-be-processed pop-up information is input into the pop-up window management model, and if the output result of the model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the electronic device, which displays it through its pop-up window function. In this way, the amount of pop-up information that the user is not interested in received by the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved.
  • the communication bus may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like.
  • the above communication interface is used for communication between the above electronic device and other devices.
  • the above memory may include a RAM (Random Access Memory), and may also include NVM (Non-Volatile Memory), such as at least one disk storage.
  • the memory may also be at least one storage device located away from the aforementioned processor.
• the processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; or a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • Model training methods include:
• the pop-up window management model is constructed based on the deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • Model training methods include:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • the embodiment of the present application further provides a storage medium, where the computer program is stored in the storage medium, and when the computer program is executed by the processor, the information processing method is implemented.
  • Information processing methods include:
• to-be-processed pop-up information is acquired and input into the pop-up window management model;
• the pop-up window management model is: a model constructed based on a deep neural network and used to determine whether input pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the target electronic device, so that the target electronic device displays the to-be-processed pop-up information through its pop-up window function.
  • the foregoing information processing method may further include:
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is not interested in, the to-be-processed pop-up information is refused to be sent to the target electronic device.
  • the pop-up management model can be trained in the following ways:
• the pop-up window management model is constructed based on a deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the pop-up management model can be trained in the following ways:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the method further includes:
  • the corresponding relationship between the output of the popup management model and the popup information to be processed is added to the training set.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• in the embodiment of the present application, a pop-up window management model is constructed based on a deep neural network and is used to determine whether input pop-up information is pop-up information that the user is interested in; the acquired to-be-processed pop-up information is input into the pop-up window management model, and if the output result of the model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the electronic device, which displays it through its pop-up window function. In this way, the amount of pop-up information that the user is not interested in received by the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved.
  • Model training methods include:
• the pop-up window management model is constructed based on the deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • Model training methods include:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• by training the pop-up window management model in this way and then using the trained pop-up window management model, whether pop-up information is pop-up information that the user is interested in can be identified more accurately.
  • the embodiment of the present application further provides a computer program, and the information processing method is implemented when the computer program is executed by the processor.
  • Information processing methods include:
• to-be-processed pop-up information is acquired and input into the pop-up window management model;
• the pop-up window management model is: a model constructed based on a deep neural network and used to determine whether input pop-up information is pop-up information that the user is interested in; the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the target electronic device, so that the target electronic device displays the to-be-processed pop-up information through its pop-up window function.
  • the foregoing information processing method may further include:
• if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is not interested in, the to-be-processed pop-up information is refused to be sent to the target electronic device.
  • the pop-up management model can be trained in the following ways:
• the pop-up window management model is constructed based on a deep neural network; the modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in;
• a training set is acquired, the pop-up information in the training set is converted into feature vectors, and the pop-up information in the training set is labeled as pop-up information that the user is interested in or pop-up information that the user is not interested in; the pop-up window management model is trained using the feature vectors and the labels.
  • the pop-up management model can be trained in the following ways:
• a training set is acquired, where the training set includes a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
• the pop-up information in the training set is converted into feature vectors and labeled with its corresponding label information; a preset deep neural network is acquired, and the parameters of the deep neural network are initialized as target parameters;
• the feature vector of each piece of pop-up information included in the training set is input into the deep neural network to obtain an output result for each piece of pop-up information, the output result indicating whether the pop-up information is pop-up information that the user is interested in;
• a pop-up information loss value is calculated from the output results and the corresponding label information, and whether the deep neural network using the target parameters converges is judged according to the loss value; if it does not converge, the parameters are adjusted and taken as the target parameters, and the inputting step is performed again;
• if it converges, the deep neural network with the target parameters is used as the pop-up window management model.
  • the step of converting the popup information in the training set into a feature vector may include:
  • the pop-up information in the training set is converted into a feature vector according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
  • the deep neural network comprises an input layer, an abstraction layer and an output layer;
  • the input layer of the deep neural network includes the same number of neurons as the feature vector; the activation function of the abstraction layer is a ReLu function; and the activation function of the output layer is a sigmoid function.
  • the method further includes:
  • the corresponding relationship between the output of the popup management model and the popup information to be processed is added to the training set.
  • the pop-up information that the user is interested in is the pop-up information viewed by the user.
• the training set can be determined according to the correspondence, returned by a plurality of electronic devices that display the sent pop-up information through their pop-up window functions, between each piece of pop-up information and whether the user viewed it.
• in the embodiment of the present application, a pop-up window management model is constructed based on a deep neural network and is used to determine whether input pop-up information is pop-up information that the user is interested in; the acquired to-be-processed pop-up information is input into the pop-up window management model, and if the output result of the model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the electronic device, which displays it through its pop-up window function. In this way, the amount of pop-up information that the user is not interested in received by the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
• the division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
• the technical solution of the present application, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present application.
• the foregoing storage medium includes: a USB flash drive (U disk), a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

Embodiments of the present application provide information processing and model training methods, apparatuses, an electronic device, and a storage medium. The method includes: acquiring to-be-processed pop-up information to be sent to an electronic device; inputting the to-be-processed pop-up information into a pop-up window management model, the pop-up window management model being a model constructed on the basis of a deep neural network and used for determining whether input pop-up information is pop-up information that a user is interested in, where the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold; and, if the output result of the pop-up window management model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, sending the to-be-processed pop-up information to the electronic device, so that the electronic device displays the to-be-processed pop-up information through a pop-up window function. Applying the embodiments of the present application solves the problem that an electronic device displays a large amount of pop-up information that the user is not interested in, and improves the user experience.

Description

Information processing and model training methods, apparatuses, electronic device, and storage medium
This application claims priority to Chinese patent application No. 201710525652.0, filed with the China Patent Office on June 30, 2017 and entitled "Information processing and model training methods, apparatuses, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of Internet technologies, and in particular to information processing and model training methods, apparatuses, an electronic device, and a storage medium.
Background
At present, in order to allow users to obtain and view information of interest in time, a pop-up window function is provided on electronic devices such as smartphones, tablet computers, and notebook computers. After an electronic device receives information, it displays the information through the pop-up window function; information that needs to be displayed through the pop-up window function may be referred to as pop-up information.
With the development of technology, pop-up information in the network is becoming more and more abundant, and a large amount of pop-up information that users are not interested in is mixed into it. If all of this pop-up information is sent to electronic devices, the direct result is that the electronic devices display, through the pop-up window function, a large amount of pop-up information that the user is not interested in, which affects the user's normal use of the electronic device and leads to a poor user experience.
Summary
The purpose of the embodiments of the present application is to provide information processing and model training methods, apparatuses, an electronic device, and a storage medium, so as to solve the problem that an electronic device displays a large amount of pop-up information that the user is not interested in. The specific technical solutions are as follows.
In a first aspect, an embodiment of the present application discloses an information processing method, the method including:
acquiring to-be-processed pop-up information;
inputting the to-be-processed pop-up information into a pop-up window management model, the pop-up window management model being a model constructed on the basis of a deep neural network and used for determining whether input pop-up information is pop-up information that a user is interested in, where the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, sending the to-be-processed pop-up information to a target electronic device, so that the target electronic device displays the to-be-processed pop-up information through a pop-up window function.
Optionally, the method further includes:
if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is not interested in, refusing to send the to-be-processed pop-up information to the target electronic device.
Optionally, the pop-up window management model is obtained by training in the following manner:
constructing a pop-up window management model on the basis of a deep neural network, the modeling unit of the pop-up window management model being: information indicating whether pop-up information is pop-up information that the user is interested in;
acquiring a training set, converting the pop-up information in the training set into feature vectors, and labeling the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
training the pop-up window management model using the feature vectors and the labels.
Optionally, the pop-up window management model is obtained by training in the following manner:
acquiring a training set, the training set including a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
converting the pop-up information in the training set into feature vectors, and labeling the pop-up information in the training set with the label information corresponding to that pop-up information;
acquiring a preset deep neural network, and initializing the parameters of the deep neural network as target parameters;
inputting the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, the output result being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
calculating a pop-up information loss value according to the output result of each piece of pop-up information and the label information corresponding to that pop-up information in the training set;
judging, according to the pop-up information loss value, whether the deep neural network using the target parameters converges;
if it does not converge, adjusting the parameters of the deep neural network, taking the adjusted parameters as the target parameters, and returning to the step of inputting each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information;
if it converges, using the deep neural network with the target parameters as the pop-up window management model.
Optionally, the step of converting the pop-up information in the training set into feature vectors includes:
converting the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer of the deep neural network is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU (Rectified Linear Unit) function, and the activation function of the output layer is the sigmoid function.
Optionally, after the step of inputting the to-be-processed pop-up information into the pop-up window management model, the method further includes:
adding the correspondence between the output result of the pop-up window management model and the to-be-processed pop-up information to the training set.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the training set is determined in the following manner:
sending the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices respectively display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
receiving the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
determining the training set according to the received correspondence.
In a second aspect, an embodiment of the present application discloses a model training method, the method including:
constructing a pop-up window management model on the basis of a deep neural network, the modeling unit of the pop-up window management model being: information indicating whether pop-up information is pop-up information that the user is interested in, where the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
acquiring a training set, converting the pop-up information in the training set into feature vectors, and labeling the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
training the pop-up window management model using the feature vectors and the labels.
Optionally, the step of converting the pop-up information in the training set into feature vectors includes:
converting the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the training set is determined in the following manner:
sending the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices respectively display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
receiving the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
determining the training set according to the received correspondence.
In a third aspect, an embodiment of the present application discloses a model training method, the method including:
acquiring a training set, the training set including a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
converting the pop-up information in the training set into feature vectors, and labeling the pop-up information in the training set with the label information corresponding to that pop-up information;
acquiring a preset deep neural network, and initializing the parameters of the deep neural network as target parameters;
inputting the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, the output result being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
calculating a pop-up information loss value according to the output result of each piece of pop-up information and the label information corresponding to that pop-up information in the training set;
judging, according to the pop-up information loss value, whether the deep neural network using the target parameters converges;
if it does not converge, adjusting the parameters of the deep neural network, taking the adjusted parameters as the target parameters, and returning to the step of inputting each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information;
if it converges, using the deep neural network with the target parameters as the pop-up window management model.
Optionally, the step of converting the pop-up information in the training set into feature vectors includes:
converting the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the training set is determined in the following manner:
sending the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices respectively display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
receiving the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
determining the training set according to the received correspondence.
In a fourth aspect, an embodiment of the present application discloses an information processing apparatus, the apparatus including:
an acquiring module configured to acquire to-be-processed pop-up information;
an input module configured to input the to-be-processed pop-up information into a pop-up window management model, the pop-up window management model being a model constructed on the basis of a deep neural network and used for determining whether input pop-up information is pop-up information that the user is interested in, where the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
a sending module configured to: if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is interested in, send the to-be-processed pop-up information to a target electronic device, so that the target electronic device displays the to-be-processed pop-up information through a pop-up window function.
Optionally, the apparatus further includes:
a rejecting module configured to: if the output result of the pop-up window management model is information indicating that the to-be-processed pop-up information is pop-up information that the user is not interested in, refuse to send the to-be-processed pop-up information to the target electronic device.
Optionally, the apparatus further includes a training module configured to obtain the pop-up window management model by training, and the training module includes:
a construction submodule configured to construct a pop-up window management model on the basis of a deep neural network, the modeling unit of the pop-up window management model being: information indicating whether pop-up information is pop-up information that the user is interested in;
a conversion submodule configured to acquire a training set, convert the pop-up information in the training set into feature vectors, and label the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
a training submodule configured to train the pop-up window management model using the feature vectors and the labels.
Optionally, the apparatus further includes a training module configured to obtain the pop-up window management model by training, and the training module includes:
a first acquiring submodule configured to acquire a training set, the training set including a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
a conversion submodule configured to convert the pop-up information in the training set into feature vectors, and to label the pop-up information in the training set with the label information corresponding to that pop-up information;
a second acquiring submodule configured to acquire a preset deep neural network and initialize the parameters of the deep neural network as target parameters;
an input submodule configured to input the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, the output result being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
a calculation submodule configured to calculate a pop-up information loss value according to the output result of each piece of pop-up information and the label information corresponding to that pop-up information in the training set;
a judging submodule configured to judge, according to the pop-up information loss value, whether the deep neural network using the target parameters converges;
a processing submodule configured to: if the judgment result of the judging submodule is no, adjust the parameters of the deep neural network and take the adjusted parameters as the target parameters; if the judgment result of the judging submodule is yes, use the deep neural network with the target parameters as the pop-up window management model.
Optionally, the conversion submodule is specifically configured to:
convert the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function.
Optionally, the apparatus further includes:
an adding module configured to add the correspondence between the output result of the pop-up window management model and the to-be-processed pop-up information to the training set after the to-be-processed pop-up information is input into the pop-up window management model.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the apparatus further includes a determining module configured to determine the training set, and the determining module includes:
a sending submodule configured to send the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
a receiving submodule configured to receive the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
an adding submodule configured to determine the training set according to the received correspondence.
In a fifth aspect, an embodiment of the present application discloses a model training apparatus, the apparatus including:
a construction module configured to construct a pop-up window management model on the basis of a deep neural network, the modeling unit of the pop-up window management model being: information indicating whether pop-up information is pop-up information that the user is interested in, where the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold;
a conversion module configured to acquire a training set, convert the pop-up information in the training set into feature vectors, and label the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in;
a training module configured to train the pop-up window management model using the feature vectors and the labels.
Optionally, the conversion module is specifically configured to:
convert the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the apparatus further includes a determining module configured to determine the training set, and the determining module includes:
a sending submodule configured to send the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
a receiving submodule configured to receive the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
an adding submodule configured to determine the training set according to the received correspondence.
In a sixth aspect, an embodiment of the present application discloses a model training apparatus, the apparatus including:
a first acquiring module configured to acquire a training set, the training set including a plurality of pieces of pop-up information and label information corresponding to the plurality of pieces of pop-up information, the label information being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
a conversion module configured to convert the pop-up information in the training set into feature vectors, and to label the pop-up information in the training set with the label information corresponding to that pop-up information;
a second acquiring module configured to acquire a preset deep neural network and initialize the parameters of the deep neural network as target parameters;
an input module configured to input the feature vector of each piece of pop-up information included in the training set into the deep neural network to obtain an output result for each piece of pop-up information, the output result being information indicating that the pop-up information is pop-up information that the user is interested in or information indicating that it is not;
a calculation module configured to calculate a pop-up information loss value according to the output result of each piece of pop-up information and the label information corresponding to that pop-up information in the training set;
a judging module configured to judge, according to the pop-up information loss value, whether the deep neural network using the target parameters converges;
a processing module configured to: if the judgment result of the judging module is no, adjust the parameters of the deep neural network and take the adjusted parameters as the target parameters; if the judgment result of the judging module is yes, use the deep neural network with the target parameters as the pop-up window management model.
Optionally, the conversion module is specifically configured to:
convert the pop-up information in the training set into feature vectors according to the display time, the display delay duration, the display location, and the specifications of the electronic device used by the user.
Optionally, the deep neural network includes an input layer, abstraction layers, and an output layer, where the number of neurons in the input layer is the same as the dimension of the feature vector, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function.
Optionally, the pop-up information that the user is interested in is pop-up information viewed by the user.
Optionally, the apparatus further includes a determining module configured to determine the training set, and the determining module includes:
a sending submodule configured to send the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so that the plurality of electronic devices display the received pop-up information through the pop-up window function and record whether the user views the received pop-up information;
a receiving submodule configured to receive the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed the pop-up information;
an adding submodule configured to determine the training set according to the received correspondence.
In a seventh aspect, an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to execute the program stored on the memory so as to implement the steps of any information processing method disclosed in the first aspect.
In an eighth aspect, an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to execute the program stored on the memory so as to implement the steps of any model training method disclosed in the second aspect.
In a ninth aspect, an embodiment of the present application discloses an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to execute the program stored on the memory so as to implement the steps of any model training method disclosed in the third aspect.
In a tenth aspect, an embodiment of the present application discloses a storage medium storing a computer program which, when executed by a processor, implements the steps of any information processing method disclosed in the first aspect.
In an eleventh aspect, an embodiment of the present application discloses a storage medium storing a computer program which, when executed by a processor, implements the steps of any model training method disclosed in the second aspect.
In a twelfth aspect, an embodiment of the present application discloses a storage medium storing a computer program which, when executed by a processor, implements the steps of any model training method disclosed in the third aspect.
In a thirteenth aspect, an embodiment of the present application discloses a computer program which, when executed by a processor, implements the steps of any information processing method disclosed in the first aspect.
In a fourteenth aspect, an embodiment of the present application discloses a computer program which, when executed by a processor, implements the steps of any model training method disclosed in the second aspect.
In a fifteenth aspect, an embodiment of the present application discloses a computer program which, when executed by a processor, implements the steps of any model training method disclosed in the third aspect.
In the embodiments of the present application, a pop-up window management model is constructed on the basis of a deep neural network and is used to determine whether input pop-up information is pop-up information that the user is interested in. The acquired to-be-processed pop-up information is input into the pop-up window management model; if the output result of the model indicates that the to-be-processed pop-up information is pop-up information that the user is interested in, the to-be-processed pop-up information is sent to the electronic device, and the electronic device displays it through the pop-up window function. In this way, the amount of pop-up information that the user is not interested in received by the electronic device is effectively reduced, the problem that the electronic device displays a large amount of pop-up information that the user is not interested in is solved, and the user experience is improved. Of course, implementing any product or method of the present application does not necessarily need to achieve all of the advantages described above at the same time.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a model training method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a pop-up window management model used in an embodiment of the present application;
FIG. 3 is a schematic flowchart of a training set determination method according to an embodiment of the present application;
FIG. 4 is a first schematic flowchart of an information processing method according to an embodiment of the present application;
FIG. 5 is a second schematic flowchart of an information processing method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application;
FIG. 7 is a first schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 8 is a second schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 9 is a third schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
FIG. 10 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 11 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
For ease of understanding, terms appearing in the embodiments of the present application are explained below.
Pop-up information: information displayed by an electronic device through a pop-up window function, for example, push information sent by the server running an application, an incoming call from another electronic device, or an SMS message sent by another electronic device.
Pop-up information that the user is interested in: pop-up information related to the user's behavior habits, namely information whose degree of attention is greater than a threshold. In the embodiments of the present application, the degree of attention may be determined by a click frequency. For example, if the user frequently clicks to browse shopping web pages and the click frequency is greater than the threshold, shopping-related pop-up information may be determined as pop-up information that the user is interested in; as another example, if the frequency with which the user clicks to view the push information of a certain application is greater than the threshold, the push information of that application may be determined as pop-up information that the user is interested in; as yet another example, if the frequency with which the user answers calls from a certain unknown number is greater than the threshold, such an incoming call may be determined as pop-up information that the user is interested in.
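As a small, hedged illustration of the click-frequency interpretation of the degree of attention described above, the sketch below marks a category of pop-up information as "of interest" when the user's click frequency for that category exceeds a threshold; the 0.8 threshold and the data layout are assumptions for illustration only.

```python
def is_interesting_category(clicks, displays, threshold=0.8):
    """Return True if the user's click frequency for a popup category exceeds the threshold."""
    if displays == 0:
        return False
    click_frequency = clicks / displays   # degree of attention approximated by click frequency
    return click_frequency > threshold

# Example: the user clicked 9 of the last 10 shopping popups -> of interest.
print(is_interesting_category(clicks=9, displays=10))  # True
```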
At present, the network contains a large amount of pop-up information that users are or are not interested in. If all of this pop-up information is sent to electronic devices, the direct result is that the electronic devices display, through the pop-up window function, a large amount of pop-up information that the user is not interested in, which affects the user's normal use of the electronic device and leads to a poor user experience.
In order to solve the problem that an electronic device displays a large amount of pop-up information that the user is not interested in and to improve the user experience, the embodiments of the present application provide information processing and model training methods, apparatuses, an electronic device, and a storage medium. The information processing and model training methods and apparatuses may be applied to a cloud server.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a model training method according to an embodiment of the present application. The method includes the following steps.
S101: constructing a pop-up window management model on the basis of a deep neural network.
The modeling unit of the pop-up window management model is: information indicating whether pop-up information is pop-up information that the user is interested in, that is, information indicating pop-up information that the user is interested in and information indicating pop-up information that the user is not interested in. Here, the pop-up information that the user is interested in is information whose degree of attention is greater than a threshold. For example, if the threshold is 0.8 and the degree of attention of a piece of pop-up information is 0.9, since 0.9 > 0.8, that pop-up information is determined to be pop-up information that the user is interested in.
The modeling unit is the type of output result obtained after the pop-up window management model processes the input pop-up information. An initial pop-up window management model is constructed on the basis of a deep neural network.
In an embodiment of the present application, the pop-up information that the user is interested in is information whose degree of attention is 1; for such pop-up information, after the electronic device displays it through the pop-up window function, the user will view it, that is, the pop-up information that the user is interested in is pop-up information viewed by the user. The pop-up information that the user is not interested in is information whose degree of attention is 0; for such pop-up information, after the electronic device displays it through the pop-up window function, the user will not view it, that is, the pop-up information that the user is not interested in is pop-up information not viewed by the user.
In an embodiment of the present application, after the electronic device displays pop-up information through the pop-up window function, the user may view the pop-up information by clicking on it, that is, the pop-up information that the user is interested in is pop-up information clicked by the user. The user may also view pop-up information in other ways, which is not limited in the embodiments of the present application.
A deep neural network is composed of multiple neurons and is a kind of feed-forward neural network.
The deep neural network used to construct the pop-up window management model consists of an input layer, abstraction layers, and an output layer, as shown in FIG. 2. Each layer has neuron inputs and outputs, and the input of a neuron is the output of the neurons of the previous layer. The abstraction layers are used to analyze and process the feature vector of the input information.
In an embodiment of the present application, the numbers of neurons deployed in the input layer, the abstraction layers, and the output layer of the deep neural network used to construct the pop-up window management model may be: the input layer includes 90 neurons; the abstraction part includes 5 layers, where the first layer includes 45 neurons, the second layer includes 30 neurons, the third layer includes 20 neurons, the fourth layer includes 10 neurons, and the fifth layer includes 5 neurons.
In an embodiment of the present application, the activation function of the abstraction layers is the ReLU function, and the activation function of the output layer is the sigmoid function, that is, the output layer uses a sigmoid binary classifier, and the output result is information indicating pop-up information that the user is interested in or information indicating pop-up information that the user is not interested in.
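Based on the layer sizes given above (a 90-neuron input layer, abstraction layers of 45, 30, 20, 10, and 5 neurons with ReLU activations, and a sigmoid output layer acting as a binary classifier), a hedged PyTorch sketch of such a network could look as follows; treating the single sigmoid output as the probability that the input pop-up information is of interest is an assumption of this sketch, not a requirement of the embodiment.

```python
import torch.nn as nn

def build_popup_management_network(input_dim: int = 90) -> nn.Sequential:
    """Deep neural network for the pop-up window management model:
    input layer sized to the feature vector, ReLU abstraction layers, sigmoid output."""
    return nn.Sequential(
        nn.Linear(input_dim, 45), nn.ReLU(),
        nn.Linear(45, 30), nn.ReLU(),
        nn.Linear(30, 20), nn.ReLU(),
        nn.Linear(20, 10), nn.ReLU(),
        nn.Linear(10, 5), nn.ReLU(),
        nn.Linear(5, 1), nn.Sigmoid(),   # binary classifier: interested vs. not interested
    )
```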
In an embodiment of the present application, in order to ensure the accuracy of the output results of the pop-up window management model, the deep neural network may be constructed using different types of neural networks. For example, the neural networks include a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory) network, an RNN (Recurrent Neural Network), and so on, and one or more of them may be used to construct the deep neural network.
S102: acquiring a training set, converting the pop-up information in the training set into feature vectors, and labeling the pop-up information in the training set as pop-up information that the user is interested in or pop-up information that the user is not interested in.
The training set includes a large number of correspondences between pop-up information and information indicating whether that pop-up information is pop-up information that the user is interested in. The information in the training set indicating whether pop-up information is pop-up information that the user is interested in serves as the label information corresponding to that pop-up information.
In the embodiments of the present application, the number of neurons included in the input layer of the pop-up window management model is the same as the dimension of the feature vector; for example, if the input layer includes 90 neurons, the dimension of the feature vector is 90.
The number of neurons included in the input layer of the above pop-up window management model is the same as the dimension of the feature vector. Likewise, for the deep neural network used to construct the pop-up window management model, the number of neurons included in the input layer is the same as the dimension of the feature vector.
In an embodiment of the present application, the pop-up information included in the training set may be preconfigured by the user, or may be obtained from electronic devices that have the pop-up window function.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a training set determination method according to an embodiment of the present application. The method includes:
S301: sending the obtained pop-up information to electronic devices.
Step S301 sends the obtained plurality of pieces of pop-up information to a plurality of electronic devices, so as to ensure that enough pop-up information and corresponding label information is obtained.
One electronic device may receive one piece of pop-up information or multiple pieces of pop-up information, which is not limited in the embodiments of the present application.
Each electronic device displays the received pop-up information through its pop-up window function. In addition, if the user views the pop-up information, the electronic device records that the user viewed the pop-up information; if the user does not view the pop-up information, the electronic device records that the user did not view it. Here, the pop-up information viewed by the user can be understood as pop-up information that the user is interested in.
S302: receiving the correspondence, fed back by the electronic devices, between the pop-up information and whether the user viewed the pop-up information.
Step S302 receives the correspondence, returned by the plurality of electronic devices, between the pop-up information and whether the user viewed that pop-up information.
After recording whether the user viewed the pop-up information, the electronic device sends the record of the pop-up information and whether the user viewed it to the device that constructs the training set.
S303: adding the received correspondence to the training set.
Step S303 determines the training set according to the received correspondence.
There is a large amount of pop-up information in the network. Based on this pop-up information, a large number of correspondences between pop-up information and whether the user viewed that pop-up information can be obtained quickly, and a training set is constructed from these correspondences for subsequent training of the pop-up window management model.
In an embodiment of the present application, when determining the training set, training sets for different countries may be determined. For example, a training set for China is determined from the pop-up information obtained for China, and a training set for the United Kingdom is determined from the pop-up information obtained for the United Kingdom. In this way, pop-up window management models for different countries are trained separately on the basis of the training sets for different countries, so that whether received pop-up information is pop-up information that the user is interested in can be identified more accurately.
Here, pop-up information for a certain country may be determined by the location of the electronic device that displays the pop-up information; for example, if the electronic device that displays the pop-up information is located in China, the pop-up information is determined to be pop-up information for China. Pop-up information for a certain country may also be determined by the display language of the pop-up information; for example, if the pop-up information is displayed in Chinese, it is determined to be pop-up information for China. Pop-up information for a certain country may also be determined in other ways according to actual needs, which is not limited in the embodiments of the present application.
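A brief, hedged sketch of how per-country models could be selected using either the device location or the display language, as described above; the dictionary keys, the language-to-country mapping, and the fallback behaviour are illustrative assumptions.

```python
def select_country_model(models, device_country=None, display_language=None):
    """Pick the pop-up window management model trained for the relevant country."""
    if device_country in models:                     # e.g. location of the displaying device
        return models[device_country]
    language_to_country = {"zh": "CN", "en": "GB"}   # assumed mapping for illustration
    country = language_to_country.get(display_language)
    return models.get(country, models.get("default"))
```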
在进行模型训练时,获取到训练集后,对训练集中的弹窗信息进行时间、电子设备的规格等方面的数据挖掘,获取到多维的特征向量。
在本申请的一个实施例中,可以根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为多维特征向量;其中,显示时间为显示弹窗信息的时间时长,可以分为工作时间或休息时间,或分为上午时间或下午时间或晚上时间等;显示延迟时长为显示弹窗信息后延迟查看的时长,这里显示延迟时长可以为一个电子设备显示各个 弹窗信息的延迟平均时长。
在本申请的一个实施例中,弹窗信息是针对某一应用程序的弹窗信息。这种情况下,还可以基于用户使用应用程序的频率获得弹窗信息的一维的特征向量,再结合根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格获得的特征向量,构成一个多维的特征向量。
另外,在进行模型训练时,获取到训练集后,根据训练集中包括的弹窗信息和指示该弹窗信息是否为用户感兴趣的弹窗信息的信息,可以为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签。例如,若训练集中记录一弹窗信息为用户查看的弹窗信息,则将该弹窗信息标记为1,若训练集中记录一弹窗信息为用户未查看的弹窗信息,则将该弹窗信息标记为0。其中,标签1表示该弹窗信息为用户感兴趣的弹窗信息,标签0表示该弹窗信息为用户不感兴趣的弹窗信息。
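特征转换与打标签的过程可以用如下示意性草图说明;其中各字段名以及把显示时间、显示延迟时长等编码为数值的方式均为示例性假设,实际的特征维数应与输入层神经元个数(如90维)保持一致,此处仅列出若干维度以作说明。

```python
def popup_to_feature(popup):
    """示意性草图:根据显示时间、显示延迟时长、显示地点、电子设备规格等
    构造特征向量;字段名与编码方式均为示例性假设。"""
    return [
        1.0 if 9 <= popup["display_hour"] < 18 else 0.0,  # 显示时间:工作时间/休息时间
        popup["delay_seconds"] / 3600.0,                  # 显示延迟时长(按小时归一化)
        float(popup["location_code"]),                    # 显示地点编码
        float(popup["device_spec_code"]),                 # 电子设备规格编码
        popup.get("app_usage_freq", 0.0),                 # 应用程序使用频率(可选维度)
    ]

def popup_to_label(popup):
    """标签:用户查看过的弹窗信息记为1(感兴趣),未查看的记为0(不感兴趣)。"""
    return 1 if popup["viewed"] else 0
```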
S103:使用特征向量和标签,训练弹窗管理模型。
使用获得的多维特征向量和标签,通过反向传播算法训练弹窗管理模型,反复调整弹窗管理模型的参数,直至弹窗管理模型输出结果的正确率达到阈值为止。
在本申请的一个实施例中,为了加快训练弹窗管理模型的速度,在对弹窗管理模型进行训练前,可以随机初始化弹窗管理模型的参数,或根据经验设置弹窗管理模型的初始参数。另外,还可以通过其他方式初始化弹窗管理模型的初始参数,本申请对此不进行限定。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
基于相同的发明构思,根据上述模型训练方法,本申请实施例还提供了一种模型训练方法。该模型训练方法可包括如下步骤。
步骤01,获取训练集。训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息。
为了保证训练获得的弹窗管理模型准确可靠,训练集中包括的弹窗信息和弹窗信息对应的标签信息越多越好。
步骤02,将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签。
例如,根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
步骤03,获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数。
其中,深度神经网络的结构可参考步骤S101中的描述。深度神经网络的参数构成一个参数集,可以由θ_i表示。为了加快深度神经网络的训练,初始化的参数可以根据实际需要和经验进行设置。
本步骤中,还可以对训练相关的高层参数如学习率、梯度下降算法、反向传播算法等进行合理的设置,具体可以采用相关技术中的各种方式,在此不再进行详细描述。
本申请实施例中不限定步骤01与步骤03的执行顺序。
步骤04,将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果。每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息。
例如,将一弹窗信息的特征向量输入预设的深度神经网络进行处理的过程中,得到第一概率和第二概率。其中,第一概率为指示输入的弹窗信息为用户感兴趣的弹窗信息的信息的概率,第二概率为指示输入的弹窗信息不是用户感兴趣的弹窗信息的信息的概率。
若第一概率大于第二概率,则确定该弹窗信息对应的输出结果为指示输入的弹窗信息为用户感兴趣的弹窗信息的信息;否则,确定该弹窗信息对应的输出结果为指示输入的弹窗信息不是用户感兴趣的弹窗信息的信息。
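该判断过程可以用如下示意性草图说明;这里假设模型输出单个sigmoid概率p作为第一概率,并以1-p作为第二概率,属于示例性假设,具体以实际模型的输出形式为准。

```python
import torch

def classify_popup(model, feature_vector):
    """示意性草图:将特征向量输入深度神经网络,比较第一概率与第二概率。
    要求特征维数与模型输入层神经元个数一致。"""
    with torch.no_grad():
        p_interested = model(torch.tensor(feature_vector, dtype=torch.float32)).item()
    p_not_interested = 1.0 - p_interested
    # 第一概率大于第二概率时,输出结果为"用户感兴趣的弹窗信息"
    return p_interested > p_not_interested
```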
第一次进入本步骤处理时,当前参数集为θ_1,后续再次进入本步骤处理时,当前参数集θ_i为对上一次使用的参数集θ_{i-1}进行调整后得到的,详见后续描述。
步骤05,根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值。
一个例子中,可以使用均方误差(Mean Squared Error,MSE)公式作为损失函数,得到损失值L(θ_i),详见如下公式:

L(θ_i) = (1/H)·∑_{j=1}^{H} (F(I_j,θ_i) - X_j)^2

其中,H表示单次训练中从预设训练集中选取的弹窗信息个数,I_j表示第j个弹窗信息的特征向量,F(I_j,θ_i)表示针对第j个弹窗信息,深度神经网络在参数集θ_i下步骤04得到的输出结果,X_j表示第j个弹窗信息的标签,i为当前已执行步骤04的次数计数。
步骤06,根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;若不收敛,则执行步骤07;若收敛,则执行步骤08。
例如,可以当损失值小于预设损失值阈值时,确定收敛;也可以当本次计算得到损失值与上一次计算得到的损失值之差小于预设变化阈值时,确定收敛,本申请实施例在此不做限定。
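收敛判断可以用如下示意性草图说明;其中损失阈值与变化阈值的取值均为示例性假设。

```python
def has_converged(loss, prev_loss, loss_threshold=0.05, delta_threshold=1e-4):
    """示意性草图:损失值小于预设损失值阈值,或本次与上一次损失值之差
    小于预设变化阈值时,判定深度神经网络收敛。"""
    if loss < loss_threshold:
        return True
    return prev_loss is not None and abs(prev_loss - loss) < delta_threshold
```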
步骤07,调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行步骤04。
具体可以利用反向传播算法对当前参数集θ_i中的参数进行调整,得到调整后的参数集。
步骤08,将采用目标参数的深度神经网络作为弹窗管理模型。
具体的,将当前参数集θ_i作为输出的最终参数集θ_final,并将采用最终参数集θ_final的深度神经网络,作为训练完成的弹窗管理模型。
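步骤03至步骤08的训练流程可以用如下示意性草图概括(以PyTorch为例,可沿用前文草图中的网络结构);其中学习率、损失阈值、最大迭代次数等取值均为示例性假设。

```python
import torch
import torch.nn as nn

def train_popup_model(model, features, labels,
                      lr=0.01, loss_threshold=0.05, max_iters=1000):
    """示意性草图:使用均方误差损失与反向传播训练弹窗管理模型,
    直至损失值小于预设阈值(判定收敛)或达到最大迭代次数。"""
    x = torch.tensor(features, dtype=torch.float32)              # H个弹窗信息的特征向量I_j
    y = torch.tensor(labels, dtype=torch.float32).unsqueeze(1)   # 对应标签X_j
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)       # 梯度下降算法
    loss_fn = nn.MSELoss()                                       # 均方误差损失L(θ_i)
    for _ in range(max_iters):
        optimizer.zero_grad()
        output = model(x)          # 步骤04:得到每个弹窗信息的输出结果F(I_j,θ_i)
        loss = loss_fn(output, y)  # 步骤05:计算弹窗信息损失值
        if loss.item() < loss_threshold:  # 步骤06:判断是否收敛
            break
        loss.backward()            # 步骤07:反向传播,调整参数作为新的目标参数
        optimizer.step()
    return model                   # 步骤08:采用目标参数的深度神经网络即为弹窗管理模型
```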
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
基于相同的发明构思,依据上述训练获得的弹窗管理模型,本申请实施例提供了一种信息处理方法。
参考图4,图4为本申请实施例提供的信息处理方法的第一种流程示意图,该方法包括如下步骤。
S401:获取发送给电子设备的待处理弹窗信息。
步骤S401即为获取待处理弹窗信息。上述发送给电子设备的待处理弹窗信息,即为待发送给目标电子设备的待处理弹窗信息。
在本申请的一个实施例中,可以通过截获其他设备向电子设备发送的弹窗信息确定待处理弹窗信息。
例如,在电子设备运行应用程序时,可以获取到该应用程序的服务器向该电子设备发送针对该应用程序的弹窗信息,将获取到的弹窗信息作为待处理弹窗信息。
再例如,电子设备为手机,其他手机向该手机拨打电话,此时可以获取到陌生来电,将获取到的陌生来电作为待处理弹窗信息。
另外,若电子设备为手机,其他手机向该手机发送短信消息,此时可以获取该短信消息,将获取到的短信消息作为待处理弹窗信息。
S402:将待处理弹窗信息输入弹窗管理模型。
其中,弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型,用户感兴趣的弹窗信息为关注度大于阈值的信息。
在本申请的一个实施例中,上述弹窗管理模型可以通过以下方式训练获得:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:是否为用户感兴趣的弹窗信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用获得的特征向量和标签,训练弹窗管理模型。
在本申请的另一个实施例中,弹窗管理模型可以通过以下方式训练获得:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
在本申请的一个实施例中,将训练集中的弹窗信息转换为特征向量的步骤,包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
在本申请的一个实施例中,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
在本申请的一个实施例中,用户感兴趣的弹窗信息为用户查看的弹窗信息。
在本申请的一个实施例中,训练集通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
上述弹窗管理模型的训练可参考图1所示的实施例。
S403:若弹窗管理模型的输出结果为待处理弹窗信息为用户感兴趣的弹窗信息,将待处理弹窗信息发送给电子设备。
步骤S403即为若弹窗管理模型的输出结果为指示待处理弹窗信息为用户感兴趣的弹窗信息的信息,将待处理弹窗信息发送给目标电子设备。
目标电子设备通过弹窗功能显示待处理弹窗信息,该待处理弹窗信息为用户感兴趣的弹窗信息,显示该待处理弹窗信息,便于用户查看,提高了用户的体验。
在本申请的一个实施例中,参考图5,图5为本申请实施例提供的信息处理方法的第二种流程示意图,基于图4,该方法还可以包括:
S404:若弹窗管理模型的输出结果为待处理弹窗信息为用户不感兴趣的弹窗信息,拒绝将待处理弹窗信息发送给电子设备。
步骤S404即为若弹窗管理模型的输出结果为指示待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将待处理弹窗信息发送给目标电子设备。
在本申请的一个实施例中,拒绝将待处理弹窗信息发送给目标电子设备可以为:丢弃该待处理弹窗信息,以避免占用过多的存储空间。
在本申请的一个实施例中,拒绝将待处理弹窗信息发送给目标电子设备可以为:拦截下待处理弹窗信息,不将该待处理弹窗信息发送给目标电子设备,并记录下该待处理弹窗信息。之后,可以周期性地向目标电子设备发送提示信息,告知拦截了多少弹窗信息,以便于用户及时处理记录下的弹窗信息。
在本申请的一个实施例中,记录下待处理弹窗信息时,还可以记录下该待处理弹窗信息的特征信息,如某一号码的来电、针对某一应用程序的弹窗信息、针对天气的短信消息等。这种情况下,可以周期性地向电子设备发送提示信息,该提示信息中携带有记录下的待处理弹窗信息的特征信息,基于该特征信息,用户可以及时确定拦截下的待处理弹窗信息是否为用户感兴趣的弹窗信息;若是用户感兴趣的弹窗信息,则及时获取该待处理弹窗信息。
在本申请的一个实施例中,在将待处理弹窗信息输入到弹窗管理模型之后,获得弹窗管理模型的输出结果,此时,可以将该输出结果与待处理弹窗信息的对应关系加入训练集,丰富训练集中包括的弹窗信息,以便再次训练弹窗管理模型。
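图4、图5所示的处理流程以及上述把输出结果回灌训练集的做法,可以用如下示意性草图概括;其中classify、send_to_device、record_intercepted、training_set等名称均为示例性假设,仅用于说明"感兴趣则下发、不感兴趣则拦截并记录,同时将对应关系加入训练集"的整体逻辑。

```python
def handle_pending_popup(popup, classify, training_set,
                         send_to_device, record_intercepted):
    """示意性草图:S401~S404的信息处理流程。
    classify为判断弹窗信息是否为用户感兴趣的函数(例如前文草图中
    popup_to_feature与classify_popup的组合),send_to_device与
    record_intercepted为示例性回调,均非本申请限定的实现。"""
    interested = classify(popup)     # 弹窗管理模型的输出结果
    if interested:
        send_to_device(popup)        # S403:发送给目标电子设备,由其通过弹窗功能显示
    else:
        record_intercepted(popup)    # S404:拒绝发送,拦截并记录该待处理弹窗信息
    # 将弹窗管理模型的输出结果与待处理弹窗信息的对应关系加入训练集,便于再次训练
    training_set.append((popup, 1 if interested else 0))
    return interested
```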
应用上述实施例,基于深度神经网络构建了弹窗管理模型,该弹窗管理模型用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息;这种情况下,将获取到的待处理弹窗信息输入弹窗管理模型中,若该弹窗管理模型的输出结果显示为:待处理弹窗信息为用户感兴趣的弹窗信息,再将待处理弹窗信息发送给电子设备,电子设备通过弹窗功能显示待处理弹窗信息。这样,有效地减少了电子设备接收到的不感兴趣的弹窗信息的数量,解决了电子设备显示大量用户不感兴趣的弹窗信息的问题,提高了用户体验。
与方法实施例对应,本申请实施例还提供了一种信息处理装置和模型训练装置。
参考图6,图6为本申请实施例提供的模型训练装置的一种结构示意图,该装置包括:
构建单元601,用于基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;用户感兴趣的弹窗信息为关注度大于阈值的信息;
转换单元602,用于获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
训练单元603,用于使用特征向量和标签,训练弹窗管理模型。
上述构建单元601即为构建模块,转换单元即为转换模块,训练单元即为训练模块。
可选的,转换单元602,具体可以用于:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,上述模型训练装置还可以包括:确定单元,用于确定训练集;
这种情况下,确定单元可以包括:
发送子单元,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收子单元,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
加入子单元,用于根据接收的对应关系确定训练集。
上述确定单元即为确定模块,发送子单元即为发送子模块,接收子单元即为接收子模块,加入子单元即为加入子模块。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
基于相同的发明构思,根据上述模型训练方法实施例,本申请实施例还提供了一种模型训练装置。该装置包括:
第一获取模块,用于获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
转换模块,用于将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
第二获取模块,用于获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
输入模块,用于将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
计算模块,用于根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
判断模块,用于根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
处理模块,用于若判断模块的判断结果为否,则调整深度神经网络的参数,将调整后的参数作为目标参数;若判断模块的判断结果为是,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,转换模块,具体可以用于:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,上述模型训练装置还可以包括:确定模块,用于确定训练集;确定模块可包括:
发送子模块,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收子模块,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
加入子模块,用于根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
基于相同的发明构思,根据上述信息处理方法实施例,本申请实施例还提供了一种信息处理装置。参考图7,图7为本申请实施例提供的信息处理装置的第一种结构示意图,该装置包括:
获取单元701,用于获取待处理弹窗信息;
输入单元702,用于将待处理弹窗信息输入弹窗管理模型;弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;用户感兴趣的弹窗信息为关注度大于阈值的信息;
发送单元703,用于若弹窗管理模型的输出结果为指示待处理弹窗信息为用户感兴趣的弹窗信息的信息,将待处理弹窗信息发送给目标电子设备,以使目标电子设备通过弹窗功能显示待处理弹窗信息。
上述获取单元701即为获取模块,输入单元702即为输入模块,发送单元703即为发送模块。
可选的,参考图8所示的信息处理装置的第二种结构示意图,基于图7,该装置还可以包括:
拒绝单元704,用于若弹窗管理模型的输出结果为指示待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将待处理弹窗信息发送给目标电子设备。
上述拒绝单元704即为拒绝模块。
可选的,上述信息处理装置还可以包括:训练单元,用于训练获得弹窗管理模型;这种情况下,训练单元可以包括:
构建子单元,用于基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
转换子单元,用于获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
训练子单元,用于使用特征向量和标签,训练弹窗管理模型。
上述训练单元即为训练模块,构建子单元即为构建子模块,转换子单元即为转换子模块,训练子单元即为训练子模块。
可选的,上述信息处理装置还可以包括:训练模块,用于训练获得弹窗管理模型;这种情况下,训练模块可以包括:
第一获取子模块,用于获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
转换子模块,用于将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
第二获取子模块,用于获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
输入子模块,用于将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
计算子模块,用于根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
判断子模块,用于根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
处理子模块,用于若判断子模块的判断结果为否,则调整深度神经网络的参数,将调整后的参数作为目标参数;若判断子模块的判断结果为是,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,转换子单元,具体可以用于:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,参考图9所示的信息处理装置的第三种结构示意图,基于图7,该装置还可以包括:
加入单元905,用于在将待处理弹窗信息输入弹窗管理模型之后,将弹窗管理模型的输出结果与待处理弹窗信息的对应关系加入训练集。
上述加入单元即为加入模块。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,上述信息处理装置还可以包括:确定单元,用于确定训练集;确定单元可以包括:
发送子单元,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收子单元,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
加入子单元,用于根据接收的对应关系确定训练集。
上述确定单元即为确定模块,发送子单元即为发送子模块,接收子单元即为接收子模块,加入子单元即为加入子模块。
应用上述实施例,基于深度神经网络构建了弹窗管理模型,该弹窗管理模型用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息;这种情况下,将获取到的待处理弹窗信息输入弹窗管理模型中,若该弹窗管理模型的输出结果显示为:待处理弹窗信息为用户感兴趣的弹窗信息,再将待处理弹窗信息发送给电子设备,电子设备通过弹窗功能显示待处理弹窗信息。这样,有效地减少了电子设备接收到的不感兴趣的弹窗信息的数量,解决了电子设备显示大量用户不感兴趣的弹窗信息的问题,提高了用户体验。
与模型训练方法实施例对应,本申请实施例还提供了一种电子设备,如图10所示,包括处理器1001、通信接口1002、存储器1003和通信总线1004,其中,处理器1001、通信接口1002、存储器1003通过通信总线1004完成相互间的通信;
存储器1003,用于存放计算机程序;
处理器1001,用于执行存储器1003上所存放的程序时,实现模型训练方法。其中,模型训练方法包括:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;用户感兴趣的弹窗信息为关注度大于阈值的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与模型训练方法实施例对应,本申请实施例还提供了一种电子设备,包括处理器、通信接口、存储器和通信总线,其中,处理器、通信接口、存储器通过通信总线完成相互间的通信;
存储器,用于存放计算机程序;
处理器,用于执行存储器上所存放的程序时,实现模型训练方法。其中,模型训练方法包括:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与信息处理方法实施例对应,本申请实施例还提供了一种电子设备,如图11所示,包括处理器1101、通信接口1102、存储器1103和通信总线1104,其中,处理器1101、通信接口1102、存储器1103通过通信总线1104完成相互间的通信;
存储器1103,用于存放计算机程序;
处理器1101,用于执行存储器1103上所存放的程序时,实现信息处理方法。信息处理方法包括:
获取待处理弹窗信息;
将待处理弹窗信息输入弹窗管理模型;弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;用户感兴趣的弹窗信息为关注度大于阈值的信息;
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户感兴趣的弹窗信息的信息,将待处理弹窗信息发送给目标电子设备,以使目标电子设备通过弹窗功能显示待处理弹窗信息。
可选的,信息处理方法还可以包括:
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将待处理弹窗信息发送给目标电子设备。
可选的,弹窗管理模型可以通过以下方式训练获得:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,弹窗管理模型可以通过以下方式训练获得:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,在将待处理弹窗信息输入弹窗管理模型的步骤之后,还可以包括:
将弹窗管理模型的输出结果与待处理弹窗信息的对应关系加入训练集。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,基于深度神经网络构建了弹窗管理模型,该弹窗管理模型用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息;这种情况下,将获取到的待处理弹窗信息输入弹窗管理模型中,若该弹窗管理模型的输出结果显示为:待处理弹窗信息为用户感兴趣的弹窗信息,再将待处理弹窗信息发送给电子设备,电子设备通过弹窗功能显示待处理弹窗信息。这样,有效地减少了电子设备接收到的不感兴趣的弹窗信息的数量,解决了电子设备显示大量用户不感兴趣的弹窗信息的问题,提高了用户体验。
上述通信总线可以是PCI(Peripheral Component Interconnect,外设部件互连标准)总线或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。
上述通信接口用于上述电子设备与其他设备之间的通信。
上述存储器可以包括RAM(Random Access Memory,随机存取存储器),也可以包括NVM(Non-Volatile Memory,非易失性存储器),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述处理器可以是通用处理器,包括CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processing,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
与模型训练方法实施例对应,本申请实施例还提供了一种存储介质,存储介质内存储有计算机程序,计算机程序被处理器执行时,实现模型训练方法。模型训练方法包括:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;用户感兴趣的弹窗信息为关注度大于阈值的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与模型训练方法实施例对应,本申请实施例还提供了一种存储介质,存储介质内存储有计算机程序,计算机程序被处理器执行时,实现模型训练方法。模型训练方法包括:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与信息处理方法实施例对应,本申请实施例还提供了一种存储介质,存储介质内存储有计算机程序,计算机程序被处理器执行时,实现信息处理方法。信息处理方法包括:
获取待处理弹窗信息;
将待处理弹窗信息输入弹窗管理模型;弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;用户感兴趣的弹窗信息为关注度大于阈值的信息;
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户感兴趣的弹窗信息的信息,将待处理弹窗信息发送给目标电子设备,以使目标电子设备通过弹窗功能显示待处理弹窗信息。
可选的,上述信息处理方法还可以包括:
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将待处理弹窗信息发送给目标电子设备。
可选的,弹窗管理模型可以通过以下方式训练获得:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,弹窗管理模型可以通过以下方式训练获得:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,在将待处理弹窗信息输入弹窗管理模型的步骤之后,还可以包括:
将弹窗管理模型的输出结果与待处理弹窗信息的对应关系加入训练集。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,基于深度神经网络构建了弹窗管理模型,该弹窗管理模型用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息;这种情况下,将获取到的待处理弹窗信息输入弹窗管理模型中,若该弹窗管理模型的输出结果显示为:待处理弹窗信息为用户感兴趣的弹窗信息,再将待处理弹窗信息发送给电子设备,电子设备通过弹窗功能显示待处理弹窗信息。这样,有效地减少了电子设备接收到的不感兴趣的弹窗信息的数量,解决了电子设备显示大量用户不感兴趣的弹窗信息的问题,提高了用户体验。
与模型训练方法实施例对应,本申请实施例还提供了一种计算机程序,计算机程序被处理器执行时实现模型训练方法。模型训练方法包括:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;用户感兴趣的弹窗信息为关注度大于阈值的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与模型训练方法实施例对应,本申请实施例还提供了一种计算机程序,计算机程序被处理器执行时实现模型训练方法。模型训练方法包括:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,获取包括大量弹窗信息的训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签,依据获得的特征向量和标签,训练弹窗管理模型,进而依据训练获得的弹窗管理模型,能够较为准确的识别出弹窗信息是否为用户感兴趣的弹窗信息。
与信息处理方法实施例对应,本申请实施例还提供了一种计算机程序,计算机程序被处理器执行时实现信息处理方法。信息处理方法包括:
获取待处理弹窗信息;
将待处理弹窗信息输入弹窗管理模型;弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;用户感兴趣的弹窗信息为关注度大于阈值的信息;
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户感兴趣的弹窗信息的信息,将待处理弹窗信息发送给目标电子设备,以使目标电子设备通过弹窗功能显示待处理弹窗信息。
可选的,上述信息处理方法还可以包括:
若弹窗管理模型的输出结果为指示待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将待处理弹窗信息发送给目标电子设备。
可选的,弹窗管理模型可以通过以下方式训练获得:
基于深度神经网络,构建弹窗管理模型;弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
获取训练集,将训练集中的弹窗信息转换为特征向量,为训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
使用特征向量和标签,训练弹窗管理模型。
可选的,弹窗管理模型可以通过以下方式训练获得:
获取训练集,训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
将训练集中的弹窗信息转换为特征向量,并为训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
获取预设的深度神经网络,初始化深度神经网络的参数作为目标参数;
将训练集包括的每个弹窗信息的特征向量输入深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
根据每个弹窗信息的输出结果和训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
根据弹窗信息损失值,判断采用目标参数的深度神经网络是否收敛;
若不收敛,则调整深度神经网络的参数,将调整后的参数作为目标参数,返回执行将训练集包括的每个弹窗信息输入深度神经网络,得到每个弹窗信息的输出结果的步骤;
若收敛,则将采用目标参数的深度神经网络作为弹窗管理模型。
可选的,将训练集中的弹窗信息转换为特征向量的步骤,可以包括:
根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
可选的,深度神经网络包括输入层、抽象层和输出层;
其中,深度神经网络的输入层包括的神经元个数与特征向量的维数相同;抽象层的激活函数为ReLu函数;输出层的激活函数为sigmoid函数。
可选的,在将待处理弹窗信息输入弹窗管理模型的步骤之后,还可以包括:
将弹窗管理模型的输出结果与待处理弹窗信息的对应关系加入训练集。
可选的,用户感兴趣的弹窗信息为用户查看的弹窗信息。
可选的,训练集可以通过以下方式确定:
将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
根据接收的对应关系确定训练集。
应用上述实施例,基于深度神经网络构建了弹窗管理模型,该弹窗管理模型用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息;这种情况下,将获取到的待处理弹窗信息输入弹窗管理模型中,若该弹窗管理模型的输出结果显示为:待处理弹窗信息为用户感兴趣的弹窗信息,再将待处理弹窗信息发送给电子设备,电子设备通过弹窗功能显示待处理弹窗信息。这样,有效地减少了电子设备接收到的不感兴趣的弹窗信息的数量,解决了电子设备显示大量用户不感兴趣的弹窗信息的问题,提高了用户体验。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于模型训练装置、信息处理装置、电子设备、存储介质、计算机程序实施例而言,由于其基本相似于模型训练方法、信息处理方法实施例,所以描述的比较简单,相关之处参见模型训练方法、信息处理方法实施例的部分说明即可。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory;以下简称:ROM)、随机存取存储器(Random Access Memory;以下简称:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (47)

  1. 一种信息处理方法,其特征在于,所述方法包括:
    获取待处理弹窗信息;
    将所述待处理弹窗信息输入弹窗管理模型;所述弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;所述用户感兴趣的弹窗信息为关注度大于阈值的信息;
    若所述弹窗管理模型的输出结果为指示所述待处理弹窗信息为用户感兴趣的弹窗信息的信息,将所述待处理弹窗信息发送给目标电子设备,以使所述目标电子设备通过弹窗功能显示所述待处理弹窗信息。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述弹窗管理模型的输出结果为指示所述待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将所述待处理弹窗信息发送给所述目标电子设备。
  3. 根据权利要求1所述的方法,其特征在于,所述弹窗管理模型通过以下方式训练获得:
    基于深度神经网络,构建弹窗管理模型;所述弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
    获取训练集,将所述训练集中的弹窗信息转换为特征向量,为所述训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
    使用所述特征向量和所述标签,训练所述弹窗管理模型。
  4. 根据权利要求1所述的方法,其特征在于,所述弹窗管理模型通过以下方式训练获得:
    获取训练集,所述训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,所述标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    将所述训练集中的弹窗信息转换为特征向量,并为所述训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
    获取预设的深度神经网络,初始化所述深度神经网络的参数作为目标参数;
    将所述训练集包括的每个弹窗信息的特征向量输入所述深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    根据每个弹窗信息的输出结果和所述训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
    根据所述弹窗信息损失值,判断采用所述目标参数的深度神经网络是否收敛;
    若不收敛,则调整所述深度神经网络的参数,将调整后的参数作为目标参数,返回执行所述将所述训练集包括的每个弹窗信息输入所述深度神经网络,得到每个弹窗信息的输出结果的步骤;
    若收敛,则将采用所述目标参数的深度神经网络作为弹窗管理模型。
  5. 根据权利要求3或4所述的方法,其特征在于,所述将所述训练集中的弹窗信息转换为特征向量的步骤,包括:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  6. 根据权利要求3或4所述的方法,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  7. 根据权利要求3或4所述的方法,其特征在于,在所述将所述待处理弹窗信息输入弹窗管理模型的步骤之后,所述方法还包括:
    将所述弹窗管理模型的输出结果与所述待处理弹窗信息的对应关系加入所述训练集。
  8. 根据权利要求3或4所述的方法,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  9. 根据权利要求8所述的方法,其特征在于,所述训练集通过以下方式确定:
    将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    根据接收的对应关系确定训练集。
  10. 一种模型训练方法,其特征在于,所述方法包括:
    基于深度神经网络,构建弹窗管理模型;所述弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;所述用户感兴趣的弹窗信息为关注度大于阈值的信息;
    获取训练集,将所述训练集中的弹窗信息转换为特征向量,为所述训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
    使用所述特征向量和所述标签,训练所述弹窗管理模型。
  11. 根据权利要求10所述的方法,其特征在于,所述将所述训练集中的弹窗信息转换为特征向量的步骤,包括:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  12. 根据权利要求10所述的方法,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  13. 根据权利要求10-12任一项所述的方法,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  14. 根据权利要求13所述的方法,其特征在于,所述训练集通过以下方式确定:
    将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    根据接收的对应关系确定训练集。
  15. 一种模型训练方法,其特征在于,所述方法包括:
    获取训练集,所述训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,所述标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    将所述训练集中的弹窗信息转换为特征向量,并为所述训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
    获取预设的深度神经网络,初始化所述深度神经网络的参数作为目标参数;
    将所述训练集包括的每个弹窗信息的特征向量输入所述深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    根据每个弹窗信息的输出结果和所述训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
    根据所述弹窗信息损失值,判断采用所述目标参数的深度神经网络是否收敛;
    若不收敛,则调整所述深度神经网络的参数,将调整后的参数作为目标参数,返回执行所述将所述训练集包括的每个弹窗信息输入所述深度神经网络,得到每个弹窗信息的输出结果的步骤;
    若收敛,则将采用所述目标参数的深度神经网络作为弹窗管理模型。
  16. 根据权利要求15所述的方法,其特征在于,所述将所述训练集中的弹窗信息转换为特征向量的步骤,包括:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  17. 根据权利要求15所述的方法,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  18. 根据权利要求15-17任一项所述的方法,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  19. 根据权利要求18所述的方法,其特征在于,所述训练集通过以下方式确定:
    将获取的多个弹窗信息发送给多个电子设备,以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    根据接收的对应关系确定训练集。
  20. 一种信息处理装置,其特征在于,所述装置包括:
    获取模块,用于获取待处理弹窗信息;
    输入模块,用于将所述待处理弹窗信息输入弹窗管理模型;所述弹窗管理模型为:基于深度神经网络构建的、用于确定输入的弹窗信息是否为用户感兴趣的弹窗信息的模型;所述用户感兴趣的弹窗信息为关注度大于阈值的信息;
    发送模块,用于若所述弹窗管理模型的输出结果为指示所述待处理弹窗信息为用户感兴趣的弹窗信息的信息,将所述待处理弹窗信息发送给目标电子设备,以使所述目标电子设备通过弹窗功能显示所述待处理弹窗信息。
  21. 根据权利要求20所述的装置,其特征在于,所述装置还包括:
    拒绝模块,用于若所述弹窗管理模型的输出结果为指示所述待处理弹窗信息为用户不感兴趣的弹窗信息的信息,拒绝将所述待处理弹窗信息发送给所述目标电子设备。
  22. 根据权利要求20所述的装置,其特征在于,所述装置还包括:训练模块,用于训练获得所述弹窗管理模型;所述训练模块包括:
    构建子模块,用于基于深度神经网络,构建弹窗管理模型;所述弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;
    转换子模块,用于获取训练集,将所述训练集中的弹窗信息转换为特征向量,为所述训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
    训练子模块,用于使用所述特征向量和所述标签,训练所述弹窗管理模型。
  23. 根据权利要求20所述的装置,其特征在于,所述装置还包括:训练模块,用于训练获得所述弹窗管理模型;所述训练模块包括:
    第一获取子模块,用于获取训练集,所述训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,所述标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    转换子模块,用于将所述训练集中的弹窗信息转换为特征向量,并为所述训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
    第二获取子模块,用于获取预设的深度神经网络,初始化所述深度神经网络的参数作为目标参数;
    输入子模块,用于将所述训练集包括的每个弹窗信息的特征向量输入所述深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    计算子模块,用于根据每个弹窗信息的输出结果和所述训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
    判断子模块,用于根据所述弹窗信息损失值,判断采用所述目标参数的深度神经网络是否收敛;
    处理子模块,用于若所述判断子模块的判断结果为否,则调整所述深度神经网络的参数,将调整后的参数作为目标参数;若所述判断子模块的判断结果为是,则将采用所述目标参数的深度神经网络作为弹窗管理模型。
  24. 根据权利要求22或23所述的装置,其特征在于,所述转换子模块,具体用于:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  25. 根据权利要求22或23所述的装置,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  26. 根据权利要求22或23所述的装置,其特征在于,所述装置还包括:
    加入模块,用于在将所述待处理弹窗信息输入弹窗管理模型之后,将所述弹窗管理模型的输出结果与所述待处理弹窗信息的对应关系加入所述训练集。
  27. 根据权利要求22或23所述的装置,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  28. 根据权利要求22或23所述的装置,其特征在于,所述装置还包括:确定模块,用于确定训练集;所述确定模块包括:
    发送子模块,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备分别通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收子模块,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    加入子模块,用于根据接收的对应关系确定训练集。
  29. 一种模型训练装置,其特征在于,所述装置包括:
    构建模块,用于基于深度神经网络,构建弹窗管理模型;所述弹窗管理模型的建模单元为:指示是否为用户感兴趣的弹窗信息的信息;所述用户感兴趣的弹窗信息为关注度大于阈值的信息;
    转换模块,用于获取训练集,将所述训练集中的弹窗信息转换为特征向量,为所述训练集中的弹窗信息标记用户感兴趣的弹窗信息或用户不感兴趣的弹窗信息的标签;
    训练模块,用于使用所述特征向量和所述标签,训练所述弹窗管理模型。
  30. 根据权利要求29所述的装置,其特征在于,所述转换模块,具体用于:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  31. 根据权利要求29所述的装置,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  32. 根据权利要求29-31任一项所述的装置,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  33. 根据权利要求32所述的装置,其特征在于,所述装置还包括:确定模块,用于确定训练集;所述确定模块包括:
    发送子模块,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收子模块,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    加入子模块,用于根据接收的对应关系确定训练集。
  34. 一种模型训练装置,其特征在于,所述装置包括:
    第一获取模块,用于获取训练集,所述训练集包括多个弹窗信息和多个弹窗信息对应的标签信息,所述标签信息为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    转换模块,用于将所述训练集中的弹窗信息转换为特征向量,并为所述训练集中的弹窗信息标记该弹窗信息对应的标签信息的标签;
    第二获取模块,用于获取预设的深度神经网络,初始化所述深度神经网络的参数作为目标参数;
    输入模块,用于将所述训练集包括的每个弹窗信息的特征向量输入所述深度神经网络,得到每个弹窗信息的输出结果;每个弹窗信息的输出结果为指示弹窗信息为用户感兴趣的弹窗信息的信息或指示弹窗信息不是用户感兴趣的弹窗信息的信息;
    计算模块,用于根据每个弹窗信息的输出结果和所述训练集包括的该弹窗信息对应的标签信息,计算弹窗信息损失值;
    判断模块,用于根据所述弹窗信息损失值,判断采用所述目标参数的深度神经网络是否收敛;
    处理模块,用于若所述判断模块的判断结果为否,则调整所述深度神经网络的参数,将调整后的参数作为目标参数;若所述判断模块的判断结果为是,则将采用所述目标参数的深度神经网络作为弹窗管理模型。
  35. 根据权利要求34所述的装置,其特征在于,所述转换模块,具体用于:
    根据显示时间、显示延迟时长、显示地点、用户所使用的电子设备的规格,将训练集中的弹窗信息转换为特征向量。
  36. 根据权利要求34所述的装置,其特征在于,所述深度神经网络包括输入层、抽象层和输出层;
    其中,所述深度神经网络的输入层包括的神经元个数与所述特征向量的维数相同;所述抽象层的激活函数为修正线性单元ReLu函数;所述输出层的激活函数为S型sigmoid函数。
  37. 根据权利要求34-36任一项所述的装置,其特征在于,所述用户感兴趣的弹窗信息为用户查看的弹窗信息。
  38. 根据权利要求37所述的装置,其特征在于,所述装置还包括:确定模块,用于确定训练集;所述确定模块包括:
    发送子模块,用于将获取的多个弹窗信息发送给多个电子设备;以使多个电子设备通过弹窗功能显示接收的弹窗信息,并记录用户是否查看接收的弹窗信息;
    接收子模块,用于接收多个电子设备返回的弹窗信息与用户是否查看该弹窗信息的对应关系;
    加入子模块,用于根据接收的对应关系确定训练集。
  39. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,所述处理器、所述通信接口、所述存储器通过所述通信总线完成相互间的通信;
    所述存储器,用于存放计算机程序;
    所述处理器,用于执行所述存储器上所存放的程序,实现权利要求1-9任一所述的方法步骤。
  40. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,所述处理器、所述通信接口、所述存储器通过所述通信总线完成相互间的通信;
    所述存储器,用于存放计算机程序;
    所述处理器,用于执行所述存储器上所存放的程序,实现权利要求10-14任一所述的方法步骤。
  41. 一种电子设备,其特征在于,包括处理器、通信接口、存储器和通信总线,其中,所述处理器、所述通信接口、所述存储器通过所述通信总线完成相互间的通信;
    所述存储器,用于存放计算机程序;
    所述处理器,用于执行所述存储器上所存放的程序,实现权利要求15-19任一所述的方法步骤。
  42. 一种存储介质,其特征在于,所述存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-9任一所述的方法步骤。
  43. 一种存储介质,其特征在于,所述存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求10-14任一所述的方法步骤。
  44. 一种存储介质,其特征在于,所述存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现权利要求15-19任一所述的方法步骤。
  45. 一种计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1-9任一所述的方法步骤。
  46. 一种计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求10-14任一所述的方法步骤。
  47. 一种计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求15-19任一所述的方法步骤。
PCT/CN2018/088249 2017-06-30 2018-05-24 信息处理和模型训练方法、装置、电子设备、存储介质 WO2019001185A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/480,925 US20200167645A1 (en) 2017-06-30 2018-05-24 Information processing and model training methods, apparatuses, electronic devices, and storage mediums

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710525652.0A CN107402754A (zh) 2017-06-30 2017-06-30 信息处理和模型训练方法、装置、电子设备、存储介质
CN201710525652.0 2017-06-30

Publications (1)

Publication Number Publication Date
WO2019001185A1 true WO2019001185A1 (zh) 2019-01-03

Family

ID=60405158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088249 WO2019001185A1 (zh) 2017-06-30 2018-05-24 信息处理和模型训练方法、装置、电子设备、存储介质

Country Status (3)

Country Link
US (1) US20200167645A1 (zh)
CN (1) CN107402754A (zh)
WO (1) WO2019001185A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107402754A (zh) * 2017-06-30 2017-11-28 北京金山安全软件有限公司 信息处理和模型训练方法、装置、电子设备、存储介质
CN108763159A (zh) * 2018-05-22 2018-11-06 中国科学技术大学苏州研究院 一种基于fpga的lstm前向运算加速器
CN108898015B (zh) * 2018-06-26 2021-07-27 暨南大学 基于人工智能的应用层动态入侵检测系统及检测方法
CN109189528B (zh) * 2018-08-14 2021-11-30 上海尚往网络科技有限公司 弹窗展示的控制方法、弹窗展示方法
CN109508218B (zh) * 2018-10-25 2023-12-15 平安科技(深圳)有限公司 App消息推送展示控制方法、装置、设备及存储介质
CN113538225A (zh) * 2020-04-14 2021-10-22 阿里巴巴集团控股有限公司 模型训练方法及图像转换方法、装置、设备和存储介质
CN111700718B (zh) * 2020-07-13 2023-06-27 京东科技信息技术有限公司 一种识别握姿的方法、装置、假肢及可读存储介质
CN113687890B (zh) * 2021-07-13 2022-12-06 荣耀终端有限公司 弹窗管理方法、装置及存储介质
CN113868542B (zh) * 2021-11-25 2022-03-11 平安科技(深圳)有限公司 基于注意力模型的推送数据获取方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303105A (zh) * 2015-10-20 2016-02-03 珠海市君天电子科技有限公司 窗口消息拦截方法、装置和终端设备
CN106027633A (zh) * 2016-05-16 2016-10-12 百度在线网络技术(北京)有限公司 应用推送方法、应用推送系统及终端设备
CN106126562A (zh) * 2016-06-15 2016-11-16 广东欧珀移动通信有限公司 一种弹窗拦截方法及终端
CN107402754A (zh) * 2017-06-30 2017-11-28 北京金山安全软件有限公司 信息处理和模型训练方法、装置、电子设备、存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102752730B (zh) * 2012-07-19 2014-04-16 腾讯科技(深圳)有限公司 消息处理的方法及装置
CN104484390A (zh) * 2014-12-11 2015-04-01 哈尔滨工程大学 一种面向微博的僵尸粉丝检测方法
CN106020814A (zh) * 2016-05-16 2016-10-12 北京奇虎科技有限公司 通知栏消息的处理方法、装置及移动终端

Also Published As

Publication number Publication date
CN107402754A (zh) 2017-11-28
US20200167645A1 (en) 2020-05-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18824147

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18824147

Country of ref document: EP

Kind code of ref document: A1