WO2019128552A1 - Information push method, apparatus, terminal and storage medium - Google Patents

Information push method, apparatus, terminal and storage medium

Info

Publication number
WO2019128552A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target
identifier
recommendation information
terminal
Prior art date
Application number
PCT/CN2018/116602
Other languages
English (en)
French (fr)
Inventor
陈岩
刘耀勇
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2019128552A1 publication Critical patent/WO2019128552A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/55Push-based network services

Definitions

  • the embodiments of the present invention relate to the field of terminal technologies, and in particular, to an information push method, device, terminal, and storage medium.
  • Information push refers to the process of pushing a recommendation message to a target user group.
  • when pushing information to a terminal, the server first acquires user data of the terminal, where the user data includes user attribute information and user behavior data; the server filters out a recommendation message matching the user data and pushes the recommendation message to the terminal; correspondingly, the terminal receives and displays the recommendation message.
  • the embodiment of the present invention provides an information pushing method, device, terminal, and storage medium, which can be used to solve the problem that the recommendation information is less effective.
  • the technical solution is as follows:
  • an information pushing method is provided, where the method includes:
  • the ambient audio data being used to indicate a sound signal of a scene in which the terminal is located;
  • the target scene identifier by using the scene classification model, where the target scene identifier is used to indicate a scene type of the scene where the terminal is located;
  • the target recommendation information corresponding to the target scene identifier is pushed according to the first preset correspondence, and the first preset correspondence relationship includes a correspondence between the scene identifier and the recommendation information.
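The three steps above (acquire audio, classify the scene, look up the mapped recommendation) can be sketched as follows. The model and correspondence table here are toy stand-ins with assumed names, not the patent's actual implementation:

```python
def push_recommendation(ambient_audio, scene_model, correspondence):
    # The ambient audio data is assumed to be already acquired.
    target_scene_id = scene_model(ambient_audio)   # scene classification model
    return correspondence[target_scene_id]         # first preset correspondence

# Toy stand-ins, for illustration only.
toy_model = lambda audio: "scene_id_1" if sum(audio) > 0 else "scene_id_2"
toy_correspondence = {"scene_id_1": "food information",
                      "scene_id_2": "traffic information"}
print(push_recommendation([0.2, 0.5], toy_model, toy_correspondence))
```

In practice the model would be the trained neural network described below, and the correspondence would be the stored first preset correspondence table.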
  • an information pushing apparatus comprising:
  • a first acquiring module configured to acquire environment audio data, where the ambient audio data is used to indicate a sound signal of a scene where the terminal is located;
  • a second acquiring module configured to acquire a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data;
  • a calculation module configured to calculate, according to the environment audio data, a target scene identifier by using the scene classification model, where the target scene identifier is used to indicate a scene type of a scene in which the terminal is located;
  • the pushing module is configured to: according to the first preset correspondence, push the target recommendation information corresponding to the target scene identifier, where the first preset correspondence relationship includes a correspondence between the scene identifier and the recommendation information.
  • a terminal including a processor, a memory coupled to the processor, and program instructions stored on the memory; when the processor executes the program instructions, the information push method described in any of the first aspect of the present application and its optional embodiments is implemented.
  • a computer readable storage medium having program instructions stored thereon which, when executed by a processor, implement the information push method described in the first aspect of the present application and its optional embodiments.
  • FIG. 1 is a schematic structural diagram of an information recommendation system according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for pushing information according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a principle involved in an information pushing method according to an embodiment of the present application.
  • FIG. 4 is a flowchart of a method for pushing information according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a principle involved in an information pushing method according to another embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an information pushing apparatus according to an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Scene classification model: a mathematical model for determining the scene identifier of the scene in which the terminal is located based on input data.
  • the scene classification model includes, but is not limited to, at least one of a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, and a Logistic Regression (LR) model.
  • the DNN model is a deep learning framework.
  • the DNN model includes an input layer, at least one hidden layer (or intermediate layer), and an output layer.
  • the input layer, the at least one hidden layer (or intermediate layer), and the output layer each include at least one neuron, and the neuron is configured to process the received data.
  • the number of neurons in different layers may be the same or different.
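The layered structure just described can be sketched with a minimal fully connected layer; the layer sizes, weights, and tanh activation below are illustrative assumptions, not values from the patent:

```python
import math

def dense(inputs, weights, biases):
    # One layer: every neuron processes all of the data it receives
    # (a weighted sum plus a bias, passed through a tanh activation).
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Input layer (2 values) -> hidden layer (3 neurons) -> output layer (1 neuron):
# as noted above, different layers may have different numbers of neurons.
hidden = dense([0.5, -0.2], [[0.1, 0.4], [0.3, -0.2], [0.0, 0.2]], [0.0, 0.1, -0.1])
output = dense(hidden, [[0.5, -0.3, 0.2]], [0.0])
```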
  • the RNN model is a neural network with a feedback structure.
  • the output of a neuron can be fed back to itself at the next timestamp; that is, the input of an i-th layer neuron at time m includes, in addition to the output of the (i-1)-th layer neuron at time m, its own output at time (m-1).
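That feedback structure can be sketched with a single recurrent unit; the weights and tanh activation are assumptions for illustration:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    # x_t: output of the (i-1)-th layer at time m
    # h_prev: this neuron's own output at time (m-1), fed back as input
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Unroll over a short input sequence; the hidden state carries history.
h = 0.0
for x in [1.0, 0.5, -0.5]:
    h = rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0)
```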
  • the embedding model is based on distributed vector representations of entities and relations, and treats the relation in each triple instance as a translation from the entity head to the entity tail.
  • a triple instance includes a subject, a relation, and an object, and can be represented as (subject, relation, object); the subject is the entity head, and the object is the entity tail.
  • for example, "Xiao Zhang's father is Da Zhang" is represented by the triple instance (Xiao Zhang, father, Da Zhang).
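The "relation as translation" idea can be sketched numerically: a triple is considered plausible when the head vector plus the relation vector lands close to the tail vector. The vectors below are hand-picked assumptions:

```python
# Distributed vectors (assumed values) for the triple (Xiao Zhang, father, Da Zhang).
head = [1.0, 0.5]        # entity head: Xiao Zhang
relation = [0.2, 0.3]    # relation: father, treated as a translation vector
tail = [1.2, 0.8]        # entity tail: Da Zhang

# Translate the head by the relation and measure how close it is to the tail.
translated = [h + r for h, r in zip(head, relation)]
distance = sum((t - v) ** 2 for t, v in zip(tail, translated)) ** 0.5
print(distance)  # close to zero for this hand-picked triple
```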
  • the GBDT model is an iterative decision tree algorithm consisting of multiple decision trees, and the results of all trees are added together as the final result.
  • each node of the decision tree produces a predicted value; taking age as an example, the predicted value is the average age of all the people belonging to that node.
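The additive combination of trees can be sketched as follows; the two toy "trees" are assumed for illustration, with the first predicting a node's average age and the second a residual correction:

```python
def gbdt_predict(x, trees):
    # The results of all trees are added together as the final result.
    return sum(tree(x) for tree in trees)

# Toy trees (assumed): tree1 predicts the average age at its leaf node,
# tree2 predicts a residual correction based on one feature.
tree1 = lambda person: 30.0
tree2 = lambda person: 5.0 if person["works"] else -5.0
age = gbdt_predict({"works": True}, [tree1, tree2])  # 30.0 + 5.0 = 35.0
```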
  • the LR model refers to a model built on linear regression by applying a logistic function.
  • the embodiment of the present application provides a method for determining the scene type of the scene in which a terminal is located based on environment audio data, thereby pushing recommendation information that conforms to the scene type.
  • An embodiment of the present application provides an information pushing method, where the method includes:
  • the ambient audio data is used to indicate a sound signal of a scene in which the terminal is located;
  • the target scene identifier is obtained by using the scene classification model, and the target scene identifier is used to indicate the scene type of the scene where the terminal is located;
  • the target recommendation information corresponding to the target scene identifier is pushed according to the first preset correspondence, and the first preset correspondence relationship includes a correspondence between the scene identifier and the recommendation information.
  • the target scene identifier is calculated by using the scene classification model, including:
  • the scene classification model is trained according to at least one set of sample data sets, and each set of sample data sets includes: sample environment audio data and a pre-labeled correct scene identifier.
  • the scene classification model is obtained, including:
  • training sample set comprising at least one set of sample data sets, each set of sample data sets comprising: sample environment audio data and a pre-labeled correct scene identifier;
  • the original parameter model is trained by the error back propagation algorithm to obtain the scene classification model.
  • the original parameter model is trained by using an error back propagation algorithm to obtain a scene classification model, including:
  • the scene classification model is trained by using an error back propagation algorithm according to the corresponding computational losses of at least one set of sample data sets.
  • the method further includes:
  • the scene classification model is trained according to the updated training sample set, and the updated scene classification model is obtained.
  • obtain environmental audio data including:
  • the scene detection function is enabled
  • Environmental audio data is generated based on m kinds of sound signals.
  • the target recommendation information corresponding to the target scene identifier is pushed according to the first preset correspondence, including:
  • the target recommendation information corresponding to the target scene identifier is obtained according to the first preset correspondence relationship
  • the target recommendation information corresponding to the target scenario identifier is obtained according to the first preset correspondence, including:
  • the transportation hub includes at least one of a bus station, a subway station, a railway station, and an airport; or
  • the scene type indicated by the target scene identifier is a quiet area
  • determining that the target recommendation information is light music information, and the quiet area includes at least one of a library, a museum, a hospital, and a court; or
  • the target recommendation information is determined as the travel guide information.
  • the target recommendation information corresponding to the target scene identifier is pushed according to the first preset correspondence, including:
  • real-time geographic location information of the terminal where real-time geographic location information is used to indicate a target area where the terminal is currently located, and the target area includes k candidate locations, where k is a positive integer;
  • the target recommendation information corresponding to the designated location is pushed, and the second preset correspondence relationship includes a correspondence between the candidate location and the recommendation information.
  • FIG. 1 is a schematic structural diagram of an information recommendation system according to an embodiment of the present application.
  • the information recommendation system includes a payer terminal 120, a server cluster 140, and at least one user terminal 160.
  • a payer client is running in the payer terminal 120.
  • the payer terminal 120 can be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, and the like.
  • the publisher client is a software client that serves recommendation information on the information recommendation platform.
  • the information recommendation platform is a platform for directing recommendation information to the target user client.
  • the recommendation information is information with recommendation value, such as advertisement information, multimedia information, or news information.
  • a publisher is a user or organization that publishes recommendation information on the information recommendation platform.
  • when the recommendation information is advertising information, the publisher is an advertiser.
  • the payer terminal 120 and the server cluster 140 are connected by a communication network.
  • the communication network is a wired network or a wireless network.
  • Server cluster 140 is a server, or a number of servers, or a virtualization platform, or a cloud computing service center.
  • the server cluster 140 includes a server for implementing an information recommendation platform.
  • the information recommendation platform includes: a server for transmitting recommendation information to the user terminal 160.
  • the server cluster 140 and the user terminal 160 are connected by a communication network.
  • the communication network is a wired network or a wireless network.
  • a user client is run in the user terminal 160, and a user account is registered in the user client.
  • the user terminal 160 can also be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, and the like.
  • the user client can be a social network client, and can also be other clients with social attributes, such as a shopping client, a game client, a reading client, a client dedicated to sending recommendation information, and the like.
  • when the payer terminal 120 delivers the recommendation information to the server cluster 140, the payer terminal 120 can specify an orientation label on the server cluster 140; the server cluster 140 determines the target user client according to the orientation label, and then transmits the recommendation information to the user terminal 160 where the target user client is located.
  • FIG. 2 is a flowchart of a method for pushing information according to an embodiment of the present application.
  • This embodiment is exemplified by the information pushing method applied to the information recommendation system shown in FIG. 1.
  • the terminals in the following embodiments are all user terminals 160 in the information recommendation system.
  • the information push method includes:
  • Step 201 Acquire ambient audio data, where the ambient audio data is used to indicate a sound signal of a scene in which the terminal is located.
  • the scene detection function is enabled, and m kinds of sound signals of the scene where the terminal is located are collected in real time, and the environment audio data is generated according to the m kinds of sound signals.
  • m is a positive integer.
  • the preset control is a control provided on a main interface of the scene detection application in the terminal, or a control displayed after the floating window corresponding to the scene detection application is expanded.
  • the preset control is an actionable control for turning on the scene detection function.
  • the type of the preset control includes at least one of a button, a steerable item, and a slider. This embodiment does not limit the position and type of the preset control.
  • the preset triggering operation is a user operation for triggering the scene detection function corresponding to the preset control.
  • the preset triggering operation includes a combination of any one or more of a click operation, a slide operation, a press operation, and a long press operation.
  • the preset triggering operation also includes other possible implementations.
  • the preset triggering operation is implemented in a voice form.
  • the user inputs a voice signal corresponding to the preset control in the terminal; after obtaining the voice signal, the terminal parses it to obtain the voice content, and when the voice content matches the preset information corresponding to the preset control, the terminal determines that the preset control is triggered and starts the scene detection function.
  • the terminal collects m kinds of sound signals of the scene where the terminal is located in real time through the collection component.
  • the acquisition component is a voiceprint recognition sensor.
  • the terminal collects m kinds of sound signals of the scene where the terminal is located in real time through the acquisition component, and determines the collected m kinds of sound signals as environment audio data.
  • Step 202 Acquire a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data.
  • the training process of the scene classification model can be completed by the terminal or by the server. Therefore, the terminal acquires the scene classification model, including but not limited to the following two possible acquisition methods:
  • the terminal acquires a scene classification model stored by itself.
  • the terminal acquires a scene classification model from the server.
  • the terminal sends an acquisition request to the server, where the acquisition request is used to instruct the server to obtain the stored scene classification model.
  • the server acquires and sends the scene classification model to the terminal according to the acquisition request.
  • the terminal receives a scene classification model sent by the server. This embodiment does not limit this.
  • the following description takes the first possible acquisition method, in which the terminal acquires the trained scene classification model stored by itself, as an example.
  • the scene classification model is a model obtained by training a neural network using sample environment audio data.
  • the scene classification model is used to represent a correlation between the environment audio data and the target scene identifier.
  • the target scene identifier is used to indicate the scene type of the scene where the terminal is located; that is, the scene classification model is a neural network model for determining the scene type indicated by the environment audio data.
  • Step 203 Calculate the target scene identifier by using the scene classification model according to the environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located.
  • the terminal calculates the target scene identifier by using the scene classification model according to the environment audio data, and the method includes: the terminal inputs the environment audio data into the scene classification model, and outputs the target scene identifier.
  • the target scene identifier is used to indicate the scene type of the scene where the terminal is located at the current time, and the current time is the time at which the ambient audio data is acquired.
  • the scene identifier has a one-to-one correspondence with the scene type, that is, the scene identifier is used to uniquely identify the scene type in multiple scene types.
  • the division of multiple scene types includes but is not limited to the following possible divisions:
  • the scene type includes two types of indoor scenes and outdoor scenes.
  • the scene type includes three types of work area, home area, and entertainment area.
  • the scene type includes at least one of a restaurant, a transportation hub, a quiet area, and a tourist attraction.
  • the transportation hub includes at least one of a bus station, a subway station, a railway station, and an airport.
  • the quiet area includes at least one of a library, a museum, a hospital, and a court.
  • the number and type of the scene types are not limited. For convenience of description, only the scene type including at least one of a restaurant, a transportation hub, a quiet area, and a tourist attraction is taken as an example for description.
  • Step 204 Push the target recommendation information corresponding to the target scene identifier according to the first preset correspondence relationship, where the first preset correspondence relationship includes a correspondence between the scene identifier and the recommendation information.
  • the terminal pushes the target recommendation information corresponding to the target scenario identifier, including but not limited to the following possible implementation manners:
  • the terminal determines the target recommendation information corresponding to the target scenario identifier according to the first preset correspondence relationship stored by the terminal, and pushes the target recommendation information.
  • the terminal stores n recommendation information, and a first preset correspondence between the recommendation information and the scene identifier, where n is a positive integer.
  • the terminal after determining the target scenario identifier, sends the target scenario identifier to the server; correspondingly, the server receives the target scenario identifier.
  • the server determines the target recommendation information corresponding to the target scenario identifier according to the stored first preset correspondence, and feeds back the target recommendation information to the terminal.
  • the terminal receives the target recommendation information and displays the target recommendation information.
  • the server stores n recommendation information, and a first preset correspondence relationship between the scenario identifier and the recommendation information.
  • when the scene type indicated by the target scene identifier is a restaurant, the target recommendation information is determined as food information; or, when the scene type indicated by the target scene identifier is a transportation hub, the target recommendation information is determined as traffic information, where the transportation hub includes at least one of a bus station, a subway station, a railway station, and an airport; or, when the scene type indicated by the target scene identifier is a quiet area, the target recommendation information is determined as light music information, where the quiet area includes at least one of a library, a museum, a hospital, and a court; or, when the scene type indicated by the target scene identifier is a tourist attraction, the target recommendation information is determined as travel guide information.
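The scene-type-to-recommendation mapping just listed can be sketched as a lookup table; the dictionary form and key strings are assumptions for illustration:

```python
# First preset correspondence between scene types and recommendation
# categories, following the cases listed above.
FIRST_CORRESPONDENCE = {
    "restaurant": "food information",
    "transportation hub": "traffic information",
    "quiet area": "light music information",
    "tourist attraction": "travel guide information",
}

def target_recommendation(scene_type):
    # Determine the target recommendation information for a scene type.
    return FIRST_CORRESPONDENCE[scene_type]
```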
  • the server configures a corresponding scenario identifier for each recommendation information in advance, and the first preset correspondence relationship between the scenario identifier and the recommendation information includes the following three possible correspondences:
  • the first possible correspondence is that there is a one-to-one correspondence between each scene identifier and the recommendation information.
  • the correspondence is shown in Table 1.
  • the scene identifier "Scene ID 1" is used to indicate that the scene type is a restaurant, and the corresponding recommendation information is "Recommendation Information S1"; "Scene ID 2" is used to indicate that the scene type is a transportation hub, and the corresponding recommendation information is "Recommendation Information S2"; "Scene ID 3" is used to indicate that the scene type is a quiet area, and the corresponding recommendation information is "Recommendation Information S3"; "Scene ID 4" is used to indicate that the scene type is a tourist attraction, and the corresponding recommendation information is "Recommendation Information S4".
  • the second possible correspondence is that each piece of recommendation information corresponds to multiple scene identifiers.
  • the correspondence is shown in Table 2.
  • when the recommendation information is Recommendation Information S1, the corresponding scene identifiers include scene identifier 1 and scene identifier 3, where scene identifier 1 indicates that the scene type is a restaurant and scene identifier 3 indicates that the scene type is a quiet area; when the recommendation information is Recommendation Information S2, the corresponding scene identifiers include scene identifier 2 and scene identifier 4, where scene identifier 2 indicates that the scene type is a transportation hub and scene identifier 4 indicates that the scene type is a tourist attraction.
  • the third possible correspondence is that each scene identifier corresponds to multiple pieces of recommendation information.
  • the correspondence is shown in Table 3.
  • for example, one scene identifier corresponds to "Recommendation Information S1", "Recommendation Information S2", and "Recommendation Information S3", while another scene identifier corresponds to "Recommendation Information S4", "Recommendation Information S5", "Recommendation Information S6", and "Recommendation Information S7".
  • in this case, the server determines the target recommendation information corresponding to the target scene identifier according to the first preset correspondence by determining the multiple pieces of recommendation information corresponding to the target scene identifier and randomly selecting at least one of them as the target recommendation information.
  • the number of the target recommendation information may be one or at least two, which is not limited in this embodiment.
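The one-to-many case with random selection can be sketched as follows; the identifier strings and dictionary layout are assumptions, and `random.sample` stands in for whatever selection the server actually uses:

```python
import random

# Table 3-style one-to-many correspondence (identifiers assumed).
ONE_TO_MANY = {
    "scene_id_1": ["Recommendation Information S1",
                   "Recommendation Information S2",
                   "Recommendation Information S3"],
}

def pick_targets(scene_id, count=1):
    # Determine the candidate recommendations for the target scene
    # identifier, then randomly choose at least one as the target.
    candidates = ONE_TO_MANY[scene_id]
    return random.sample(candidates, count)
```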
  • after receiving the target recommendation information corresponding to the target scene identifier fed back by the server, the terminal displays the target recommendation information according to the preset display policy.
  • for the preset display strategy, refer to the related description in the following embodiments, which will not be introduced here.
  • the embodiment of the present application calculates the target scene identifier by using the scene classification model according to the obtained environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located, and pushes the target recommendation information corresponding to the target scene identifier according to the first preset correspondence; because the target recommendation information is determined according to the target scene identifier, the information pushed by the terminal conforms to the scene type of the terminal's current scene and satisfies the personalized requirements of the user, thereby improving the effect of the recommendation information and saving computing resources on the information recommendation platform.
  • the training process of the scene classification model includes: acquiring a training sample set, where the training sample set includes at least one set of sample data; and training the original parameter model by using an error back propagation algorithm according to the at least one set of sample data to obtain the scene classification model.
  • Each set of sample data sets includes: sample environment audio data and pre-labeled correct scene identification.
  • the terminal trains the original parameter model by using an error back propagation algorithm according to at least one set of sample data sets, and obtains a scene classification model, including but not limited to the following steps:
  • the terminal calculates the feature vector by using the feature extraction algorithm according to the sample environment audio data, and determines the calculated feature vector as the sample audio feature.
  • the terminal calculates the feature vector by using the feature extraction algorithm according to the sample environment audio data, including: performing preprocessing and feature extraction on the collected sample environment audio data, and determining the feature extracted data as the feature vector.
  • Preprocessing is the process of processing the sample environment audio data collected by the acquisition component to obtain sample audio features in the form of semi-structured data.
  • the preprocessing includes steps of information compression, noise reduction, and data normalization.
  • Feature extraction is the process of extracting some features from pre-processed sample audio features and converting some features into structured data.
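The preprocessing and feature extraction steps can be sketched minimally; the moving-average noise reduction and the particular features (average energy, zero crossings) are assumed stand-ins, since the patent does not specify the algorithms:

```python
def preprocess(samples):
    # Noise reduction via a 3-point moving average (an assumed, minimal
    # stand-in for the unspecified method), followed by peak normalization.
    smoothed = [(samples[i - 1] + samples[i] + samples[i + 1]) / 3
                for i in range(1, len(samples) - 1)]
    peak = max(abs(s) for s in smoothed) or 1.0
    return [s / peak for s in smoothed]

def extract_features(samples):
    # Convert the signal into a small structured feature vector:
    # average energy and the number of zero crossings.
    energy = sum(s * s for s in samples) / len(samples)
    zero_crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return [energy, zero_crossings]
```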
  • the original parameter model is established according to a neural network model, for example, the original parameter model is established according to a DNN model or an RNN model.
  • for each set of sample data, the terminal creates an input-output pair, where the input parameter of the input-output pair is the sample audio feature in the set and the output parameter is the correct scene identifier in the set; the terminal then inputs the input parameter into the original parameter model to obtain the training result.
  • for example, a sample data set includes sample audio feature A and the correct scene identifier "Scene ID 1"; the input-output pair created by the terminal is (sample audio feature A) -> (Scene ID 1), where (sample audio feature A) is the input parameter and (Scene ID 1) is the output parameter.
  • the input and output pairs are represented by a feature vector.
  • the training result is compared with the correct scene identifier to obtain a calculation loss, and the calculation loss is used to indicate the error between the training result and the correct scene identifier.
  • the terminal calculates the calculation loss H(p, q) by the cross-entropy formula H(p, q) = -Σx p(x)·log q(x), where p(x) and q(x) are discretely distributed vectors of equal length, p(x) represents the training result, q(x) represents the output parameter, and x traverses the components of the training result or the output parameter.
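The cross-entropy loss can be computed directly from two discrete distributions; in this sketch the first argument is treated as the reference distribution and the second as the prediction, which is the conventional reading of H(p, q):

```python
import math

def cross_entropy(p, q):
    # H(p, q) = -sum over x of p(x) * log(q(x)), for two discrete
    # distributions of equal length; terms with p(x) == 0 contribute 0.
    return -sum(px * math.log(qx) for px, qx in zip(p, q) if px > 0)

# A one-hot reference against a uniform prediction gives a loss of log(2).
loss = cross_entropy([1.0, 0.0], [0.5, 0.5])
```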
  • the error back propagation algorithm is used to train the scene classification model according to the calculation loss.
  • the terminal determines a gradient direction of the scene classification model according to the calculation loss by using a back propagation algorithm, and updates the model parameters in the scene classification model layer by layer from the output layer of the scene classification model.
  • In summary, the process by which the terminal trains the scene classification model includes: the terminal acquires a training sample set, where the training sample set includes at least one sample data set, each sample data set including sample environment audio data and a pre-labeled correct scene identifier; the terminal inputs the sample environment audio data into the original parameter model, outputs the training result, and compares the training result with the correct scene identifier to obtain the calculation loss; and, according to the calculation loss corresponding to each of the at least one sample data set, the error back-propagation algorithm is used to train the scene classification model. After the scene classification model is obtained by training, the terminal stores the trained scene classification model.
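  • The training loop described above can be sketched with a minimal single-layer softmax classifier standing in for the "original parameter model"; the features, scene indices, and hyperparameters below are hypothetical, and for one layer the gradient step shown is the degenerate case of error back-propagation:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(samples, n_features, n_scenes, lr=0.5, epochs=200):
    """samples: list of (sample audio feature vector, correct scene index).
    Repeatedly computes the cross-entropy gradient and updates the weights."""
    w = [[0.0] * n_features for _ in range(n_scenes)]
    for _ in range(epochs):
        for x, y in samples:
            p = softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in w])
            for k in range(n_scenes):
                g = p[k] - (1.0 if k == y else 0.0)   # dLoss/dscore_k
                for j in range(n_features):
                    w[k][j] -= lr * g * x[j]
    return w

def predict(w, x):
    scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return scores.index(max(scores))

# Toy sample data sets: (sample audio feature, correct scene index).
samples = [([1.0, 0.0], 0), ([0.9, 0.1], 0), ([0.0, 1.0], 1), ([0.1, 0.9], 1)]
model = train(samples, n_features=2, n_scenes=2)
```

  • A real scene classification model would use a deeper network and propagate the gradient layer by layer from the output layer, as the embodiment describes.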
  • When the terminal enables the scene detection function, the terminal acquires environment audio data, obtains the trained scene classification model, inputs the environment audio data into the scene classification model, outputs the target scene identifier, and pushes the target recommendation information corresponding to the target scene identifier according to the first preset correspondence.
  • the scenario classification model is obtained based on the above training.
  • Referring to FIG. 4, which is a flowchart of an information push method provided by an embodiment of the present application. This embodiment is exemplified by applying the information push method to the information recommendation system shown in FIG. 1.
  • the information push method includes:
  • Step 401: Obtain original recommendation information to be pushed, where the original recommendation information carries an original scene identifier.
  • the server sends the original recommendation information carrying the original scene identifier to the terminal.
  • the terminal receives the original recommendation information sent by the server, and extracts the original scene identifier from the original recommendation information.
  • the terminal receives the original recommendation information sent by the server in real time or every preset time period.
  • the preset time period is set by default or is user-defined. This embodiment does not limit this.
  • Step 402: Acquire ambient audio data, where the ambient audio data is used to indicate a sound signal of the scene in which the terminal is located.
  • When the terminal detects the preset trigger operation corresponding to the preset control, the scene detection function is enabled.
  • the terminal collects m kinds of sound signals of the scene where the terminal is located through the collecting component, and generates environmental audio data according to the m kinds of sound signals.
  • Step 403: Extract audio features from the ambient audio data.
  • the terminal calculates the feature vector by using the feature extraction algorithm according to the collected ambient audio data, and determines the calculated feature vector as the audio feature.
  • the process of extracting audio features from the environment audio data refer to the process of extracting sample audio features from the sample environment audio data in the foregoing embodiment, and details are not described herein again.
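  • The patent does not fix a particular feature set, so as a hedged illustration only, per-frame energy and zero-crossing rate are two simple audio features that could be extracted here (MFCCs or spectral features are common alternatives):

```python
import math

def extract_audio_features(signal, frame_size=256):
    """Split the sampled signal into frames and compute, per frame,
    the average energy and the zero-crossing rate."""
    features = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size]
        energy = sum(s * s for s in frame) / frame_size
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_size
        features.append((energy, zcr))
    return features

# Hypothetical ambient audio: a 440 Hz tone sampled at 8 kHz.
tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(512)]
feats = extract_audio_features(tone)
```

  • The resulting feature vector is what step 405 would feed into the scene classification model.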
  • Step 404 Acquire a scene classification model.
  • the scene classification model obtained by the above training is stored in the terminal, and the terminal acquires the stored scene classification model.
  • Optionally, the scene classification model is trained according to at least one sample data set, and each sample data set includes: sample environment audio data and a pre-labeled correct scene identifier.
  • Step 405 Input the audio feature into the scene classification model, and calculate the target scene identifier.
  • the terminal inputs the audio feature into the scene classification model to obtain the target scene identifier.
  • the terminal adds the environment audio data and the target scene identifier to the training sample set, obtains the updated training sample set, and trains the scene classification model according to the updated training sample set to obtain the updated scene classification model.
  • the process of training the scene classification model according to the updated training sample set, and the process of obtaining the updated scene classification model can be analogized with reference to the training process of the scene classification model in the foregoing embodiment, and details are not described herein again.
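  • The update step described above (fold the newly classified sample back into the training sample set, then retrain) can be sketched as follows; the toy "training" function here is a hypothetical placeholder, not the patent's neural-network training:

```python
from collections import Counter

def train_majority(training_set):
    """Toy stand-in for retraining: returns the most common scene
    identifier in the training sample set."""
    counts = Counter(scene_id for _, scene_id in training_set)
    return counts.most_common(1)[0][0]

def update_and_retrain(train_fn, training_set, audio_feature, target_scene_id):
    """Append the (environment audio feature, target scene identifier) pair
    to the training sample set and retrain to obtain the updated model."""
    training_set.append((audio_feature, target_scene_id))
    return train_fn(training_set)

training_set = [([0.1], "scene identifier 1"), ([0.2], "scene identifier 2")]
model = update_and_retrain(train_majority, training_set, [0.3], "scene identifier 2")
```

  • With each detection, the training sample set grows, which is how the terminal continuously improves the accuracy of the scene classification model.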
  • It should be noted that the process of obtaining the original recommendation information to be pushed in step 401 and the process of calculating the target scene identifier in steps 402 to 405 may be performed in parallel, or steps 402 to 405 may be performed first and step 401 afterwards. This embodiment does not limit this.
  • Step 406 When the original scene identifier does not match the target scene identifier, obtain the target recommendation information corresponding to the target scene identifier according to the first preset correspondence.
  • The terminal determines whether the original scene identifier matches the target scene identifier. If the original scene identifier matches the target scene identifier, the original recommendation information is determined as the target recommendation information; if the original scene identifier does not match the target scene identifier, the terminal acquires, according to the first preset correspondence, the target recommendation information corresponding to the target scene identifier.
  • The scene type usually includes an indoor scene and an outdoor scene. When the scene type indicated by the target scene identifier is an outdoor scene, the user of the terminal at the current moment is outdoors and tends to have a higher interest in the recommendation information. Therefore, in a possible implementation, the terminal determines the scene type indicated by the target scene identifier, and obtains the target recommendation information corresponding to the target scene identifier according to the first preset correspondence when the scene type is an outdoor scene.
  • Current positioning technology is usually implemented based on the geographic location information of the terminal, and can only locate the large area where the terminal is currently located; it cannot determine the specific location of the terminal within that area. For example, current positioning technology can only locate the terminal in a certain mall, but cannot determine where in the mall the terminal is; similarly, it can only locate the terminal in an office building, and determining the specific floor of the office building, or the specific place on that floor, requires further combination with height data or indoor positioning technology, which makes the calculation very complicated.
  • In another possible implementation, the terminal pushes the target recommendation information corresponding to the target scene identifier by: acquiring real-time geographic location information of the terminal, where the real-time geographic location information is used to indicate the target area where the terminal is currently located, and the target area includes k candidate locations, k being a positive integer; determining the scene type indicated by the target scene identifier; determining the candidate location in the target area that matches the scene type as the designated location; and pushing, according to a second preset correspondence, the target recommendation information corresponding to the designated location, where the second preset correspondence includes a correspondence between candidate locations and recommendation information.
  • Optionally, the terminal acquires its real-time geographic location information by using a Location Based Service (LBS) technology, for example through a Global Positioning System (GPS), or through a positioning technology based on a wireless local area network or a mobile communication network.
  • The target area includes k candidate locations. Schematically, when the target area is an office building, the k candidate locations include at least one of an office, a conference room, a lounge, and a bathroom on each floor of the office building.
  • Among the k candidate locations, the candidate location corresponding to the scene type is determined as the designated location, and the target recommendation information corresponding to the designated location is pushed according to the second preset correspondence.
  • the terminal stores a correspondence between the candidate location and the recommendation information.
  • the correspondence between the candidate site and the recommendation information may be analogous to the first preset correspondence relationship, and details are not described herein again.
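  • A minimal sketch of this location-matching step follows; the candidate locations, scene types, and second preset correspondence entries are hypothetical examples, not values defined by the patent:

```python
# Candidate locations in the target area, each tagged with a scene type.
candidate_locations = {
    "office": "work area",
    "conference room": "work area",
    "lounge": "quiet area",
}

# Second preset correspondence: candidate location -> recommendation information.
second_correspondence = {
    "lounge": "light music information",
    "office": "work-related information",
}

def push_for_area(locations, correspondence, target_scene_type):
    """Determine the candidate location whose scene type matches the type
    indicated by the target scene identifier, then look up the target
    recommendation information for that designated location."""
    for location, scene_type in locations.items():
        if scene_type == target_scene_type:
            return correspondence.get(location)
    return None  # no candidate location in the target area matches

info = push_for_area(candidate_locations, second_correspondence, "quiet area")
```

  • Because the designated location is chosen from scene type rather than from precise coordinates, no height data or indoor positioning is needed, which is the efficiency gain the embodiment claims.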
  • Step 407: Display the target recommendation information.
  • When the terminal acquires the target recommendation information corresponding to the target scene identifier, the terminal displays the target recommendation information.
  • Optionally, before the terminal displays the target recommendation information, the method further includes: the terminal determines, according to a third preset correspondence, a display frequency threshold corresponding to the target scene identifier, where the third preset correspondence includes a correspondence between scene identifiers and display frequency thresholds; and, when the display frequency is less than or equal to the display frequency threshold, the step of displaying the target recommendation information is performed.
  • The display frequency is the number of times the recommendation information is displayed in a first predetermined time period, and the display frequency threshold is the maximum number of times the recommendation information may be displayed in the first predetermined time period. Optionally, the display frequency threshold is set by the terminal by default or is user-defined; for example, the first predetermined time period is 1 hour, and the display frequency threshold is 5 times/hour. This embodiment does not limit this.
  • Schematically, the third preset correspondence between the scene identifier and the display frequency threshold is as shown in Table 4: when the scene identifier is "scene identifier 1", the corresponding display frequency threshold is "3 times/hour"; when the scene identifier is "scene identifier 2", the corresponding display frequency threshold is "1 time/hour"; when the scene identifier is "scene identifier 3", the corresponding display frequency threshold is "2 times/hour"; and when the scene identifier is "scene identifier 4", the corresponding display frequency threshold is "5 times/hour".

    Table 4

    Scene identifier      Display frequency threshold
    Scene identifier 1    3 times/hour
    Scene identifier 2    1 time/hour
    Scene identifier 3    2 times/hour
    Scene identifier 4    5 times/hour
  • For example, the terminal acquires the target recommendation information "recommendation information S1" corresponding to the target scene identifier "scene identifier 1". The terminal determines that the display frequency threshold corresponding to "scene identifier 1" is "3 times/hour"; when the current display frequency is "2 times/hour", that is, the display frequency "2 times/hour" is less than or equal to the display frequency threshold, the target recommendation information "recommendation information S1" is displayed.
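  • The frequency check above can be sketched as a small throttle; the thresholds and timestamps are hypothetical, and a real terminal would persist the display history rather than keep it in memory:

```python
import time

class DisplayThrottle:
    """Tracks displays of recommendation information within a sliding
    one-hour window and enforces the per-scene display frequency threshold
    from the third preset correspondence."""

    def __init__(self, thresholds, window_seconds=3600):
        self.thresholds = thresholds   # scene identifier -> max displays/window
        self.window = window_seconds
        self.history = {}              # scene identifier -> display timestamps

    def may_display(self, scene_id, now=None):
        now = time.time() if now is None else now
        # Drop displays that fell out of the current window.
        shown = [t for t in self.history.get(scene_id, []) if now - t < self.window]
        self.history[scene_id] = shown
        if len(shown) < self.thresholds.get(scene_id, 0):
            shown.append(now)          # record this display
            return True
        return False

throttle = DisplayThrottle({"scene identifier 1": 3})
results = [throttle.may_display("scene identifier 1", now=100.0) for _ in range(4)]
```

  • With a threshold of "3 times/hour", the first three display attempts within the hour succeed and the fourth is suppressed.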
  • It should be noted that, when the original scene identifier does not match the target scene identifier, the terminal may replace the original recommendation information with the target recommendation information according to the foregoing method and display the target recommendation information; alternatively, the terminal may not display the original recommendation information, or may display the original recommendation information after a delay of a predetermined time period. Optionally, the predetermined time period is set by the terminal by default or is user-defined; for example, the predetermined time period is 60 minutes. This embodiment does not limit this.
  • In a schematic example, the recommendation information is advertisement information. The terminal receives advertisement information 50 to be pushed sent by the server, and extracts scene identifier 51 from the advertisement information 50. The terminal collects various sound signals 52 of the location where the terminal is located through a built-in voiceprint recognition sensor, determines the various sound signals 52 as ambient audio data 1, and extracts audio feature 1 from the ambient audio data 1. The audio feature 1 is input into the scene classification model to obtain target scene identifier 53. It is determined whether the scene identifier 51 matches the target scene identifier 53: if the scene identifier 51 matches the target scene identifier 53, the advertisement information to be pushed is displayed; if the scene identifier 51 does not match the target scene identifier 53, advertisement information 54 corresponding to the target scene identifier 53 is displayed according to the first preset correspondence.
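  • The match-or-substitute decision in that example reduces to a simple lookup; the identifiers and advertisement strings below are hypothetical placeholders for the numbered elements above:

```python
def choose_advertisement(original_ad, original_scene_id, target_scene_id,
                         first_correspondence):
    """If the scene identifier carried by the pushed advertisement matches the
    scene identifier inferred from the ambient audio, display the advertisement
    as-is; otherwise substitute the one mapped to the detected scene."""
    if original_scene_id == target_scene_id:
        return original_ad
    return first_correspondence[target_scene_id]

# Hypothetical first preset correspondence: scene identifier -> advertisement.
first_correspondence = {
    "scene identifier 1": "gourmet advertisement",
    "scene identifier 2": "transportation advertisement",
}

shown = choose_advertisement("advertisement 50", "scene identifier 2",
                             "scene identifier 1", first_correspondence)
```

  • Here the carried identifier and the detected identifier differ, so the displayed advertisement is the one mapped to the detected scene rather than the original one.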
  • In this embodiment, the environment audio data and the target scene identifier are added to the training sample set to obtain an updated training sample set, and the scene classification model is trained according to the updated training sample set to obtain an updated scene classification model, which enables the terminal to continuously improve the accuracy of the scene classification model with new training samples and improves the accuracy with which the terminal determines the target scene identifier.
  • In this embodiment, the terminal acquires its real-time geographic location information, where the real-time geographic location information is used to indicate the target area where the terminal is currently located, and the target area includes k candidate locations; the terminal determines the scene type indicated by the target scene identifier, determines the candidate location in the target area that matches the scene type as the designated location, and pushes the target recommendation information corresponding to the designated location according to the second preset correspondence. This avoids the situation in the related art in which LBS technology must be combined with height data or indoor positioning technology to achieve precise positioning; the terminal can determine the candidate location in the target area that matches the scene type as the designated location directly according to the scene type indicated by the target scene identifier, thereby improving positioning accuracy and positioning efficiency.
  • FIG. 6 is a schematic structural diagram of an information pushing apparatus according to an embodiment of the present application.
  • The information pushing device can be implemented as all or a part of the terminal in FIG. 1 by using a dedicated hardware circuit or a combination of software and hardware. The information pushing device includes: a first obtaining module 610, a second obtaining module 620, a calculation module 630, and a pushing module 640.
  • the first obtaining module 610 is configured to acquire environment audio data, where the ambient audio data is used to indicate a sound signal of a scene where the terminal is located;
  • the second obtaining module 620 is configured to acquire a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data;
  • the calculation module 630 is configured to calculate, according to the environment audio data, the target scene identifier by using the scene classification model, where the target scene identifier is used to indicate a scene type of the scene where the terminal is located;
  • the pushing module 640 is configured to: according to the first preset correspondence, push the target recommendation information corresponding to the target scene identifier, where the first preset correspondence relationship includes a correspondence between the scene identifier and the recommendation information.
  • the calculating module 630 includes: an extracting unit and a calculating unit.
  • An extracting unit configured to extract an audio feature from the ambient audio data
  • a calculating unit configured to input the audio feature into the scene classification model, and calculate the target scene identifier
  • the scene classification model is trained according to at least one set of sample data sets, and each set of sample data sets includes: sample environment audio data and a pre-labeled correct scene identifier.
  • the second obtaining module 620 includes: a first acquiring unit and a training unit.
  • a first acquiring unit configured to acquire a training sample set, where the training sample set includes at least one set of sample data sets, where each set of sample data sets includes: sample environment audio data and a pre-labeled correct scene identifier;
  • the training unit is configured to train the original parameter model by using an error back propagation algorithm according to at least one set of sample data sets to obtain a scene classification model.
  • Optionally, the training unit is further configured to: extract sample audio features from the sample environment audio data for each sample data set in the at least one sample data set; input the sample audio features into the original parameter model to obtain the training result; compare the training result with the correct scene identifier to obtain the calculation loss, where the calculation loss is used to indicate the error between the training result and the correct scene identifier; and train, according to the calculation loss corresponding to each of the at least one sample data set, the scene classification model by using the error back-propagation algorithm.
  • the device further includes: an update module.
  • an update module configured to add the environment audio data and the target scene identifier to the training sample set to obtain the updated training sample set; and train the scene classification model according to the updated training sample set to obtain the updated scene classification model.
  • the first obtaining module 610 includes: an opening unit, an acquiring unit, and a generating unit.
  • the opening unit is configured to enable the scene detecting function when the preset triggering operation corresponding to the preset control is detected;
  • the acquiring unit is configured to collect m kinds of sound signals of the scene where the terminal is located in real time, where m is a positive integer;
  • a generating unit configured to generate ambient audio data according to the m kinds of sound signals.
  • the pushing module 640 includes: a second acquiring unit, a third acquiring unit, and a display unit.
  • a second obtaining unit configured to obtain original recommendation information to be pushed, where the original recommendation information carries an original scene identifier
  • a third acquiring unit configured to acquire target recommendation information corresponding to the target scene identifier according to the first preset correspondence relationship when the original scene identifier does not match the target scene identifier;
  • a display unit for displaying target recommendation information.
  • Optionally, the third acquiring unit is further configured to: determine that the target recommendation information is gourmet information when the scene type indicated by the target scene identifier is a restaurant; or determine that the target recommendation information is transportation information when the scene type indicated by the target scene identifier is a transportation hub, where the transportation hub includes at least one of a bus station, a subway station, a railway station, and an airport; or determine that the target recommendation information is light music information when the scene type indicated by the target scene identifier is a quiet area, where the quiet area includes at least one of a library, a museum, a hospital, and a court; or determine that the target recommendation information is travel guide information when the scene type indicated by the target scene identifier is a tourist attraction.
  • the pushing module 640 includes: a fourth acquiring unit, a first determining unit, a second determining unit, and a pushing unit.
  • a fourth acquiring unit configured to acquire real-time geographic location information of the terminal, where the real-time geographic location information is used to indicate a target area where the terminal is currently located, where the target area includes k candidate locations, where k is a positive integer;
  • a first determining unit configured to determine a scene type indicated by the target scene identifier
  • a second determining unit configured to determine that the candidate location in the target area that matches the scene type is the designated location
  • the pushing unit is configured to: push the target recommendation information corresponding to the specified location according to the second preset correspondence, where the second preset correspondence includes a correspondence between the candidate location and the recommendation information.
  • the display module is configured to not display the original recommendation information when the original scene identifier does not match the target scene identifier; or display the original recommendation information after delaying the predetermined period of time.
  • It should be noted that the first obtaining module 610 and the second obtaining module 620 are further configured to implement any other implicit or disclosed function related to the obtaining steps in the foregoing method embodiments; the calculation module 630 is further configured to implement any other implicit or disclosed function related to the calculating steps in the foregoing method embodiments; and the pushing module 640 is further configured to implement any other implicit or disclosed function related to the pushing steps in the foregoing method embodiments.
  • the present application further provides a computer readable medium having program instructions stored thereon, and when the program instructions are executed by the processor, the information pushing method provided by the foregoing method embodiments is implemented.
  • the present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the information push method described in the various embodiments above.
  • FIG. 7 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Optionally, the terminal is the user terminal 160 in FIG. 1.
  • the terminal can include one or more of the following components: a processor 710 and a memory 720.
  • Processor 710 can include one or more processing cores.
  • The processor 710 connects various portions of the entire terminal by using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 720 and invoking the data stored in the memory 720.
  • Optionally, the processor 710 is implemented in hardware form using at least one of digital signal processing (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • The processor 710 can integrate one or a combination of a central processing unit (CPU), a modem, and the like, where the CPU mainly handles the operating system and applications, and the modem is used to handle wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 710 and may instead be implemented separately by a single chip.
  • When the processor 710 executes the program instructions in the memory 720, the information push method provided by the foregoing method embodiments is implemented.
  • Optionally, the memory 720 may include a random access memory (RAM), and may also include a read-only memory (ROM).
  • the memory 720 includes a non-transitory computer-readable storage medium.
  • Memory 720 can be used to store instructions, programs, code, code sets, or sets of instructions.
  • the memory 720 can include a memory program area and a memory data area, wherein the memory program area can store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

The present application discloses an information push method, device, terminal, and storage medium, belonging to the field of terminal technologies. The method includes: acquiring environment audio data, where the environment audio data is used to indicate a sound signal of the scene where the terminal is located; acquiring a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data; calculating a target scene identifier by using the scene classification model according to the environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located; and pushing, according to a first preset correspondence, target recommendation information corresponding to the target scene identifier. In the embodiments of the present application, the target recommendation information is determined according to the target scene identifier; that is, the target recommendation information pushed by the terminal conforms to the scene type of the scene where the terminal is currently located, which satisfies the personalized needs of the user and thereby improves the delivery effect of the recommendation information.

Description

Information push method, device, terminal, and storage medium
The embodiments of the present application claim priority to the Chinese patent application No. 201711470476.1, entitled "Information Push Method, Device, Terminal and Storage Medium", filed on December 29, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of terminal technologies, and in particular, to an information push method, device, terminal, and storage medium.
Background
Information push refers to the process of pushing recommendation messages to a target user group.
At present, when pushing information to a terminal, the server first acquires user data of the terminal, where the user data includes user attribute information and user behavior data; based on the user data, the server selects a recommendation message matching the user data and pushes the recommendation message to the terminal; correspondingly, the terminal receives and displays the recommendation message.
Summary
The embodiments of the present application provide an information push method, device, terminal, and storage medium, which can be used to solve the problem of a low delivery effect of recommendation information. The technical solutions are as follows:
According to one aspect of the embodiments of the present application, an information push method is provided, the method including:
acquiring environment audio data, where the environment audio data is used to indicate a sound signal of the scene where the terminal is located;
acquiring a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data;
calculating a target scene identifier by using the scene classification model according to the environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located; and
pushing, according to a first preset correspondence, target recommendation information corresponding to the target scene identifier, where the first preset correspondence includes a correspondence between scene identifiers and recommendation information.
According to another aspect of the embodiments of the present application, an information pushing device is provided, the device including:
a first obtaining module, configured to acquire environment audio data, where the environment audio data is used to indicate a sound signal of the scene where the terminal is located;
a second obtaining module, configured to acquire a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data;
a calculation module, configured to calculate a target scene identifier by using the scene classification model according to the environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located; and
a pushing module, configured to push, according to a first preset correspondence, target recommendation information corresponding to the target scene identifier, where the first preset correspondence includes a correspondence between scene identifiers and recommendation information.
According to another aspect of the embodiments of the present application, a terminal is provided, the terminal including a processor, a memory connected to the processor, and program instructions stored on the memory, where the processor, when executing the program instructions, implements the information push method according to the first aspect of the present application and any of its optional embodiments.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which program instructions are stored, where the program instructions, when executed by a processor, implement the information push method according to the first aspect of the present application and any of its optional embodiments.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an information recommendation system provided by an embodiment of the present application;
FIG. 2 is a flowchart of an information push method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of principles involved in an information push method provided by an embodiment of the present application;
FIG. 4 is a flowchart of an information push method provided by another embodiment of the present application;
FIG. 5 is a schematic diagram of principles involved in an information push method provided by another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an information pushing device provided by an embodiment of the present application;
FIG. 7 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, the terms involved in the present application are introduced.
Scene classification model: a mathematical model for determining, according to input data, the scene identifier of the scene where the terminal is located.
Optionally, the scene classification model includes but is not limited to at least one of: a deep neural network (DNN) model, a recurrent neural network (RNN) model, an embedding model, a gradient boosting decision tree (GBDT) model, and a logistic regression (LR) model.
The DNN model is a deep learning framework. The DNN model includes an input layer, at least one hidden layer (or intermediate layer), and an output layer. Optionally, the input layer, the at least one hidden layer, and the output layer each include at least one neuron, and the neuron is used to process the received data. Optionally, the numbers of neurons in different layers may be the same, or may be different.
The RNN model is a neural network with a feedback structure. In the RNN model, the output of a neuron can act directly on itself at the next timestamp; that is, the input of an i-th-layer neuron at time m includes, in addition to the output of the (i-1)-th layer at that time, its own output at time (m-1).
The embedding model is based on distributed vector representations of entities and relations, and treats the relation in each triple instance as a translation from the head entity to the tail entity. A triple instance includes a subject, a relation, and an object, and can be represented as (subject, relation, object), where the subject is the head entity and the object is the tail entity. For example, if Xiao Zhang's father is Da Zhang, this is represented by the triple instance (Xiao Zhang, father, Da Zhang).
The GBDT model is an iterative decision tree algorithm consisting of multiple decision trees, with the results of all the trees accumulated as the final result. Each node of a decision tree yields a predicted value; taking age as an example, the predicted value is the average age of all the people belonging to the node corresponding to that age.
The LR model is a model built by applying a logistic function on the basis of linear regression.
During information push, the scene in which the terminal corresponding to the recommendation information is located can change. The pushed recommendation information is not necessarily suitable for the surroundings of the scene where the terminal is currently located, so the delivery effect of the recommendation information is poor, which in turn wastes the computing resources and delivery resources of the information recommendation platform. To this end, the embodiments of the present application provide a solution that determines the scene type of the scene where the terminal is located based on environment audio data, and thereby pushes recommendation information conforming to that scene type.
An embodiment of the present application provides an information push method, the method including:
acquiring environment audio data, where the environment audio data is used to indicate a sound signal of the scene where the terminal is located;
acquiring a scene classification model, where the scene classification model is used to represent a scene classification rule obtained by training based on sample environment audio data;
calculating a target scene identifier by using the scene classification model according to the environment audio data, where the target scene identifier is used to indicate the scene type of the scene where the terminal is located; and
pushing, according to a first preset correspondence, target recommendation information corresponding to the target scene identifier, where the first preset correspondence includes a correspondence between scene identifiers and recommendation information.
Optionally, calculating the target scene identifier by using the scene classification model according to the environment audio data includes:
extracting audio features from the environment audio data; and
inputting the audio features into the scene classification model to calculate the target scene identifier;
where the scene classification model is trained according to at least one sample data set, each sample data set including: sample environment audio data and a pre-labeled correct scene identifier.
Optionally, acquiring the scene classification model includes:
acquiring a training sample set, where the training sample set includes at least one sample data set, each sample data set including: sample environment audio data and a pre-labeled correct scene identifier; and
training an original parameter model by using an error back-propagation algorithm according to the at least one sample data set, to obtain the scene classification model.
Optionally, training the original parameter model by using the error back-propagation algorithm according to the at least one sample data set to obtain the scene classification model includes:
for each of the at least one sample data set, extracting sample audio features from the sample environment audio data;
inputting the sample audio features into the original parameter model to obtain a training result;
comparing the training result with the correct scene identifier to obtain a calculation loss, where the calculation loss is used to indicate the error between the training result and the correct scene identifier; and
training the scene classification model by using the error back-propagation algorithm according to the calculation loss corresponding to each of the at least one sample data set.
Optionally, after calculating the target scene identifier by using the scene classification model according to the environment audio data, the method further includes:
adding the environment audio data and the target scene identifier to the training sample set to obtain an updated training sample set; and
training the scene classification model according to the updated training sample set to obtain an updated scene classification model.
Optionally, acquiring the environment audio data includes:
enabling a scene detection function when a preset trigger operation corresponding to a preset control is detected;
collecting m kinds of sound signals of the scene where the terminal is located in real time, where m is a positive integer; and
generating the environment audio data according to the m kinds of sound signals.
Optionally, pushing the target recommendation information corresponding to the target scene identifier according to the first preset correspondence includes:
obtaining original recommendation information to be pushed, where the original recommendation information carries an original scene identifier;
when the original scene identifier does not match the target scene identifier, acquiring, according to the first preset correspondence, the target recommendation information corresponding to the target scene identifier; and
displaying the target recommendation information.
Optionally, acquiring the target recommendation information corresponding to the target scene identifier according to the first preset correspondence includes:
determining that the target recommendation information is gourmet information when the scene type indicated by the target scene identifier is a restaurant; or
determining that the target recommendation information is transportation information when the scene type indicated by the target scene identifier is a transportation hub, where the transportation hub includes at least one of a bus station, a subway station, a railway station, and an airport; or
determining that the target recommendation information is light music information when the scene type indicated by the target scene identifier is a quiet area, where the quiet area includes at least one of a library, a museum, a hospital, and a court; or
determining that the target recommendation information is travel guide information when the scene type indicated by the target scene identifier is a tourist attraction.
Optionally, pushing the target recommendation information corresponding to the target scene identifier according to the first preset correspondence includes:
acquiring real-time geographic location information of the terminal, where the real-time geographic location information is used to indicate the target area where the terminal is currently located, and the target area includes k candidate locations, k being a positive integer;
determining the scene type indicated by the target scene identifier;
determining the candidate location in the target area that matches the scene type as the designated location; and
pushing, according to a second preset correspondence, the target recommendation information corresponding to the designated location, where the second preset correspondence includes a correspondence between candidate locations and recommendation information.
Referring to FIG. 1, which shows a schematic structural diagram of an information recommendation system provided by an embodiment of the present application. The information recommendation system includes a publisher terminal 120, a server cluster 140, and at least one user terminal 160.
A publisher client runs in the publisher terminal 120. The publisher terminal 120 may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like.
The publisher client is a software client for delivering recommendation information on the information recommendation platform. The information recommendation platform is a platform for targeted delivery of recommendation information to target user clients.
Optionally, the recommendation information is information with recommendation value, such as advertisement information, multimedia information, or advisory information.
A publisher is a user or organization that delivers recommendation information on the information recommendation platform. When the recommendation information is advertisement information, the publisher is the advertiser.
The publisher terminal 120 and the server cluster 140 are connected via a communication network. Optionally, the communication network is a wired network or a wireless network.
The server cluster 140 is one server, or several servers, or a virtualization platform, or a cloud computing service center.
Optionally, the server cluster 140 includes a server for implementing the information recommendation platform, where the information recommendation platform includes: a server for sending recommendation information to the user terminal 160.
The server cluster 140 and the user terminal 160 are connected via a communication network. Optionally, the communication network is a wired network or a wireless network.
A user client runs in the user terminal 160, and a user account is logged into the user client. The user terminal 160 may also be a mobile phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a desktop computer, or the like. The user client may be a social network client, or another client with social attributes, such as a shopping client, a game client, a reading client, or a client dedicated to sending recommendation information.
Generally, when the publisher terminal 120 delivers recommendation information to the server cluster 140, the publisher terminal 120 may specify a targeting tag on the server cluster 140; the server cluster 140 determines the target user clients according to the targeting tag, and then sends the recommendation information to the user terminals 160 where the target user clients are located.
请参考图2,其示出了本申请一个实施例提供的信息推送方法的流程图。本实施例以该信息推送方法应用于图1所示出的信息推荐系统中来举例说明,为了方便介绍,下面实施例中的终端均为信息推荐系统中的用户终端160。该信息推送方法包括:
步骤201,获取环境音频数据,环境音频数据用于指示终端所处场景的声音信号。
当终端检测到预设控件对应的预设触发操作时,开启场景检测功能,实时采集终端所处场景的m种声音信号,根据m种声音信号,生成环境音频数据。其中,m为正整数。
其中,预设控件是终端中场景检测应用的主界面上提供的控件,或者,是场景检测应用对应的悬浮窗在展开后显示的控件。预设控件是用于开启场景检测功能的可操作控件。示意性的,预设控件的类型包括按钮、可操控的条目、滑块中的至少一种。本实施例对预设控件的位置和类型均不加以限定。
预设触发操作是用于触发开启预设控件对应的场景检测功能的用户操作。示意性的,预设触发操作包括点击操作、滑动操作、按压操作、长按操作中的任意一种或多种的组合。
可选的,预设触发操作还包括其它可能的实现方式。在一种可能的实现方式中,预设触发操作以语音形式实现。比如,用户在终端中以语音形式输入预设控件对应的语音信号,终端获取到语音信号之后,对该语音信号进行解析获取语音内容,当语音内容中存在与预设控件的预设信息相匹配的关键词时,即终端确定该预设控件被触发,开启场景检测功能。
可选的,在场景检测功能被开启时,终端通过采集组件实时采集终端所处场景的m种声音信号。比如,采集组件为声纹识别传感器。
终端通过采集组件实时采集终端所处场景的m种声音信号,将采集到的m种声音信号确定为环境音频数据。
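示意性的,"根据m种声音信号生成环境音频数据"的过程可草拟如下(此处以叠加并归一化m路离散采样信号代替实际采集组件的接口,属于简化假设):

```python
import numpy as np

def build_ambient_audio(signals):
    """将 m 种声音信号混合为一段环境音频数据(简化草图)。

    signals: 长度为 m 的列表,每个元素是等长的一维采样数组。
    返回归一化到 [-1, 1] 区间的混合信号。
    """
    assert len(signals) >= 1, "m 为正整数"
    mixed = np.sum(np.stack(signals), axis=0)   # 简单叠加 m 路信号
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 0 else mixed  # 归一化,防止削波

# 用法示例:两路假设的正弦信号(m = 2)
t = np.linspace(0, 1, 8000, endpoint=False)
env_audio = build_ambient_audio([np.sin(2 * np.pi * 440 * t),
                                 0.5 * np.sin(2 * np.pi * 880 * t)])
```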
步骤202,获取场景分类模型,场景分类模型用于表示基于样本环境音频数据进行训练得到的场景分类规律。
由于场景分类模型的训练过程可以由终端完成,也可以由服务器完成,因此终端获取场景分类模型,包括但不限于以下两种可能的获取方式:
在一种可能的获取方式中,终端获取自身存储的场景分类模型。
在另一种可能的获取方式中,终端从服务器中获取场景分类模型。示意性的,终端向服务器发送获取请求,该获取请求用于指示服务器获取存储的场景分类模型,对应的,服务器根据获取请求获取并向终端发送场景分类模型。终端接收服务器发送的场景分类模型。本实施例对此不加以限定。
下面仅以第一种可能的获取方式,即终端获取自身存储的训练好的场景分类模型为例进行说明。
可选的,场景分类模型是采用样本环境音频数据对神经网络进行训练得到的模型。
可选的,场景分类模型用于表示环境音频数据与目标场景标识之间的相关关系。其中,目标场景标识用于指示终端所处场景的场景类型。即,场景分类模型是具有对环境音频数据所指示的场景类型进行识别的神经网络模型。
需要说明的是,场景分类模型的训练过程可参考下面实施例中的相关描述,在此先不介绍。
步骤203,根据环境音频数据,采用场景分类模型计算得到目标场景标识,目标场景标识用于指示终端所处场景的场景类型。
可选的,终端根据环境音频数据,采用场景分类模型计算得到目标场景标识,包括:终端将环境音频数据输入至场景分类模型中,输出得到目标场景标识。
目标场景标识用于指示终端在当前时刻所处场景的场景类型,当前时刻为获取到环境音频数据的时刻。
其中,场景标识与场景类型存在一一对应的关系,即场景标识用于在多个场景类型中唯一标识该场景类型。多个场景类型的划分方式包括但不限于以下几种可能的划分方式:
在一种可能的划分方式中,场景类型包括室内场景和室外场景这两种类型。
在另一种可能的划分方式中,场景类型包括工作区域、家庭区域和娱乐区域这三种类型。
在另一种可能的划分方式中,场景类型包括餐厅、交通枢纽、安静区域和旅游景区中的至少一种类型。其中,交通枢纽包括公交站、地铁站、火车站和飞机场中的至少一种。安静区域包括图书馆、博物馆、医院和法院中的至少一种。本实施例对场景类型的划分数量和种类不加以限定,为了方便描述,下面仅以场景类型包括餐厅、交通枢纽、安静区域和旅游景区中的至少一种类型为例进行说明。
步骤204,根据第一预设对应关系,推送与目标场景标识对应的目标推荐信息,第一预设对应关系包括场景标识与推荐信息之间的对应关系。
可选的,终端根据第一预设对应关系,推送与目标场景标识对应的目标推荐信息,包括但不限于以下几种可能的实现方式:
在一种可能的实现方式中,终端根据自身存储的第一预设对应关系,确定与目标场景标识对应的目标推荐信息,并推送该目标推荐信息。
可选的,终端中存储有n个推荐信息,以及推荐信息与场景标识之间的第一预设对应关系,n为正整数。
在另一种可能的实现方式中,终端在确定出目标场景标识后,向服务器发送该目标场景标识;对应的,服务器接收目标场景标识。服务器根据存储的第一预设对应关系,确定与该目标场景标识对应的目标推荐信息,向终端反馈该目标推荐信息。对应的,终端接收该目标推荐信息,并显示该目标推荐信息。
可选的,服务器中存储有n个推荐信息,以及场景标识与推荐信息之间的第一预设对应关系。
比如,当目标场景标识所指示的场景类型为餐厅时,确定目标推荐信息为美食信息;或,当目标场景标识所指示的场景类型为交通枢纽时,确定目标推荐信息为交通信息,交通枢纽包括公交站、地铁站、火车站和飞机场中的至少一种;或,当目标场景标识所指示的场景类型为安静区域时,确定目标推荐信息为轻音乐信息,安静区域包括图书馆、博物馆、医院和法院中的至少一种;或,当目标场景标识所指示的场景类型为旅游景区时,确定目标推荐信息为旅游攻略信息。
下面仅以第二种可能的实现方式,即由服务器确定目标推荐信息并反馈给终端为例进行说明。
可选的,服务器预先为每个推荐信息配置对应的场景标识,场景标识与推荐信息之间存在第一预设对应关系,包括以下三种可能的对应关系:
第一种可能的对应关系为:每个场景标识与推荐信息存在一一对应关系。示意性的,该对应关系如表一所示。场景标识为“场景标识1”,“场景标识1”用于指示场景类型为餐厅时,对应的推荐信息为“推荐信息S1”;场景标识为“场景标识2”,“场景标识2”用于指示场景类型为交通枢纽时,对应的推荐信息为“推荐信息S2”;场景标识为“场景标识3”,“场景标识3”用于指示场景类型为安静区域时,对应的推荐信息为“推荐信息S3”;场景标识为“场景标识4”,“场景标识4”用于指示场景类型为旅游景区时,对应的推荐信息为“推荐信息S4”。
表一
场景标识 推荐信息
场景标识1 推荐信息S1
场景标识2 推荐信息S2
场景标识3 推荐信息S3
场景标识4 推荐信息S4
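示意性的,表一所示的一一对应关系可用字典结构草拟如下(数据取自表一,函数名为示例):

```python
# 表一:场景标识与推荐信息的一一对应关系(示例数据)
SCENE_TO_INFO = {
    "场景标识1": "推荐信息S1",  # 场景类型:餐厅
    "场景标识2": "推荐信息S2",  # 场景类型:交通枢纽
    "场景标识3": "推荐信息S3",  # 场景类型:安静区域
    "场景标识4": "推荐信息S4",  # 场景类型:旅游景区
}

def get_target_info(scene_id):
    """根据第一预设对应关系查找目标推荐信息,无对应项时返回 None。"""
    return SCENE_TO_INFO.get(scene_id)

print(get_target_info("场景标识1"))  # 输出:推荐信息S1
```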
第二种可能的对应关系为:每个推荐信息与多个场景标识存在对应关系。示意性的,该对应关系如表二所示。推荐信息为“推荐信息S1”时,对应的场景标识包括“场景标识1”和“场景标识3”,“场景标识1”用于指示场景类型为餐厅,“场景标识3”用于指示场景类型为安静区域;推荐信息为“推荐信息S2”时,对应的场景标识包括“场景标识2”和“场景标识4”,“场景标识2”用于指示场景类型为交通枢纽,“场景标识4”用于指示场景类型为旅游景区。
表二
推荐信息 场景标识
推荐信息S1 场景标识1、场景标识3
推荐信息S2 场景标识2、场景标识4
第三种可能的对应关系为:每个场景标识与多个推荐信息存在对应关系。示意性的,该对应关系如表三所示。场景标识为“场景标识1”时,对应的推荐信息包括“推荐信息S1”、“推荐信息S2”和“推荐信息S3”;场景标识为“场景标识2”时,对应的推荐信息包括“推荐信息S4”、“推荐信息S5”、“推荐信息S6”和“推荐信息S7”。
表三
场景标识 推荐信息
场景标识1 推荐信息S1、推荐信息S2、推荐信息S3
场景标识2 推荐信息S4、推荐信息S5、推荐信息S6、推荐信息S7
可选的,当第一预设对应关系为第三种可能的对应关系时,服务器根据第一预设对应关系,确定与目标场景标识对应的目标推荐信息,包括:根据第一预设对应关系,确定与目标场景标识对应的多个推荐信息,在多个推荐信息中随机将至少一个推荐信息确定为目标推荐信息。目标推荐信息的数量可以是一个或者是至少两个,本实施例对此不加以限定。
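示意性的,"在多个推荐信息中随机将至少一个推荐信息确定为目标推荐信息"可草拟如下(函数名与数据均为示例):

```python
import random

def pick_target_infos(candidates, count=1):
    """在与目标场景标识对应的多个推荐信息中随机确定至少一个目标推荐信息。

    count 超出候选数量时取全部,且至少取一个。
    """
    count = max(1, min(count, len(candidates)))
    return random.sample(candidates, count)   # 无放回随机抽取

infos = pick_target_infos(
    ["推荐信息S4", "推荐信息S5", "推荐信息S6", "推荐信息S7"], count=2)
```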
终端在接收到服务器反馈的与目标场景标识对应的目标推荐信息之后,按照预设显示策略显示该目标推荐信息。预设显示策略可参考下面实施例中的相关描述,在此先不介绍。
综上所述,本申请实施例通过根据获取的环境音频数据,采用场景分类模型计算得到目标场景标识,目标场景标识用于指示终端所处场景的场景类型;根据第一预设对应关系,推送与目标场景标识对应的目标推荐信息;使得目标推荐信息是根据目标场景标识确定的,即终端推送的目标推荐信息符合终端当前所处场景的场景类型,满足了用户的个性化需求,进而提高了推荐信息的投放效果,节省了信息推荐平台上的计算资源和投放资源。
需要说明的是,在终端获取场景分类模型之前,终端需要对场景分类模型进行训练。可选的,场景分类模型的训练过程包括:获取训练样本集,训练样本集包括至少一组样本数据组;根据至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到场景分类模型。
每组样本数据组包括:样本环境音频数据和预先标注的正确场景标识。
终端根据至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到场景分类模型,包括但不限于以下几个步骤:
1、对于至少一组样本数据组中的每组样本数据组,从样本环境音频数据中提取样本音频特征。
终端根据样本环境音频数据,采用特征提取算法计算得到特征向量,将计算得到的特征向量确定为样本音频特征。
可选的,终端根据样本环境音频数据,采用特征提取算法计算得到特征向量,包括:对采集到的样本环境音频数据进行预处理和特征提取,再将经过特征提取后的数据确定为特征向量。
预处理是将采集组件采集到的样本环境音频数据进行处理,得到半结构化数据形式的样本音频特征的过程。其中,预处理包括信息压缩、降噪和数据归一化等步骤。
特征提取是从预处理后的样本音频特征中提取部分特征,并将部分特征转换为结构化数据的过程。
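示意性的,上述预处理与特征提取两个步骤可草拟如下(此处以去直流、归一化代替降噪等预处理,以分帧对数能量作为示例特征;实际采用的特征提取算法本实施例并未限定):

```python
import numpy as np

def preprocess(audio):
    """预处理:去直流分量(近似简单降噪)并做数据归一化。"""
    audio = audio - np.mean(audio)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

def extract_features(audio, frame_len=256):
    """特征提取:分帧后计算每帧的对数能量,得到定长特征向量(示例特征)。"""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    energy = np.sum(frames ** 2, axis=1)
    return np.log(energy + 1e-8)     # 对数能量特征向量

feat = extract_features(preprocess(np.random.randn(2048)))
```

得到的特征向量即可作为样本音频特征输入原始参数模型。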
2、将样本音频特征输入原始参数模型,得到训练结果。
可选的,原始参数模型是根据神经网络模型建立的,比如:原始参数模型是根据DNN(Deep Neural Network,深度神经网络)模型或者RNN(Recurrent Neural Network,循环神经网络)模型建立的。
示意性的,对于每组样本数据组,终端创建该组样本数据组对应的输入输出对,输入输出对的输入参数为该组样本数据组中的样本音频特征,输出参数为该组样本数据组中的正确场景标识;终端将输入参数输入预测模型,得到训练结果。
比如,样本数据组包括样本音频特征A和正确场景标识“场景标识1”,终端创建的输入输出对为:(样本音频特征A)->(场景标识1);其中,(样本音频特征A)为输入参数,(场景标识1)为输出参数。
可选的,输入输出对通过特征向量表示。
3、将训练结果与正确场景标识进行比较,得到计算损失,计算损失用于指示训练结果与正确场景标识之间的误差。
可选的,计算损失通过交叉熵(cross-entropy)来表示。
可选的,终端通过下述公式计算得到计算损失H(p,q):
H(p,q) = -Σ_x p(x)·log q(x)
其中,p(x)和q(x)是长度相等的离散分布向量,p(x)表示训练结果;q(x)表示输出参数;x为训练结果或输出参数中的一个向量。
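示意性的,上述交叉熵公式可按如下方式计算(示例中一方取one-hot形式的正确场景标识分布,另一方取模型输出的概率分布,属于常见实现假设):

```python
import math

def cross_entropy(p, q):
    """计算离散分布 p、q 的交叉熵 H(p,q) = -Σ p(x)·log q(x)。"""
    assert len(p) == len(q), "p(x) 与 q(x) 是长度相等的离散分布向量"
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# 正确场景标识的 one-hot 分布与模型预测的概率分布(示例数据)
loss = cross_entropy([1.0, 0.0, 0.0], [0.7, 0.2, 0.1])
```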
4、根据至少一组样本数据组各自对应的计算损失,采用误差反向传播算法训练得到场景分类模型。
可选的,终端通过反向传播算法根据计算损失确定场景分类模型的梯度方向,从场景分类模型的输出层逐层向前更新场景分类模型中的模型参数。
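示意性的,"根据计算损失确定梯度方向,从输出层逐层向前更新模型参数"的训练循环可用一个极简的单层softmax分类器草拟如下(模型结构、样本数据与学习率均为假设的示例,实际的场景分类模型可根据DNN或RNN建立):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))          # 20 条样本音频特征,维度 4(假设数据)
y = (X[:, 0] > 0).astype(int)         # 假设的正确场景标识(两类)
W = np.zeros((4, 2))                  # 原始参数模型:单层线性 + softmax

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_of(W):
    q = softmax(X @ W)
    return -np.mean(np.log(q[np.arange(len(y)), y]))   # 交叉熵计算损失

loss_before = loss_of(W)
for _ in range(200):                  # 按梯度方向迭代更新模型参数
    q = softmax(X @ W)
    q[np.arange(len(y)), y] -= 1      # softmax+交叉熵的输出层梯度 ∂L/∂z
    W -= 0.1 * (X.T @ q) / len(y)     # 误差反向传播后的参数更新
loss_after = loss_of(W)
```

训练过程中计算损失逐步减小,即训练结果与正确场景标识之间的误差不断缩小。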
示意性的,如图3所示,终端训练得到场景分类模型的过程包括:终端获取训练样本集,该训练样本集包括至少一组样本数据组,每组样本数据组包括:样本环境音频数据和正确场景标识。对于每组样本数据组,终端将样本环境音频数据输入至原始参数模型,输出得到训练结果,将训练结果与正确场景标识进行比较,得到计算损失,根据至少一组样本数据组各自对应的计算损失,采用误差反向传播算法训练得到场景分类模型。在训练得到场景分类模型之后,终端将训练得到的场景分类模型进行存储。当终端开启场景检测功能时,终端获取环境音频数据,并获取训练得到的场景分类模型,将环境音频数据输入至场景分类模型,输出得到目标场景标识,根据第一预设对应关系推送与目标场景标识对应的目标推荐信息。
基于上述训练得到场景分类模型,请参考图4,其示出了本申请一个实施例提供的信息推送方法的流程图。本实施例以该信息推送方法应用于图1所示出的信息推荐系统中来举例说明。该信息推送方法包括:
步骤401,获取待推送的原始推荐信息,原始推荐信息携带有原始场景标识。
服务器向终端发送携带有原始场景标识的原始推荐信息,对应的,终端接收服务器发送的原始推荐信息,从原始推荐信息中提取原始场景标识。
可选的,终端实时或者每隔预设时间段接收服务器发送的原始推荐信息。预设时间段是默认设置的,或者是用户自定义设置的。本实施例对此不加以限定。
步骤402,获取环境音频数据,环境音频数据用于指示终端所处场景的声音信号。
可选的,当终端接收到服务器发送的原始推荐信息时,开启场景检测功能。或者,当终端检测到预设控件对应的预设触发操作时,开启场景检测功能。
终端通过采集组件采集终端所处场景的m种声音信号,根据m种声音信号生成环境音频数据。
需要说明的是,终端获取环境音频数据的过程可参考上述实施例中的相关细节,在此不再赘述。
步骤403,从环境音频数据中提取音频特征。
终端根据采集到的环境音频数据,采用特征提取算法计算得到特征向量,将计算得到的特征向量确定为音频特征。终端从环境音频数据中提取音频特征的过程可参考上述实施例中从样本环境音频数据中提取样本音频特征的过程,在此不再赘述。
步骤404,获取场景分类模型。
终端中存储有上述训练得到的场景分类模型,终端获取存储的场景分类模型。
其中,场景分类模型是根据至少一组样本数据组训练得到的,每组样本数据组包括:样本环境音频数据和预先标注的正确场景标识。
步骤405,将音频特征输入至场景分类模型中,计算得到目标场景标识。
终端将音频特征输入至场景分类模型中,得到目标场景标识。
可选的,终端将环境音频数据和目标场景标识添加至训练样本集,得到更新后的训练样本集,根据更新后的训练样本集对场景分类模型进行训练,得到更新后的场景分类模型。
其中,根据更新后的训练样本集对场景分类模型进行训练,得到更新后的场景分类模型的过程可类比参考上述实施例中场景分类模型的训练过程,在此不再赘述。
需要说明的是,上述步骤401(即获取待推送的原始推荐信息的过程)与步骤402至步骤405(即计算得到目标场景标识的过程)可以并行执行,也可以先执行步骤402至步骤405,再执行步骤401,本实施例对此不加以限定。
步骤406,当原始场景标识与目标场景标识不匹配时,根据第一预设对应关系,获取与目标场景标识对应的目标推荐信息。
终端判断原始场景标识与目标场景标识是否匹配,若原始场景标识与目标场景标识匹配,则将接收到的原始推荐信息确定为目标推荐信息;若原始场景标识与目标场景标识不匹配,则根据第一预设对应关系,获取与目标场景标识对应的目标推荐信息。
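示意性的,上述"匹配则推送原始推荐信息、不匹配则按第一预设对应关系换用目标推荐信息"的判断逻辑可草拟如下(对应关系中的数据为示例):

```python
def choose_push_info(original_id, original_info, target_id, first_mapping):
    """原始场景标识与目标场景标识匹配时返回原始推荐信息,
    否则按第一预设对应关系换用与目标场景标识对应的目标推荐信息。"""
    if original_id == target_id:
        return original_info
    return first_mapping.get(target_id)

# 假设的第一预设对应关系
mapping = {"场景标识1": "美食信息", "场景标识2": "交通信息"}
```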
由于根据粗粒度划分,场景类型通常包括室内场景和室外场景,当目标场景标识所指示的场景类型为室外场景时,即表示在当前时刻使用该终端的用户处于室外,对推荐信息的兴趣倾向普遍较高。因此,在一种可能的实现方式中,终端确定目标场景标识所指示的场景类型,当场景类型为室外场景时根据第一预设对应关系,获取与目标场景标识对应的目标推荐信息。
目前的定位技术通常是基于终端的实时地理位置信息进行定位的,但是,目前的定位技术仅能够定位到该终端当前所处的一个较大范围的区域,无法确定终端在该区域中具体的场所;比如,目前的定位技术仅能够定位到终端在某个商场,无法确定终端在该商场的哪个场所;又比如,目前的定位技术仅能够定位到终端在某个办公楼,若需要确定终端在该办公楼的具体楼层或者具体楼层的具体场所,则还需要结合高度数据或者室内定位技术进一步进行定位,计算十分复杂。
为此,本申请实施例提供如下方法解决上述问题。在一种可能的实现方式中,终端根据第一预设对应关系,推送与目标场景标识对应的目标推荐信息,包括:获取终端的实时地理位置信息,实时地理位置信息用于指示终端当前所处的目标区域,目标区域包括k个候选场所;确定目标场景标识所指示的场景类型;确定目标区域中与场景类型匹配的候选场所为指定场所;根据第二预设对应关系,推送与指定场所对应的目标推荐信息,第二预设对应关系包括候选场所与推荐信息之间的对应关系。其中,k为正整数。
可选的,终端通过基于位置的服务(Location Based Service,LBS)技术获取终端的实时地理位置信息。比如,终端通过全球卫星定位系统(Global Positioning System,GPS)、基于无线局域网或者移动通信网的定位技术获取用户的实时地理位置信息。
可选的,目标区域包括k个候选场所;示意性的,当目标区域为办公楼时,k个候选场所包括该办公楼中每层的办公室、会议室、休息室和卫生间中的至少一种。
当终端获取到目标场景标识所指示的场景类型时,在k个候选场所中确定与该场景类型对应的候选场所作为指定场所,根据第二预设对应关系,推送与指定场所对应的目标推荐信息。
可选的,终端中存储有候选场所与推荐信息之间的对应关系。候选场所与推荐信息之间的对应关系可类比参考第一预设对应关系,在此不再赘述。
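示意性的,"在目标区域的k个候选场所中确定与场景类型匹配的指定场所,再按第二预设对应关系推送"的过程可草拟如下(候选场所、场景类型与第二预设对应关系均为假设数据):

```python
def push_by_place(candidate_places, scene_type, second_mapping):
    """在 k 个候选场所中选出与场景类型匹配的指定场所,并返回对应的目标推荐信息。

    candidate_places: {场所名: 场景类型} 的字典,来自实时地理位置信息指示的目标区域。
    second_mapping:  第二预设对应关系,{场所名: 推荐信息}。
    """
    for place, p_type in candidate_places.items():
        if p_type == scene_type:       # 与目标场景标识所指示的场景类型匹配
            return place, second_mapping.get(place)
    return None, None                  # 目标区域内无匹配场所

# 示例:某办公楼(目标区域)内的候选场所
places = {"一层餐厅": "餐厅", "三层会议室": "安静区域"}
info_map = {"一层餐厅": "美食信息", "三层会议室": "轻音乐信息"}
place, info = push_by_place(places, "餐厅", info_map)
```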
步骤407,显示目标推荐信息。
终端在获取到与目标场景标识对应的目标推荐信息时,显示该目标推荐信息。
由于用户对推荐信息的显示频率的接纳程度也与终端所处场景的场景类型有关,因此,在一种可能的实现方式中,在终端显示目标推荐信息之前,还包括:终端根据第三预设对应关系,确定与目标场景标识对应的显示频率阈值,第三预设对应关系包括场景标识与显示频率阈值之间的对应关系;当显示频率小于或等于显示频率阈值时,执行显示目标推荐信息的步骤。
其中,显示频率为在第一预定时间段内显示推荐信息的次数,显示频率阈值为在第一预定时间段内显示推荐信息的最大次数。
可选的,显示频率阈值是终端默认设置的或者是用户自定义设置的;比如,第一预定时间段为1小时,显示频率阈值为5次/小时。本实施例对此不加以限定。
示意性的,场景标识与显示频率阈值之间的第三预设对应关系如表四所示。场景标识为“场景标识1”时,对应的显示频率阈值为“3次/小时”;场景标识为“场景标识2”时,对应的显示频率阈值为“1次/小时”;场景标识为“场景标识3”时,对应的显示频率阈值为“2次/小时”;场景标识为“场景标识4”时,对应的显示频率阈值为“5次/小时”。
表四
场景标识 显示频率阈值
场景标识1 3次/小时
场景标识2 1次/小时
场景标识3 2次/小时
场景标识4 5次/小时
基于上述表四提供的第三预设对应关系,在一个示意性的例子中,终端获取与目标场景标识“场景标识1”对应的目标推荐信息“推荐信息S1”,终端确定与目标场景标识“场景标识1”对应的显示频率阈值为“3次/小时”,当显示频率为“2次/小时”时,即该显示频率“2次/小时”小于或等于该显示频率阈值时,显示目标推荐信息“推荐信息S1”。
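示意性的,结合表四的第三预设对应关系,显示频率的校验可草拟如下(阈值数据取自表四,以时间戳列表记录第一预定时间段内的显示次数属于实现假设):

```python
# 表四:场景标识与显示频率阈值(次/小时)的第三预设对应关系
SCENE_FREQ_LIMIT = {"场景标识1": 3, "场景标识2": 1, "场景标识3": 2, "场景标识4": 5}

def should_display(scene_id, shown_times, now, window=3600):
    """当第一预定时间段(默认 1 小时)内的显示频率小于或等于阈值时允许显示。

    shown_times: 最近各次显示的时间戳列表(单位:秒,假设的记录方式)。
    """
    limit = SCENE_FREQ_LIMIT.get(scene_id, 0)
    recent = [t for t in shown_times if now - t < window]   # 窗口内的显示次数
    return len(recent) <= limit

# 示例:场景标识1 在 1 小时内已显示 2 次,阈值为 3 次/小时,允许显示
ok = should_display("场景标识1", [100.0, 900.0], now=1000.0)
```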
可选的,当原始场景标识与目标场景标识不匹配时,可以按照上述方法将原始推荐信息更换为目标推荐信息,并显示目标推荐信息,也可以不显示原始推荐信息,还可以延迟预定时间段后显示原始推荐信息。其中,预定时间段是终端默认设置的或者是用户自定义设置的;比如,预定时间段为60分钟。本实施例对此不加以限定。
在一个示意性的例子中,如图5所示,推荐信息为广告信息,终端接收服务器发送的待推送的广告信息50,并从该广告信息50中提取场景标识51。终端在接收到广告信息50时,通过内置的声纹识别传感器采集终端所处场所的各种声音信号52,将各种声音信号52确定为环境音频数据1,从环境音频数据1中提取音频特征1,将音频特征1输入至场景分类模型中,得到目标场景标识53,判断场景标识51与目标场景标识53是否匹配,若场景标识51与目标场景标识53匹配,则显示待推送的广告信息50;若场景标识51与目标场景标识53不匹配,则根据第一预设对应关系,显示与目标场景标识53对应的广告信息54。
在本申请实施例中,还通过将环境音频数据和目标场景标识添加至训练样本集,得到更新后的训练样本集,根据更新后的训练样本集对场景分类模型进行训练,得到更新后的场景分类模型,使得终端可以根据新的训练样本不断提高场景分类模型的精度,提高了终端确定目标场景标识的准确性。
在本申请实施例中,还通过获取终端的实时地理位置信息,实时地理位置信息用于指示终端当前所处的目标区域,目标区域包括k个候选场所;确定目标场景标识所指示的场景类型;确定目标区域中与场景类型匹配的候选场所为指定场所;根据第二预设对应关系,推送与指定场所对应的目标推荐信息;避免了相关技术中在场景识别时需要LBS技术结合高度数据或室内定位技术才能进行精确定位的情况,使得终端根据目标场景标识所指示的场景类型,能够将目标区域中与场景类型匹配的候选场所确定为指定场所,提高了定位的准确性和定位效率。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图6,其示出了本申请一个实施例提供的信息推送装置的结构示意图。该信息推送装置可以通过专用硬件电路,或者,软硬件的结合实现成为图1中的终端的全部或一部分,该信息推送装置包括:第一获取模块610、第二获取模块620、计算模块630和推送模块640。
第一获取模块610,用于获取环境音频数据,所述环境音频数据用于指示终端所处场景的声音信号;
第二获取模块620,用于获取场景分类模型,所述场景分类模型用于表示基于样本环境音频数据进行训练得到的场景分类规律;
计算模块630,用于根据所述环境音频数据,采用所述场景分类模型计算得到目标场景标识,所述目标场景标识用于指示所述终端所处场景的场景类型;
推送模块640,用于根据第一预设对应关系,推送与所述目标场景标识对应的目标推荐信息,所述第一预设对应关系包括场景标识与推荐信息之间的对应关系。
可选的,计算模块630,包括:提取单元和计算单元。
提取单元,用于从环境音频数据中提取音频特征;
计算单元,用于将音频特征输入至场景分类模型中,计算得到目标场景标识;
其中,场景分类模型是根据至少一组样本数据组训练得到的,每组样本数据组包括:样本环境音频数据和预先标注的正确场景标识。
可选的,第二获取模块620,包括:第一获取单元和训练单元。
第一获取单元,用于获取训练样本集,训练样本集包括至少一组样本数据组,每组样本数据组包括:样本环境音频数据和预先标注的正确场景标识;
训练单元,用于根据至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到场景分类模型。
可选的,训练单元,还用于对于至少一组样本数据组中的每组样本数据组,从样本环境音频数据中提取样本音频特征;将样本音频特征输入原始参数模型,得到训练结果;将训练结果与正确场景标识进行比较,得到计算损失,计算损失用于指示训练结果与正确场景标识之间的误差;根据至少一组样本数据组各自对应的计算损失,采用误差反向传播算法训练得到场景分类模型。
可选的,该装置,还包括:更新模块。
更新模块,用于将环境音频数据和目标场景标识添加至训练样本集,得到更新后的训练样本集;根据更新后的训练样本集对场景分类模型进行训练,得到更新后的场景分类模型。
可选的,第一获取模块610,包括:开启单元、采集单元和生成单元。
开启单元,用于当检测到预设控件对应的预设触发操作时,开启场景检测功能;
采集单元,用于实时采集终端所处场景的m种声音信号,m为正整数;
生成单元,用于根据m种声音信号,生成环境音频数据。
可选的,推送模块640,包括:第二获取单元、第三获取单元和显示单元。
第二获取单元,用于获取待推送的原始推荐信息,原始推荐信息携带有原始场景标识;
第三获取单元,用于当原始场景标识与目标场景标识不匹配时,根据第一预设对应关系,获取与目标场景标识对应的目标推荐信息;
显示单元,用于显示目标推荐信息。
可选的,第三获取单元,还用于当目标场景标识所指示的场景类型为餐厅时,确定目标推荐信息为美食信息;或,
当目标场景标识所指示的场景类型为交通枢纽时,确定目标推荐信息为交通信息,交通枢纽包括公交站、地铁站、火车站和飞机场中的至少一种;或,
当目标场景标识所指示的场景类型为安静区域时,确定目标推荐信息为轻音乐信息,安静区域包括图书馆、博物馆、医院和法院中的至少一种;或,
当目标场景标识所指示的场景类型为旅游景区时,确定目标推荐信息为旅游攻略信息。
可选的,推送模块640,包括:第四获取单元、第一确定单元、第二确定单元和推送单元。
第四获取单元,用于获取终端的实时地理位置信息,实时地理位置信息用于指示终端当前所处的目标区域,目标区域包括k个候选场所,k为正整数;
第一确定单元,用于确定目标场景标识所指示的场景类型;
第二确定单元,用于确定目标区域中与场景类型匹配的候选场所为指定场所;
推送单元,用于根据第二预设对应关系,推送与指定场所对应的目标推荐信息,第二预设对应关系包括候选场所与推荐信息之间的对应关系。
可选的,该装置还包括显示模块,该显示模块用于当原始场景标识与目标场景标识不匹配时,不显示原始推荐信息;或者,延迟预定时间段后显示原始推荐信息。
相关细节可结合参考图2至图5所示的方法实施例。其中,第一获取模块610和第二获取模块620还用于实现上述方法实施例中其他任意隐含或公开的与获取步骤相关的功能;计算模块630还用于实现上述方法实施例中其他任意隐含或公开的与计算步骤相关的功能;推送模块640还用于实现上述方法实施例中其他任意隐含或公开的与推送步骤相关的功能。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本申请还提供一种计算机可读介质,其上存储有程序指令,程序指令被处理器执行时实现上述各个方法实施例提供的信息推送方法。
本申请还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各个实施例所述的信息推送方法。
请参考图7,其示出了本申请一个示例性实施例提供的终端的结构方框图。该终端为图1中的用户终端160。该终端可以包括一个或多个如下部件:处理器710和存储器720。
处理器710可以包括一个或者多个处理核心。处理器710利用各种接口和线路连接整个终端内的各个部分,通过运行或执行存储在存储器720内的指令、程序、代码集或指令集,以及调用存储在存储器720内的数据,执行终端的各种功能和处理数据。可选的,处理器710可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器710可集成中央处理器(Central Processing Unit,CPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统和应用程序等;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器710中,单独通过一块芯片进行实现。
可选的,处理器710执行存储器720中的程序指令时实现上述各个方法实施例提供的信息推送方法。
存储器720可以包括随机存取存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory,ROM)。可选的,该存储器720包括非瞬时性计算机可读存储介质(non-transitory computer-readable storage medium)。存储器720可用于存储指令、程序、代码、代码集或指令集。存储器720可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于至少一个功能的指令、用于实现上述各个方法实施例的指令等。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种信息推送方法,其特征在于,所述方法包括:
    获取环境音频数据,所述环境音频数据用于指示终端所处场景的声音信号;
    获取场景分类模型,所述场景分类模型用于表示基于样本环境音频数据进行训练得到的场景分类规律;
    根据所述环境音频数据,采用所述场景分类模型计算得到目标场景标识,所述目标场景标识用于指示所述终端所处场景的场景类型;
    根据第一预设对应关系,推送与所述目标场景标识对应的目标推荐信息,所述第一预设对应关系包括场景标识与推荐信息之间的对应关系。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述环境音频数据,采用所述场景分类模型计算得到目标场景标识,包括:
    从所述环境音频数据中提取音频特征;
    将所述音频特征输入至所述场景分类模型中,计算得到所述目标场景标识;
    其中,所述场景分类模型是根据至少一组样本数据组训练得到的,每组所述样本数据组包括:样本环境音频数据和预先标注的正确场景标识。
  3. 根据权利要求1所述的方法,其特征在于,所述获取所述场景分类模型,包括:
    获取训练样本集,所述训练样本集包括至少一组样本数据组,每组所述样本数据组包括:样本环境音频数据和预先标注的正确场景标识;
    根据所述至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到所述场景分类模型。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到所述场景分类模型,包括:
    对于所述至少一组样本数据组中的每组样本数据组,从所述样本环境音频数据中提取样本音频特征;
    将所述样本音频特征输入所述原始参数模型,得到训练结果;
    将所述训练结果与所述正确场景标识进行比较,得到计算损失,所述计算损失用于指示所述训练结果与所述正确场景标识之间的误差;
    根据所述至少一组样本数据组各自对应的计算损失,采用所述误差反向传播算法训练得到所述场景分类模型。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述根据所述环境音频数据,采用所述场景分类模型计算得到目标场景标识之后,还包括:
    将所述环境音频数据和所述目标场景标识添加至所述训练样本集,得到更新后的训练样本集;
    根据所述更新后的训练样本集对所述场景分类模型进行训练,得到更新后的场景分类模型。
  6. 根据权利要求1至4任一所述的方法,其特征在于,所述获取环境音频数据,包括:
    当检测到预设控件对应的预设触发操作时,开启场景检测功能;
    实时采集所述终端所处场景的m种声音信号,所述m为正整数;
    根据所述m种声音信号,生成所述环境音频数据。
  7. 根据权利要求1至4任一所述的方法,其特征在于,所述根据第一预设对应关系,推送与所述目标场景标识对应的目标推荐信息,包括:
    获取待推送的原始推荐信息,所述原始推荐信息携带有原始场景标识;
    当所述原始场景标识与所述目标场景标识不匹配时,根据所述第一预设对应关系,获取与所述目标场景标识对应的目标推荐信息;
    显示所述目标推荐信息。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述第一预设对应关系,获取与所述目标场景标识对应的目标推荐信息,包括:
    当所述目标场景标识所指示的场景类型为餐厅时,确定所述目标推荐信息为美食信息;或,
    当所述目标场景标识所指示的场景类型为交通枢纽时,确定所述目标推荐信息为交通信息,所述交通枢纽包括公交站、地铁站、火车站和飞机场中的至少一种;或,
    当所述目标场景标识所指示的场景类型为安静区域时,确定所述目标推荐信息为轻音乐信息,所述安静区域包括图书馆、博物馆、医院和法院中的至少一种;或,
    当所述目标场景标识所指示的场景类型为旅游景区时,确定所述目标推荐信息为旅游攻略信息。
  9. 根据权利要求1所述的方法,其特征在于,所述根据第一预设对应关系,推送与所述目标场景标识对应的目标推荐信息,包括:
    获取所述终端的实时地理位置信息,所述实时地理位置信息用于指示所述终端当前所处的目标区域,所述目标区域包括k个候选场所,所述k为正整数;
    确定所述目标场景标识所指示的场景类型;
    确定所述目标区域中与所述场景类型匹配的候选场所为指定场所;
    根据第二预设对应关系,推送与所述指定场所对应的所述目标推荐信息,所述第二预设对应关系包括候选场所与推荐信息之间的对应关系。
  10. 一种信息推送装置,其特征在于,所述装置包括:
    第一获取模块,用于获取环境音频数据,所述环境音频数据用于指示终端所处场景的声音信号;
    第二获取模块,用于获取场景分类模型,所述场景分类模型用于表示基于样本环境音频数据进行训练得到的场景分类规律;
    计算模块,用于根据所述环境音频数据,采用所述场景分类模型计算得到目标场景标识,所述目标场景标识用于指示所述终端所处场景的场景类型;
    推送模块,用于根据第一预设对应关系,推送与所述目标场景标识对应的目标推荐信息,所述第一预设对应关系包括场景标识与推荐信息之间的对应关系。
  11. 根据权利要求10所述的装置,其特征在于,所述计算模块,包括:提取单元和计算单元;
    所述提取单元,用于从所述环境音频数据中提取音频特征;
    所述计算单元,用于将所述音频特征输入至所述场景分类模型中,计算得到所述目标场景标识;
    其中,所述场景分类模型是根据至少一组样本数据组训练得到的,每组所述样本数据组包括:样本环境音频数据和预先标注的正确场景标识。
  12. 根据权利要求10所述的装置,其特征在于,所述第二获取模块,包括:第一获取单元和训练单元;
    所述第一获取单元,用于获取训练样本集,所述训练样本集包括至少一组样本数据组,每组所述样本数据组包括:样本环境音频数据和预先标注的正确场景标识;
    所述训练单元,用于根据所述至少一组样本数据组,采用误差反向传播算法对原始参数模型进行训练,得到所述场景分类模型。
  13. 根据权利要求12所述的装置,其特征在于,所述训练单元,还用于:
    对于所述至少一组样本数据组中的每组样本数据组,从所述样本环境音频数据中提取样本音频特征;
    将所述样本音频特征输入所述原始参数模型,得到训练结果;
    将所述训练结果与所述正确场景标识进行比较,得到计算损失,所述计算损失用于指示所述训练结果与所述正确场景标识之间的误差;
    根据所述至少一组样本数据组各自对应的计算损失,采用所述误差反向传播算法训练得到所述场景分类模型。
  14. 根据权利要求10至13任一所述的装置,其特征在于,所述装置,还包括:更新模块;
    所述更新模块,用于将所述环境音频数据和所述目标场景标识添加至所述训练样本集,得到更新后的训练样本集;根据所述更新后的训练样本集对所述场景分类模型进行训练,得到更新后的场景分类模型。
  15. 根据权利要求10至13任一所述的装置,其特征在于,所述第一获取模块,包括:开启单元、采集单元和生成单元;
    所述开启单元,用于当检测到预设控件对应的预设触发操作时,开启场景检测功能;
    所述采集单元,用于实时采集所述终端所处场景的m种声音信号,所述m为正整数;
    所述生成单元,用于根据所述m种声音信号,生成所述环境音频数据。
  16. 根据权利要求10至13任一所述的装置,其特征在于,所述推送模块,包括:第二获取单元、第三获取单元和显示单元;
    所述第二获取单元,用于获取待推送的原始推荐信息,所述原始推荐信息携带有原始场景标识;
    所述第三获取单元,用于当所述原始场景标识与所述目标场景标识不匹配时,根据所述第一预设对应关系,获取与所述目标场景标识对应的目标推荐信息;
    所述显示单元,用于显示所述目标推荐信息。
  17. 根据权利要求16所述的装置,其特征在于,所述第三获取单元,还用于:
    当所述目标场景标识所指示的场景类型为餐厅时,确定所述目标推荐信息为美食信息;或,
    当所述目标场景标识所指示的场景类型为交通枢纽时,确定所述目标推荐信息为交通信息,所述交通枢纽包括公交站、地铁站、火车站和飞机场中的至少一种;或,
    当所述目标场景标识所指示的场景类型为安静区域时,确定所述目标推荐信息为轻音乐信息,所述安静区域包括图书馆、博物馆、医院和法院中的至少一种;或,
    当所述目标场景标识所指示的场景类型为旅游景区时,确定所述目标推荐信息为旅游攻略信息。
  18. 根据权利要求10所述的装置,其特征在于,所述推送模块,还包括:第四获取单元、第一确定单元、第二确定单元和推送单元;
    所述第四获取单元,用于获取所述终端的实时地理位置信息,所述实时地理位置信息用于指示所述终端当前所处的目标区域,所述目标区域包括k个候选场所,所述k为正整数;
    所述第一确定单元,用于确定所述目标场景标识所指示的场景类型;
    所述第二确定单元,用于确定所述目标区域中与所述场景类型匹配的候选场所为指定场所;
    所述推送单元,用于根据第二预设对应关系,推送与所述指定场所对应的所述目标推荐信息,所述第二预设对应关系包括候选场所与推荐信息之间的对应关系。
  19. 一种终端,其特征在于,所述终端包括处理器、与所述处理器相连的存储器,以及存储在所述存储器上的程序指令,所述处理器执行所述程序指令时实现如权利要求1至9任一所述的信息推送方法。
  20. 一种计算机可读存储介质,其特征在于,其上存储有程序指令,所述程序指令被处理器执行时实现如权利要求1至9任一所述的信息推送方法。
PCT/CN2018/116602 2017-12-29 2018-11-21 信息推送方法、装置、终端及存储介质 WO2019128552A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711470476.1A CN109995799B (zh) 2017-12-29 2017-12-29 信息推送方法、装置、终端及存储介质
CN201711470476.1 2017-12-29

Publications (1)

Publication Number Publication Date
WO2019128552A1 true WO2019128552A1 (zh) 2019-07-04





Also Published As

Publication number Publication date
CN109995799B (zh) 2020-12-29
CN109995799A (zh) 2019-07-09

