WO2015169056A1 - Information presentation method and device - Google Patents

Information presentation method and device

Info

Publication number
WO2015169056A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
scenario
preset
user
scene
Prior art date
Application number
PCT/CN2014/088709
Other languages
English (en)
French (fr)
Inventor
钱莉
张杰
黄康敏
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2015169056A1
Priority to US15/330,850 (US10291767B2)
Priority to US16/404,449 (US11153430B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail

Definitions

  • the embodiments of the present invention relate to communication technologies, and in particular, to an information presentation method and device.
  • in the prior art, the mobile terminal or the wearable device usually presents all received information to the user without filtering, so the information interferes with the user and the presentation effect is poor.
  • the embodiments of the invention provide an information presentation method and device, which overcome the problems in the prior art of poor information presentation and heavy interference with the user.
  • an embodiment of the present invention provides an information presentation method, including:
  • the mobile terminal receives first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the mobile terminal determines a presentation priority of the first information in a current context, including:
  • the mobile terminal acquires scene context information of the wearable device from the wearable device, where the scenario context information is used to determine a current context of the user;
  • the mobile terminal calculates a preset scenario that matches the current context according to the scenario context information, and calculates a correlation between the first information and the preset scenario.
  • the method further includes:
  • the scene context information of the wearable device is obtained again from the wearable device after waiting for a preset time.
  • the scenario context information is used to determine a current context of the user;
  • the mobile terminal calculates a preset scenario that matches the current context according to the scenario context information, and calculates a correlation between the first information and the preset scenario.
  • with reference to the first or second possible implementation manner of the first aspect, in a third possible implementation manner, before the mobile terminal determines the presentation priority of the first information in the current context, the method further includes:
  • the mobile terminal establishes a scene model, where the scene model is used to determine a correlation between the first information and a preset scene, where the scene model includes at least three types of features: a basic scene feature, an information category feature, and a keyword feature.
  • the determining, by the mobile terminal, the correlation between the first information and the preset scenario includes:
  • the mobile terminal parses the feature of the first information, and calculates a correlation between the first information and the preset scene according to the scenario model.
  • the mobile terminal establishes a scenario model, including:
  • the mobile terminal establishes the scene model according to the historical browsing record of the first information, where the historical browsing record of the first information includes: a recording time, a basic scene feature at the time the first information is received, an information category feature of the first information, a keyword feature of the first information, and reading action information of the user, wherein the historical browsing records corresponding to different recording times have the same or different weights.
  • when the presentation priority of the first information in the current context is greater than the second preset value, the first information is sent to the wearable device, and after the wearable device presents the first information to the user, the method further includes:
  • the mobile terminal receives reading action information sent by the wearable device, and updates the scene model according to the reading action information.
  • an embodiment of the present invention provides an information presentation method, including:
  • the mobile terminal receives first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the mobile terminal determines a correlation between the first information and at least one preset scenario
  • the mobile terminal determines a presentation priority of the first information in a current context of the user
  • the method further includes:
  • when the correlation between the first information and all the preset scenarios is less than the first preset value, the mobile terminal treats the first information as spam.
  • the mobile terminal determines a presentation priority of the first information in a current context, including:
  • the mobile terminal acquires scene context information of the wearable device from the wearable device, where the scenario context information is used to determine a current context of the user;
  • the mobile terminal calculates a similarity between the current context and each preset scenario according to the scenario context information
  • the mobile terminal calculates a presentation priority of the first information according to the similarity and a correlation between the first information and a preset scenario.
  • after the mobile terminal calculates the presentation priority of the first information in the current context, the method further includes:
  • the scene context information of the wearable device is obtained again from the wearable device after waiting for a preset time.
  • the scenario context information is used to determine a current context of the user;
  • the mobile terminal calculates a preset scenario that matches the current context according to the scenario context information, and calculates a correlation between the first information and the preset scenario.
  • before the mobile terminal determines the correlation between the first information and the at least one preset scenario, the method further includes:
  • the mobile terminal establishes a scene model, where the scene model is used to determine a correlation between the first information and a preset scene, where the scene model includes at least three types of features: a basic scene feature, an information category feature, and a keyword feature.
  • the determining, by the mobile terminal, the correlation between the first information and the at least one preset scenario includes:
  • the mobile terminal parses the feature of the first information, and calculates a correlation between the first information and the at least one preset scenario according to the scenario model.
  • the mobile terminal establishes a scenario model, including:
  • the mobile terminal establishes the scene model according to the historical browsing record of the first information, where the historical browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, an information category feature of the first information, a keyword feature of the first information, and reading action information of the user, wherein the historical browsing records corresponding to different recording times have the same or different weights.
  • when the presentation priority of the first information in the current context is greater than the second preset value, the first information is sent to the wearable device, and after the wearable device presents the first information to the user, the method further includes:
  • the mobile terminal receives reading action information sent by the wearable device, and updates the scene model according to the reading action information.
  • an embodiment of the present invention provides an information presentation method, including:
  • the wearable device receives the first information that is sent by the mobile terminal after the mobile terminal determines that the presentation priority of the first information in the current context is greater than the second preset value, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the wearable device presents the first information to a user.
  • the method further includes:
  • the wearable device captures the reading action information, where the reading action information includes at least: whether the first information is deleted, whether the first information is read, how long the first information is read, and whether the first information is forwarded.
  • the wearable device sends the read action information to the mobile terminal, so that the mobile terminal updates the scene model according to the read action information.
  • the method further includes:
  • the wearable device sends scene context information to the mobile terminal.
  • the wearable device presents the first information to a user, including:
  • the wearable device issues a prompt message.
  • an embodiment of the present invention provides an information presentation method, including:
  • the smart device receives first information provided by the communication network, and the first information includes any one of the following: text information, image information, audio information, and video information;
  • the smart device determines a correlation between the first information and at least one preset scenario
  • the smart device determines a presentation priority of the first information in a current context
  • when the presentation priority of the first information in the current context is greater than or equal to the second preset value, the smart device presents the first information to the user.
  • the method further includes:
  • when the correlation between the first information and all the preset scenarios is less than the first preset value, the smart device treats the first information as spam.
  • the smart device determines a presentation priority of the first information in a current context, including:
  • the smart device acquires scene context information, where the scenario context information is used to determine a current context of the user;
  • the smart device calculates a similarity between the current context and each preset scenario according to the scenario context information
  • the smart device calculates a presentation priority of the first information according to the similarity and a correlation between the first information and a preset scenario.
  • with reference to the first or the second possible implementation manner of the fourth aspect, in a third possible implementation manner, after the smart device determines the presentation priority of the first information in the current context, the method further includes:
  • the smart device calculates a similarity between the current context and each preset scenario according to the scenario context information
  • the smart device calculates a presentation priority of the first information according to the similarity and a correlation between the first information and a preset scenario.
  • before the smart device determines the correlation between the first information and the at least one preset scenario, the method further includes:
  • the smart device establishes a scene model, where the scene model is used to determine a correlation between the first information and a preset scene, where the scene model includes at least three types of features: a basic scene feature, an information category feature, and a keyword feature.
  • the determining, by the smart device, the correlation between the first information and the at least one preset scenario includes:
  • the smart device parses the feature of the first information, and calculates a correlation between the first information and the at least one preset scenario according to the scenario model.
  • the smart device establishes a scenario model, including:
  • the smart device establishes the scene model according to the historical browsing record of the first information, where the historical browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, an information category feature of the first information, a keyword feature of the first information, and reading action information of the user, wherein the historical browsing records corresponding to different recording times have the same or different weights.
  • after the first information is presented to the user when its presentation priority in the current context is greater than the second preset value, the method further includes:
  • the smart device captures reading action information of the user, and updates the scene model according to the reading action information.
  • the smart device presents the first information to a user, including:
  • the smart device sends a prompt message.
  • an embodiment of the present invention provides an information presentation method, including:
  • the smart device receives first information provided by the communication network, and the first information includes any one of the following: text information, image information, audio information, and video information;
  • the smart device determines a presentation priority of the first information in a current context
  • when the presentation priority of the first information in the current context is greater than or equal to the second preset value, the smart device presents the first information to the user.
  • the smart device determines a presentation priority of the first information in a current context, including:
  • the smart device acquires scene context information, where the scenario context information is used to determine a current context of the user;
  • the smart device calculates a preset scenario that matches the current context according to the scenario context information, and calculates a correlation between the first information and the preset scenario.
  • the smart device determines a presentation priority of the first information in a current context according to the relevance.
  • the method further includes:
  • the smart device calculates a similarity between the current context and each preset scenario according to the scenario context information
  • the smart device calculates a presentation priority of the first information according to the similarity and a correlation between the first information and a preset scenario.
  • an embodiment of the present invention provides an information screening apparatus, including:
  • a receiving module configured to receive first information provided by a communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • a processing module configured to determine a presentation priority of the first information in a current context of the user
  • a sending module configured to send the first information to the wearable device when the presentation priority of the first information in the current context is greater than or equal to a second preset value, so that the wearable device presents to the user The first information.
  • the processing module is specifically configured to:
  • acquire scenario context information of the wearable device from the wearable device, where the scenario context information is used to determine a current context of the user;
  • the processing module is further configured to:
  • the scene context information of the wearable device is obtained again from the wearable device after waiting for a preset time.
  • the scenario context information is used to determine a current context of the user;
  • the processing module is further configured to:
  • establish a scene model, where the scene model is used to determine a correlation between the first information and a preset scene, and the scene model includes at least three types of features: a basic scene feature, an information category feature, and a keyword feature.
  • the processing module is specifically configured to:
  • the processing module is specifically configured to:
  • the history browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, and an information category of the first information
  • the receiving module is further configured to:
  • an embodiment of the present invention provides an information screening apparatus, including:
  • a receiving module configured to receive first information provided by a communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • a processing module configured to determine a correlation between the first information and the at least one preset scene, and, when the correlation between the first information and the at least one preset scene is greater than or equal to a first preset value, determine a presentation priority of the first information in the current context of the user;
  • a sending module configured to send the first information to the wearable device when the presentation priority of the first information in the current context is greater than or equal to a second preset value, so that the wearable device presents to the user The first information.
  • the processing module is further configured to:
  • after determining the correlation between the first information and the at least one preset scene, when the correlation between the first information and all the preset scenes is less than the first preset value, use the first information as spam.
  • the processing module is specifically configured to:
  • acquire scenario context information of the wearable device from the wearable device, where the scenario context information is used to determine a current context of the user;
  • the processing module is further configured to:
  • after determining the presentation priority of the first information in the current context, when the presentation priority of the first information in the current context is less than the second preset value, wait for the preset time and then acquire the scene context information of the wearable device from the wearable device again, where the scenario context information is used to determine a current context of the user;
  • with reference to any one of the first to the third possible implementation manners of the seventh aspect, in a fourth possible implementation manner, the processing module is further configured to:
  • establish a scenario model, where the scenario model is configured to determine a correlation between the first information and the preset scenario, and the scenario model includes at least three types of features: basic scene features, information category features, and keyword features.
  • the processing module is specifically configured to:
  • the processing module is specifically configured to:
  • the history browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, and an information category of the first information
  • the receiving module is further configured to:
  • an embodiment of the present invention provides an information presentation apparatus, including:
  • a receiving module configured to receive first information that is sent by the mobile terminal after the mobile terminal determines that the presentation priority of the first information in the current context is greater than the second preset value, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • a presentation module for presenting the first information to a user.
  • the method further includes:
  • the reading action information includes at least: whether to delete the first information, whether to read the first information, how long to read the first information, and whether to forward the first information.
  • a sending module configured to send the reading action information to the mobile terminal, so that the mobile terminal updates the scene model according to the reading action information.
  • the receiving module is further configured to receive a scenario context information request sent by the mobile terminal;
  • the sending module is further configured to send scenario context information to the mobile terminal.
  • the presentation module is specifically configured to:
  • in a ninth aspect, an embodiment of the present invention provides an information presentation apparatus, including:
  • a receiving module configured to receive first information provided by a communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • a processing module configured to determine a correlation between the first information and at least one preset scenario
  • a presentation module configured to present the first information to a user when a presentation priority of the first information in a current context is greater than or equal to a second preset value.
  • the processing module is further configured to:
  • after determining the correlation between the first information and the at least one preset scene, when the correlation between the first information and all the preset scenes is less than the first preset value, use the first information as spam.
  • the processing module is specifically configured to:
  • the processing module is specifically configured to:
  • the processing module is further configured to:
  • establish a scenario model, where the scenario model is configured to determine a correlation between the first information and the preset scenario, and the scenario model includes at least three types of features: basic scene features, information category features, and keyword features.
  • the processing module is specifically configured to:
  • the processing module is specifically configured to:
  • the history browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, and an information category of the first information
  • the method further includes:
  • a capturing module, configured to capture reading action information of the user after the first information is presented to the user when its presentation priority in the current context is greater than the second preset value, and to update the scene model according to the reading action information.
  • the presentation module is specifically configured to:
  • an embodiment of the present invention provides an information presentation apparatus, including:
  • a receiving module configured to receive first information provided by a communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • a processing module configured to determine a presentation priority of the first information in a current context
  • a presentation module configured to present the first information to a user when a presentation priority of the first information in a current context is greater than or equal to a second preset value.
  • the processing module is specifically configured to:
  • the processing module is specifically configured to:
  • the information presentation method and device provided by the embodiments of the present invention determine, after receiving the first information provided by the communication network, the presentation priority of the first information in the current context of the user, and present the first information to the user only when the presentation priority of the first information in the current context is greater than or equal to the first preset value. That is, only information that is important, urgent, or strongly related to the user's current situation is pushed or presented according to the user's current situation. Therefore, the interference caused to the user by information unrelated to the current situation can be reduced, and since the presented information is what the user needs, the possibility that the user reads the information carefully can be improved and the presentation effect can be improved.
  • the information presentation method and device provided by the embodiments of the present invention determine, after receiving the first information provided by the communication network, the correlation between the first information and the at least one preset scene, and only when the correlation between the first information and a first preset scene is greater than the first preset value does the mobile terminal present the first information to the user upon determining that the user is in the first preset scenario. That is, according to the user's current situation, only information that is important, urgent, or strongly related to the user's current situation is pushed or presented; therefore, the interference caused to the user by information unrelated to the current situation can be reduced, and since the presented information is what the user needs, the possibility that the user reads the information carefully can be improved and the presentation effect can be improved.
  • FIG. 1 is a signaling flowchart of Embodiment 1 of an information presentation method according to the present invention;
  • FIG. 2 is a signaling flowchart of Embodiment 2 of an information presentation method according to the present invention;
  • FIG. 3 is a flowchart of Embodiment 3 of an information presentation method according to the present invention.
  • FIG. 4 is a schematic diagram of an example of a user scene training model
  • FIG. 5 is a flowchart of Embodiment 4 of an information presentation method according to the present invention.
  • FIG. 6 is a flowchart of Embodiment 5 of an information presentation method according to the present invention.
  • FIG. 8 is a schematic structural diagram of Embodiment 1 of an information screening apparatus according to the present invention.
  • FIG. 9 is a schematic structural diagram of Embodiment 2 of an information screening apparatus according to the present invention.
  • FIG. 10 is a schematic structural diagram of Embodiment 1 of an information presentation apparatus according to the present invention.
  • FIG. 11 is a schematic structural diagram of Embodiment 2 of an information presentation apparatus according to the present invention.
  • FIG. 12 is a schematic structural diagram of Embodiment 3 of an information presentation apparatus according to the present invention.
  • FIG. 13 is a schematic structural diagram of Embodiment 4 of an information presentation apparatus according to the present invention.
  • FIG. 14 is a schematic structural diagram of Embodiment 5 of an information presentation apparatus according to the present invention.
  • the present invention proposes an adaptive information push and presentation method based on the user's context: after the mobile terminal or the smart device receives information, it selects and presents, according to the user's current situation and the corresponding user scene model, the information that is important, urgent, or strongly related to the user's current situation.
  • the present invention can be implemented in two ways. In the first, the mobile terminal and the wearable device cooperate: the mobile terminal acts as the information hub and performs the scene analysis, and is responsible for forwarding and pushing information to the wearable device.
  • the mobile terminal receives information from the network side and analyzes the information and the scene in which the user is located, and the wearable device presents the information. Thereafter, the wearable device can also capture the user's reading actions and feed them back to the mobile terminal, so that the mobile terminal can analyze them and update the user scene model.
  • the mobile terminal can be, for example, a mobile phone, and the wearable device can be a smart watch, smart glasses, and the like.
  • the second way is that a smart device by itself receives information from the network side, analyzes the information and the scene in which the user is located, and presents the information, performing the entire process on its own.
  • the smart device in the second case may be a mobile terminal or a wearable device. The two cases are described separately below.
  • the communication network in the various embodiments of the present invention may be a cellular network, such as a Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), or Code Division Multiple Access (CDMA) network, or another network such as a Wireless Local Area Network (WLAN) or Near Field Communication (NFC).
  • the mobile terminal in the various embodiments of the present invention includes but is not limited to a mobile phone, a smartphone, a tablet computer, and the like.
  • the wearable device in various embodiments of the present invention includes, but is not limited to, a smart watch, smart glasses, and the like.
  • FIG. 1 is a signaling flowchart of Embodiment 1 of an information presentation method according to the present invention.
  • This embodiment is jointly performed by a mobile terminal and a wearable device.
  • a user can have both a mobile terminal and a wearable device.
  • the processing capability of the mobile terminal is more powerful than that of the wearable device, and can be responsible for various task analysis and processing.
  • because it is worn close to the body, the wearable device has the closest relationship with the user and can be used to remind the user of, or present, important and urgent information content in real time.
  • the method in this embodiment may include:
  • Step 101 The mobile terminal receives the first information provided by the communication network.
  • the first information may be various information such as text information, images, audio and video information.
  • Step 102 The mobile terminal determines a presentation priority of the first information in a current context of the user.
  • the method for determining the presentation priority of the first information in the current context of the user in step 102 may include:
  • the mobile terminal acquires scene context information of the wearable device from the wearable device, where the scenario context information is used to determine a current context of the user; calculates, according to the scenario context information, the similarity between the current context and each preset scenario, and calculates the correlation between the first information and the preset scene; and then calculates, according to the similarity and the correlation between the first information and the preset scene, the presentation priority of the first information in the current context.
  • the mobile terminal may parse the received first information and extract related features.
  • the extracted features may include: characteristics of the original author of the information; the social relationship between the original author and the user; content features, such as which words are included in the content, the frequency of occurrence of the words, whether a particular keyword or symbol is contained, and the similarity between the information and information the user has liked in the past; and global features, such as how many users globally also like the information and how many other pieces of information contain links to it.
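  • As a rough, non-authoritative illustration of the parsing step described above, the sketch below turns one received message into such a feature set; the field names, keyword list, and contact set are assumptions made for the example rather than details from the patent.

```python
# Hypothetical sketch of the feature-extraction step: the field names
# (author, text, likes, inbound_links), the keyword set, and the contact
# set are illustrative assumptions, not details taken from the patent.
from collections import Counter
import re

KEYWORDS = {"olympics", "meeting", "finance"}  # assumed keyword list

def extract_features(message: dict, user_contacts: set) -> dict:
    """Turn one piece of received information into a flat feature dict."""
    words = re.findall(r"\w+", message.get("text", "").lower())
    counts = Counter(words)
    return {
        "author": message.get("author", ""),                          # original-author feature
        "author_is_contact": message.get("author") in user_contacts,  # social relationship
        "word_count": len(words),                                     # content features
        "keyword_hits": sum(counts[k] for k in KEYWORDS),
        "global_likes": message.get("likes", 0),                      # global features
        "global_links": message.get("inbound_links", 0),
    }

# Example use:
features = extract_features(
    {"author": "alice", "text": "Olympics schedule for the finance meeting", "likes": 12},
    user_contacts={"alice", "bob"},
)
```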
  • the correlation between the first information and each preset scene is then calculated according to a certain algorithm, or equivalently the preset scene corresponding to the first information, that is, the preset scene suitable for pushing the first information, is determined. A highly correlated preset scene, that is, a preset scene whose correlation is greater than the preset value, is called the first preset scene.
  • the preset scene may include, for example, work, home, on the road, etc., and the scene model may be used to define content features that the user has in the information corresponding to the specific scene.
  • the mobile terminal may calculate a correlation between the first information and at least one preset scenario according to a feature vector of the information and a scene model matrix, where the correlation may be a real number.
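  • One way such a real-valued correlation could be obtained is as the product of the information's feature vector with a scene model matrix holding one row of weights per preset scene; the sketch below shows this under assumed dimensions and values.

```python
# Sketch of computing the real-valued correlation between the first
# information and each preset scene from a feature vector and a scene
# model matrix; the dimensions, scene names, and numbers are assumptions.
import numpy as np

# content feature vector extracted from the first information (4 features)
info_features = np.array([0.2, 0.9, 0.0, 0.4])

# scene model matrix: one row of feature weights per preset scene
scene_model = np.array([
    [0.8, 0.1, 0.3, 0.0],   # work
    [0.1, 0.7, 0.2, 0.5],   # home
    [0.3, 0.2, 0.6, 0.4],   # on the road
])

# one real-valued correlation per preset scene
correlations = scene_model @ info_features
print(dict(zip(["work", "home", "on the road"], correlations.round(3))))
```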
  • the mobile terminal may first establish a scene model.
  • the scene model may include at least three types of features: a basic scene feature, an information category feature, and a keyword feature, where the basic scene feature is used to represent a specific scene, the information category feature is used to represent the user's interest category, and the keyword feature is used to represent the user's specific interest points.
  • the basic scene features are, for example, time, geographical location, light, etc., which are used to represent specific scenes, and may generally be real-world scenes, such as: work, home, road, etc.; information category characteristics are: entertainment, sports, finance, etc. It is used to represent the user's interest category; the keyword feature is the keyword itself extracted from the information, and is used to indicate the user's more granular interest points, such as the 18th National Congress, the Olympic Games, and the like.
  • the specific parameters of the user scene model can be obtained by means of machine learning.
  • the user scene model describes the scores after the content features extracted by the information are mapped to the three dimensions of the basic scene feature, the information category feature, and the keyword feature, and are presented in a matrix form in a specific application process.
  • the scene model may be obtained according to a historical browsing record.
  • the mobile terminal establishes the scene model according to historical browsing records, where the records in the historical browsing records have the same or different weights according to their browsing time.
  • For example, it can be set such that records close to the current time have a higher weight, and records far from the current time have a lower weight.
  • User scene model parameters can be obtained by machine learning.
  • the obtained historical information is expressed in matrix form, where each recorded piece of information is one row in the matrix, including the basic scene features, information category features, keyword features, information content features, and the user's rating of this information.
  • the range of the user's evaluation score for the information is set, for example, to 1-5 points, and the score may be obtained according to the user's reading action implicit feedback.
  • the mapping relationship between the user reading action and the information evaluation score may be as follows:
  • rough browsing (a reading time of less than 1 minute, say) corresponds to a lower score, while reading for a long time and forwarding the information corresponds to the highest score (for example, 5 points).
  • the mobile terminal can assign the user's score to each piece of information according to the above mapping relationship, and accordingly learn the user scene model corresponding to each scene; for example, the i-th preset scene is Si, and the corresponding scene model is Ui.
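  • A minimal sketch of this implicit scoring step is shown below; the exact thresholds and mapping are assumptions, since the description only fixes the 1-5 range and the general pattern that rough browsing scores low while long reads and forwards score high.

```python
# Minimal sketch of deriving the implicit 1-5 rating from reading actions;
# the thresholds and the exact mapping are assumptions for illustration.
def implicit_score(action: dict) -> int:
    """Map captured reading behaviour to an implicit 1-5 rating."""
    if action.get("marked_spam"):
        return 1
    if not action.get("opened"):
        return 2
    if action.get("forwarded") or action.get("read_seconds", 0) > 300:
        return 5                      # long read or forwarded: highest score
    if action.get("read_seconds", 0) < 60:
        return 3                      # rough browsing (under a minute)
    return 4                          # careful reading

# each labelled record then becomes one row of the training matrix
record = {
    "basic_scene": "work",            # basic scene feature
    "category": "finance",            # information category feature
    "keywords": ["earnings"],         # keyword features
    "rating": implicit_score({"opened": True, "read_seconds": 420}),
}
```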
  • the mobile terminal may further determine whether the current context is a first preset scenario suitable for pushing the first information; for example, if the presentation priority of the first information in the current context is greater than or equal to the first preset value, the current context is determined to be suitable for pushing.
  • Step 103 When the presentation priority of the first information in the current context is greater than or equal to a second preset value, send the first information to the wearable device, so that the wearable device presents the first a message.
  • otherwise, step 102 is performed again after waiting for the preset time.
  • the specific implementation process may be: the mobile terminal acquires scene context information of the wearable device from the wearable device; the mobile terminal calculates a preset scenario that matches the current context according to the scenario context information, and calculates the Correlation of the first information with the preset scenario; the mobile terminal determines a presentation priority of the first information in the current context according to the relevance.
  • the mobile terminal may acquire a parameter for determining a basic scene feature, and determine whether the current context is the first preset scenario, and when the user is determined to be in the first preset scenario, send the first information to the wearable device. .
  • the mobile terminal may also calculate, according to the basic scene features, the degree of association between the user's current situation and the scene {Si} corresponding to each scene model {Ui}, that is, calculate in each dimension the set of similarity values {wi} between the current situation and each scene.
  • the presentation priority of the first information is then calculated from the set of scores {Vi} of the first information under each scene model {Ui} together with the set of similarity values {wi}; when the presentation priority is greater than or equal to the second preset value, the first information is sent to the wearable device.
  • the presentation priority of the first information may be calculated, for example, as the following weighted sum:
  • P = Σi (wi × Vi), where Vi is the score of the information in the scene model Ui, wi is the similarity value between the user's current situation and the scene Si corresponding to the scene model Ui, and P is the presentation priority of the first information.
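  • The snippet below works this weighted sum through with made-up numbers and a hypothetical second preset value, purely to show how the sets {Vi} and {wi} combine into P; none of the values come from the patent.

```python
# Worked example of the weighted-sum priority with made-up numbers and a
# hypothetical second preset value; only the structure (combine {Vi} and
# {wi} into P, then compare with the threshold) reflects the description.
scores_V = [0.9, 0.2, 0.4]      # Vi: score of the information under scene model Ui
similarity_w = [0.7, 0.1, 0.2]  # wi: similarity of the current context to scene Si

priority_P = sum(w * v for w, v in zip(similarity_w, scores_V))

SECOND_PRESET_VALUE = 0.5       # assumed threshold
if priority_P >= SECOND_PRESET_VALUE:
    print("send the first information to the wearable device")
else:
    print("wait for the preset time and re-evaluate the context")
```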
  • otherwise, the mobile terminal may wait for a certain time and then perform step 102 and step 103 again, to find a suitable opportunity to present the first information.
  • Step 104 The wearable device presents the first information to a user.
  • the wearable device may display the first information on the interface, or may also emit a prompt sound or vibration, and/or other notification manners to remind the user to view.
  • the wearable device may further capture the user's reading action information, and send the user read action information to the mobile terminal.
  • after receiving it, the mobile terminal may parse the content features of the information and update the user scene model accordingly.
  • the common user reading actions that need to be captured may include: whether the user marks the information as spam, whether the user clicks to read the information, whether the user browses the information roughly or carefully, and whether the user reads the information for a long time and forwards it.
  • These actions can be used to obtain the user's rating of the information, so that the information is used as an updated corpus for updating the user scene model.
  • for content that the user pays particular attention to, the keywords of the read content can be obtained and the specific events that the user cares about can be accurately extracted, thereby updating the keyword features of the user scene model.
  • the user scene model is updated, that is, the newly obtained user reading record is added to the training corpus, and the user scene matrix is recalculated.
  • the update learning of the model may include the following two methods: fixing the size of the training corpus, continuously adding new records and deleting the oldest records; or assigning different weights to each record according to how recent it is.
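  • Both update strategies are straightforward to sketch; the snippet below shows a bounded corpus and a recency-weighting helper, with the corpus size and decay factor chosen arbitrarily for illustration.

```python
# Sketch of the two corpus-update strategies: (a) a fixed-size corpus that
# drops the oldest record when a new one arrives, and (b) recency weights
# for all records; corpus size and decay factor are arbitrary assumptions.
from collections import deque

# (a) fixed-size training corpus
MAX_RECORDS = 1000                      # assumed corpus size
corpus = deque(maxlen=MAX_RECORDS)      # appending beyond maxlen evicts the oldest record

def add_record(record: dict) -> None:
    corpus.append(record)

# (b) recency weighting: newer records receive larger weights
def recency_weights(n_records: int, decay: float = 0.95) -> list:
    """Weight record i (0 = oldest) as decay ** (n_records - 1 - i)."""
    return [decay ** (n_records - 1 - i) for i in range(n_records)]
```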
  • in this embodiment, after the mobile terminal receives the first information provided by the communication network, the presentation priority of the first information in the current context is determined, and the first information is sent to the wearable device so that the wearable device presents the first information to the user. That is, only information that is important, urgent, or related to the user's current situation is pushed or presented according to the user's current situation; therefore, the interference caused to the user by information unrelated to the current situation can be reduced, and since the presented information is what the user needs, the possibility that the user reads the information carefully can be improved and the presentation effect can be improved.
  • when the mobile terminal determines, in step 102, the presentation priority of the first information in the current context of the user, one possible implementation is as follows: the mobile terminal acquires scene context information from the wearable device; calculates, according to the scene context information, the preset scenario that matches the current context, that is, determines which preset scenario the current context belongs to; calculates the correlation between the first information and that preset scenario; and determines, according to the correlation, the presentation priority of the first information in the current context.
  • specifically, the mapping relationship between the correlation of the first information with the preset scenario and the presentation priority of the first information in the current context may be determined in advance. In an embodiment, the correlation between the first information and the preset scenario may be used directly as the presentation priority of the first information in the current context.
  • for example, if the first information is entertainment information and the mobile terminal determines, according to the scenario context information, that the current context is a conference scene, it is foreseeable that the mobile terminal will calculate that the correlation between the first information and the preset scene is less than the first preset value. The mobile terminal can then determine that the presentation priority of the first information in the current context is also low, namely smaller than the second preset value, that is, the first information is not suitable for being pushed to the user in the current context.
  • FIG. 2 is a signaling flowchart of Embodiment 2 of the information presentation method of the present invention.
  • This embodiment is jointly performed by a mobile terminal and a wearable device.
  • a user can have both a mobile terminal and a wearable device.
  • the processing capability of the mobile terminal is more powerful than that of the wearable device, and can be responsible for various task analysis and processing.
  • because it is worn close to the body, the wearable device has the closest relationship with the user and can be used to remind the user of, or present, important and urgent information content in real time.
  • the implementation process of the method in this embodiment differs from that of the embodiment shown in FIG. 1. As shown in FIG. 2, the method in this embodiment may include:
  • Step 201 The mobile terminal receives the first information provided by the communication network.
  • the first information may be various information such as text information, images, audio and video information.
  • Step 202 The mobile terminal determines a relevance of the first information to at least one preset scene.
  • the mobile terminal may parse the received first information and extract related features.
  • the extracted features may include: characteristics of the original author of the information; the social relationship between the original author and the user; content features, such as which words are included in the content, the frequency of occurrence of the words, whether a particular keyword or symbol is contained, and the similarity between the information and information the user has liked in the past; and global features, such as how many users globally also like the information and how many other pieces of information contain links to it.
  • the preset scene may include, for example, work, home, on the road, etc., and the scene model may be used to define content features that the user has in the information corresponding to the specific scene.
  • in this way, the degree of attention that the user gives to the first information in each corresponding preset scene, that is, the correlation of the first information with each preset scene, may be obtained.
  • the mobile terminal may calculate a correlation between the first information and at least one preset scenario according to a feature vector of the information and a scene model matrix, where the correlation may be a real number.
  • prior to this, the mobile terminal first establishes a scene model.
  • the scene model may include at least three types of features: a basic scene feature, an information category feature, and a keyword feature, wherein the basic scene feature is used for Representing a specific scenario, the information category feature is used to represent a user's interest category, and the keyword feature is used to represent a user's specific interest point.
  • the basic scene features are, for example, time, geographical location, light, etc., which are used to represent specific scenes, and may generally be real-world scenes, such as: work, home, road, etc.; information category characteristics are: entertainment, sports, finance, etc., used to represent the user's interest category; the keyword feature is the keyword itself extracted from the information, used to represent the user's more granular interest points, for example: the 18th National Congress, the Olympic Games, etc.
  • the specific parameters of the user scene model can be obtained by means of machine learning.
  • the user scene model describes the scores after the content features extracted by the information are mapped to the three dimensions of the basic scene feature, the information category feature, and the keyword feature, and are presented in a matrix form in a specific application process.
  • the scene model may be obtained according to a historical browsing record of the first information.
  • the mobile terminal establishes the scene model according to the historical browsing records, where the historical browsing record of the first information includes: a recording time, a basic scene feature when the first information is received, an information category feature of the first information, a keyword feature of the first information, and reading action information of the user, where the historical browsing records corresponding to different recording times have the same or different weights. For example, it can be set such that records close to the current time have a higher weight, and records far from the current time have a lower weight.
  • the obtained historical information is represented in a matrix form, and the records in each piece of information are one row in the matrix, including basic scene features, information category features, keyword features, information content features, and user ratings of the information.
  • the range of the user's evaluation score for the information is set, for example, to 1-5 points, and the score may be obtained according to the user's reading action implicit feedback.
  • the mapping relationship between the user reading action and the information evaluation score may be as follows:
  • rough browsing (a reading time of less than 1 minute, say) corresponds to a lower score, while reading for a long time and forwarding the information corresponds to the highest score (for example, 5 points).
  • the mobile terminal can assign the user's score to each piece of information according to the above mapping relationship, and accordingly learn the user scene model corresponding to each scene; for example, the i-th preset scene is Si, and the corresponding scene model is Ui.
  • Step 203 If the correlation between the first information and the first preset scenario is greater than the first preset value, the mobile terminal sends the to the wearable device when determining that the user is in the first preset scenario. First information.
  • otherwise, the first information may be deleted or filtered out as spam.
  • the mobile terminal may acquire a parameter for determining a basic scene feature, and determine whether the current context is the first preset scenario. When it is determined that the user is in the first preset scenario, the first information is sent to the wearable device. .
  • the degree of association between the user's current situation and the scene {Si} corresponding to each scene model {Ui} may be calculated, that is, the set of similarity values {wi} between the user's current situation and each scene model {Ui} is calculated in each dimension.
  • the presentation priority of the first information is then calculated from the set of scores {Vi} of the first information under each scene model {Ui} together with the set of similarity values {wi}; when the presentation priority is greater than or equal to the second preset value, the first information is sent to the wearable device.
  • the presentation priority of the first information may be calculated, for example, as the following weighted sum:
  • P = Σi (wi × Vi), where Vi is the score of the information in the scene model Ui, wi is the similarity value between the user's current situation and the scene Si corresponding to the scene model Ui, and P is the presentation priority of the first information.
  • otherwise, the mobile terminal may wait for a certain time and then perform step 203 again, to find a suitable opportunity to present the first information.
  • Step 204 The wearable device presents the first information to a user.
  • the wearable device may further capture the user's reading action information and send it to the mobile terminal; after receiving it, the mobile terminal may parse the content features of the information and update the user scene model accordingly.
  • the common user reading actions that need to be captured may include: whether the user marks the information as spam, whether the user clicks to read the information, whether the user browses the information roughly or carefully, and whether the user reads the information for a long time and forwards it.
  • These actions can be used to obtain the user's rating of the information, so that the information is used as an updated corpus for updating the user scene model.
  • for content that is of particular interest to the user, such as content that the user reads for a long time and forwards (implicitly scored 5 points by the user), the keywords of the read content can be obtained, and the specific events that the user pays attention to can be accurately extracted, thereby updating the keyword features of the user scene model.
  • the user scene model is updated, that is, the newly obtained user reading record is added to the training corpus, and the user scene matrix is recalculated.
  • the update learning of the model may include the following two methods: fixing the size of the training corpus, continuously adding new records, and deleting the oldest records; or assigning different weights to each record according to the old and new order of time.
  • in this embodiment, after the mobile terminal receives the first information provided by the communication network, the correlation between the first information and the at least one preset scene is determined, and if the correlation between the first information and the first preset scene is greater than the first preset value, the mobile terminal sends the first information to the wearable device when it determines that the user is in the first preset scenario, so that the wearable device presents the first information to the user. That is, only information that is important, urgent, or related to the user's current situation is pushed or presented according to the user's current situation; therefore, the interference caused to the user by information unrelated to the current situation can be reduced, and since the presented information is what the user needs, the possibility that the user reads the information carefully can be improved and the presentation effect can be improved.
  • FIG. 3 is a flowchart of Embodiment 3 of the information presentation method of the present invention.
  • This embodiment is implemented jointly by a mobile terminal and a wearable device; the mobile terminal is, for example, a mobile phone, and the wearable device is, for example, a smart watch.
  • Figure 3 only shows the steps performed by the mobile terminal.
  • Compared with the embodiment shown in FIG. 2, this embodiment adds a process in which the mobile terminal establishes a scene model.
  • the method in this embodiment may include:
  • Step 301 The mobile terminal establishes a scene model.
  • The scene model covers at least one scene and at least one piece of information and includes at least three types of features: basic scene features, information category features, and keyword features, where the basic scene features are used to represent a specific scene, the information category features are used to represent the user's interest categories, and the keyword features are used to represent the user's specific points of interest.
  • The basic scene features are, for example, time, geographic location, and light, and represent specific scenes, which may generally be real-world scenes such as work, home, or on the road; the information category features are, for example, entertainment, sports, and finance, and represent the user's interest categories; the keyword features are the keywords themselves extracted from the information and represent the user's more fine-grained points of interest, such as the 18th National Congress or the Olympic Games.
  • the specific parameters of the user scene model can be obtained by means of machine learning.
  • The user scene model describes the scores obtained after the content features extracted from the information are mapped onto the three dimensions of basic scene features, information category features, and keyword features; in a specific application it is represented in matrix form.
  • the scene model may be obtained according to a historical browsing record.
  • Specifically, the mobile terminal establishes the scene model according to historical browsing records, where records corresponding to different browsing times have the same or different weights. For example, records close to the current time may be given a higher weight, and records far from the current time a lower weight.
  • FIG. 4 is a schematic diagram of an example of a user scene training model.
  • The obtained historical information is represented in matrix form as shown in FIG. 4; each information record is one row in the matrix and includes the basic scene features, information category features, keyword features, information content features, and the user's rating of the information.
  • The range of the user's evaluation score for a piece of information may be set, for example, to 1-5 points, and the score may be obtained from implicit feedback in the user's reading actions.
  • The mapping relationship between the user's reading actions and the information evaluation score may be defined in advance; for example, "rough browsing" may be defined as a reading time of less than 1 minute.
  • According to this mapping relationship, the mobile terminal can assign the user's score to each piece of information and learn the user scene model corresponding to each scene; for example, the i-th preset scene is Si and the corresponding scene model is Ui.
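  • The full mapping is not reproduced here; the sketch below assumes one plausible assignment of the captured reading actions to the 1-5 scale (only "rough browsing corresponds to a reading time under 1 minute" is stated explicitly above, the remaining values are illustrative assumptions).

```python
# Assumed mapping from captured reading actions to implicit 1-5 scores.
def implicit_score(action):
    mapping = {
        "marked_as_spam": 1,
        "not_read": 2,
        "rough_browsing": 3,        # e.g. reading time < 1 minute
        "careful_reading": 4,
        "read_long_and_forwarded": 5,
    }
    return mapping[action]
```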
  • the matrix decomposition technique can be used to reduce the dimension of the matrix.
  • For example, singular value decomposition (SVD) can factor the high-dimensional matrix into the product of two matrices, (scene × hidden state) and (hidden state × information). When the online application system later obtains a new piece of information, it still extracts the content features, converts them into vector form, and calculates a score for each scene separately; at the same time, it uses the context information of the current situation to predict the scene that best matches the current context.
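  • A minimal sketch of this decomposition follows, using numpy; the matrix contents and the number of hidden states k are illustrative.

```python
import numpy as np

# Factor the (scene x information) score matrix into (scene x hidden state) and
# (hidden state x information) via truncated SVD.
R = np.array([[5.0, 1.0, 0.0],
              [0.0, 4.0, 2.0],
              [1.0, 0.0, 5.0]])        # rows: scenes, columns: information items
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                  # number of hidden states kept
scene_hidden = U[:, :k] * s[:k]        # (scene x hidden state)
hidden_info = Vt[:k, :]                # (hidden state x information)
R_approx = scene_hidden @ hidden_info  # low-rank estimate of the per-scene scores
```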
  • Step 302 The mobile terminal receives the first information provided by the communication network.
  • Step 303 The mobile terminal calculates, according to the feature vector of the information and the scene model matrix, the correlation between the first information and the at least one preset scene.
  • The scene model matrix is obtained from the historical browsing records of the first information, where a historical browsing record of the first information includes: a recording time, the basic scene features when the first information was received, the information category features of the first information, the keyword features of the first information, and the user's reading action information.
  • the mobile terminal may separately calculate the relevance of the information corresponding to each scene model ⁇ Ui ⁇ , and the correlation may be a real number.
  • Step 304 If the mobile terminal determines that the correlation between the first information and all preset scenarios is less than the first preset value, the first information is used as spam.
  • the first preset value may be set to 3, that is, corresponding to the “rough browsing” in the user's reading action, that is, the information that the user does not browse at all is directly filtered as spam.
  • Step 305 If the mobile terminal determines that the correlation between the first information and the at least one preset scenario is greater than or equal to the first preset value, the mobile terminal sends a request for the scenario context information to the wearable device.
  • Here, the first preset scenario is a preset scenario whose correlation with the first information is greater than or equal to the first preset value, and the scenario context information is used to determine whether the current context is the first preset scenario, that is, whether the first information is suitable for presentation in the current context.
  • Step 306 The mobile terminal acquires scene context information from the wearable device.
  • the scenario context information is used to determine a current context of the user.
  • Step 307 The mobile terminal calculates a similarity between the current context and each preset scenario according to the scenario context information, and calculates a presentation priority of the first information.
  • Step 308 When the priority is greater than the second preset value, the mobile terminal sends the first information to the wearable device, so that the wearable device presents the first information to the user.
  • In this case, the information is determined to be strongly related to the user's current context and is pushed immediately.
  • Step 309 When the priority is less than the second preset value, the mobile terminal waits for the preset time, and then performs steps 305 to 308 again.
  • Specifically, a timer may be started; after waiting a certain period of time, steps 305 to 308 are performed again, that is, the request for scenario context information is sent to the wearable device again and the presentation priority of the first information in the user's current context is determined anew, as sketched below.
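  • For illustration, a minimal sketch of this wait-and-retry behaviour follows; the injected callables, the wait interval, and the threshold value are assumptions.

```python
import time

def push_when_suitable(first_info, request_context, compute_priority, send,
                       wait_seconds=300, second_preset_value=3.0):
    while True:
        context = request_context()                        # steps 305-306: ask the wearable for context
        priority = compute_priority(first_info, context)   # step 307: priority in the current context
        if priority >= second_preset_value:                # step 308: strongly related -> push now
            send(first_info)
            return
        time.sleep(wait_seconds)                           # step 309: wait, then re-evaluate
```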
  • For example, the second preset value may be set to 3; that is, only information that the user would at least read carefully is treated as strongly related to the user's current context.
  • Step 310 The mobile terminal receives the reading action information sent by the wearable device, and updates the scene model according to the reading action information.
  • In this way, the user's latest interests can be captured, and the matching between information and the user's scene becomes more accurate.
  • In this embodiment, the mobile terminal establishes a scene model of the user and determines whether currently received information is suitable for presentation, thereby pushing information according to the user's current context, reducing harassment of the user, and improving the effectiveness of information presentation; by receiving the user's reading action information and updating the scene model accordingly, the accuracy of matching information to the user's scene is further improved.
  • FIG. 5 is a flowchart of Embodiment 4 of the information presentation method of the present invention.
  • The execution entity of this embodiment is a wearable device, and the method of this embodiment can be combined with the method performed by the mobile terminal in the foregoing embodiments to complete the presentation of information according to the user's scene.
  • the method in this embodiment may include:
  • Step 501 The wearable device receives the first information sent by the mobile terminal.
  • The first information is received by the mobile terminal from the network side and is determined by the mobile terminal to be strongly related to the user's current context; that is, the mobile terminal sends the first information to the wearable device only after determining that its presentation priority in the current context is greater than the second preset value.
  • the first information may include any one of the following: text information, image information, audio information, and video information.
  • Step 502 The wearable device presents the first information to a user.
  • The wearable device directly presents the first information received from the mobile terminal.
  • That is, the analysis of whether the first information is strongly related to the current context is entirely completed by the mobile terminal, and the wearable device is only responsible for presenting the information.
  • the method may include:
  • Step 503 The wearable device captures the reading action information, and sends the reading action information to the mobile terminal, so that the mobile terminal updates the scene model according to the reading action information.
  • The reading action information includes at least: whether the first information is deleted, whether the first information is read, the duration of reading the first information, and whether the first information is forwarded.
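  • A minimal sketch of the reading-action report sent from the wearable device to the mobile terminal follows; the field names are illustrative, not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ReadingAction:
    info_id: str
    deleted: bool        # whether the first information was deleted
    read: bool           # whether the first information was read
    read_seconds: float  # duration of reading the first information
    forwarded: bool      # whether the first information was forwarded
```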
  • In this embodiment, the wearable device receives the first information that the mobile terminal has determined to be strongly related to the user's current context, and presents the first information to the user. Only information that is important and urgent, or strongly related to the user's current context, is pushed or presented according to the user's current context, which reduces interference from information unrelated to the current context; and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • the wearable device may further receive a scenario context information request sent by the mobile terminal, and send scenario context information to the mobile terminal. Therefore, the mobile terminal can determine, according to the scene context information, which preset scene the current context belongs to, or the similarity between the current context and each preset scene.
  • FIG. 6 is a flowchart of Embodiment 5 of the information presentation method of the present invention.
  • In this embodiment, a smart device independently completes the entire process of receiving information from the network side, analyzing the information and the user's scene, and presenting the information.
  • The smart device in this embodiment may be a mobile terminal or a wearable device; for example, a mobile phone or a smart watch performs information presentation on its own. As shown in FIG. 6, the method in this embodiment may include:
  • Step 601 The smart device receives the first information provided by the communication network.
  • the first information may be various information such as text information, images, audio and video information.
  • Step 602 The smart device determines a correlation between the first information and at least one preset scenario.
  • the smart device may parse the received first information and extract related features.
  • Taking text information as an example, the extracted features may include: characteristics of the original author of the information; the social relationship between the original author and the user; content features, such as which words the content contains, the frequency of those words, whether a certain keyword or symbol is included, and the similarity between the information and information the user has liked in the past; and global features, such as how many users globally also like the information and how many other pieces of information contain links to it.
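  • A minimal sketch of extracting such content features from a text message follows; the keyword list and the set of historical terms are illustrative stand-ins for the features listed above.

```python
import re
from collections import Counter

def extract_content_features(text, keywords=("Olympic", "meeting"), history_terms=frozenset()):
    words = re.findall(r"\w+", text.lower())
    term_freq = Counter(words)                                   # which words occur, and how often
    has_keyword = any(k.lower() in term_freq for k in keywords)  # presence of watched keywords/symbols
    overlap = len(set(words) & history_terms) / (len(set(words)) or 1)  # similarity to past favourites
    return {"term_freq": term_freq, "has_keyword": has_keyword, "history_overlap": overlap}
```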
  • The preset scenes may include, for example, work, home, and on the road, and the scene model may be used to define which content features of information the user pays attention to in a specific scene.
  • In this way, the degree of the user's attention in each corresponding preset scene, that is, the correlation between the first information and each preset scene, can be obtained.
  • the smart device may calculate a correlation between the first information and at least one preset scenario according to a feature vector of the information and a scene model matrix, where the correlation may be a real number.
  • Optionally, before the smart device determines the correlation between the first information and the at least one preset scenario, the method further includes:
  • the smart device establishes a scene model, and the scene model includes at least one scene and at least one piece of information.
  • The scene model includes at least three types of features: basic scene features, information category features, and keyword features, where the basic scene features are used to represent a specific scene, the information category features are used to represent the user's interest categories, and the keyword features are used to represent the user's specific points of interest.
  • The basic scene features are, for example, time, geographic location, and light, and represent specific scenes, which may generally be real-world scenes such as work, home, or on the road; the information category features are, for example, entertainment, sports, and finance, and represent the user's interest categories; the keyword features are the keywords themselves extracted from the information and represent the user's more fine-grained points of interest, such as the 18th National Congress or the Olympic Games.
  • the specific parameters of the user scene model can be obtained by means of machine learning.
  • The user scene model describes the scores obtained after the content features extracted from the information are mapped onto the three dimensions of basic scene features, information category features, and keyword features; in a specific application it is represented in matrix form.
  • the scene model may be obtained according to a historical browsing record.
  • Specifically, the smart device establishes the scene model according to historical browsing records, where records corresponding to different browsing times have the same or different weights. For example, records close to the current time may be given a higher weight, and records far from the current time a lower weight.
  • The obtained historical information can be expressed in matrix form, that is, a scene matrix model is established; each information record is one row in the matrix and includes the basic scene features, information category features, keyword features, information content features, and the user's rating of the information.
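  • For illustration, a minimal sketch of one such record (one matrix row before numerical encoding) follows; the field names and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class BrowsingRecord:
    timestamp: float                              # recording time
    basic_scene: dict                             # e.g. {"time_of_day": "evening", "location": "home", "light": "dim"}
    category: str                                 # information category feature, e.g. "sports"
    keywords: list = field(default_factory=list)  # keyword features extracted from the information
    content_features: dict = field(default_factory=dict)
    rating: int = 3                               # user's 1-5 evaluation of the information
```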
  • The range of the user's evaluation score for a piece of information may be set, for example, to 1-5 points, and the score may be obtained from implicit feedback in the user's reading actions.
  • The mapping relationship between the user's reading actions and the information evaluation score may be defined in advance; for example, "rough browsing" may be defined as a reading time of less than 1 minute.
  • According to this mapping relationship, the smart device can assign the user's score to each piece of information and learn the user scene model corresponding to each scene; for example, the i-th preset scene is Si and the corresponding scene model is Ui.
  • the matrix decomposition technique can be used to reduce the dimension of the matrix.
  • For example, singular value decomposition (SVD) can factor the high-dimensional matrix into the product of two matrices, (scene × hidden state) and (hidden state × information). When the online application system later obtains a new piece of information, it still extracts the content features, converts them into vector form, and calculates a score for each scene separately.
  • the smart device calculates a correlation between the first information and at least one preset scenario according to a feature vector of the information and a scene model matrix, where the correlation is a real number.
  • Step 603 If the correlation between the first information and a first preset scene is greater than the first preset value, the smart device presents the first information to the user when it determines that the user is in the first preset scene.
  • step 603 can include the following sub-steps:
  • Sub-step 1 The smart device acquires scene context information, where the scenario context information is used to determine a current context of the user;
  • Sub-step 2 The smart device calculates a similarity between the current context and each preset scenario according to the scenario context information.
  • Sub-step 3 The smart device calculates a presentation priority of the first information in the current context according to the similarity between the current context and each preset scenario, and the correlation between the first information and each preset scenario. And when the priority is greater than the second preset value, presenting the first information to the user.
  • Specifically, the scene matrix described above may be used to calculate, from the context information of the current context, the similarity between the current context and each preset scene, and to predict the preset scene with which the current context is most consistent.
  • The score of the first information under each scene model {Ui} is calculated separately. If the correlation between the first information and a first preset scene is greater than the first preset value, the first information is content that the user pays attention to in at least one scene; whether the current context is that first preset scene may then be determined by calculating the presentation priority of the first information.
  • The presentation priority of the first information is compared with the second preset value; if it is greater than the second preset value, the first information is determined to be strongly related to the user's current context and is pushed immediately.
  • Otherwise, step 603 may be performed again after waiting for a preset time.
  • The smart device acquires the scenario context information by managing one or more context data sources, which include, but are not limited to, various types of sensors, social media records, and application logs, so as to obtain context information about the user's current environment and behavior.
  • Ways to obtain various types of context information include, but are not limited to: obtaining time information through the local device clock and/or a time server; obtaining geographic location information through GPS and/or cellular triangulation; detecting whether the current environment is noisy through a microphone; detecting the intensity of ambient light through a light sensor; detecting whether the user is moving through a motion sensor; recording the user's behavioral activities through social media records; and obtaining information about the user's schedule, such as to-do items, contacts, and calendars, through application logs such as e-mail.
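  • A minimal sketch of gathering such context from these sources follows; the field names, the noise threshold, and the read_* methods of the sensor object are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SceneContext:
    timestamp: float      # local clock / time server
    location: tuple       # GPS or cellular triangulation, e.g. (lat, lon)
    noisy: bool           # microphone
    light_level: float    # ambient light sensor
    moving: bool          # motion sensor
    calendar_busy: bool   # application logs, e.g. e-mail / calendar

def collect_context(sensors):
    """`sensors` is any object exposing the read_* methods assumed below."""
    return SceneContext(
        timestamp=sensors.read_clock(),
        location=sensors.read_gps(),
        noisy=sensors.read_microphone_level() > 0.6,
        light_level=sensors.read_light(),
        moving=sensors.read_motion(),
        calendar_busy=sensors.read_calendar_busy(),
    )
```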
  • the similarity values of the current context of the user and the scene ⁇ Si ⁇ corresponding to each scene model ⁇ Ui ⁇ may be respectively calculated in each dimension.
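  • For illustration, the per-scene similarity values {wi} could be computed, for example, as cosine similarities between a feature vector describing the current context and the feature vector of each scene {Si}; encoding contexts and scenes as numeric vectors is an assumption of this sketch.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def scene_similarities(context_vec, scene_vecs):
    return [cosine(context_vec, s) for s in scene_vecs]  # one w_i per scene S_i
```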
  • The presentation priority P of the first information can then be calculated, for example, as the similarity-weighted sum P = Σi wi·Vi, where Vi is the score of the information under the scene model Ui and wi is the similarity between the user's current context and the scene Si corresponding to the scene model Ui.
  • the optional step 604 is further included as follows:
  • Step 604 If it is determined that the correlation between the first information and all preset scenarios is less than the first preset value, the first information is used as spam.
  • the first preset value may be set to 3, that is, corresponding to the “rough browsing” in the above-mentioned user reading action, that is, the information that the user does not browse at all is directly filtered as spam.
  • step 605 may be further included:
  • Step 605 The smart device captures reading action information of the user, and updates the scene model according to the reading action information.
  • the smart device captures user read action information as implicit feedback of the user on the current scene model, and parses the content feature of the information, and updates the user scene model accordingly.
  • the common reading action of the user who needs to capture may be: whether the user sets the information as spam, whether the user clicks to read the information, the user roughly or carefully browses the information, and the user reads and forwards the information for a long time.
  • These actions, as implicit feedback can be used to obtain the user's rating of the information, so that the information is used as an updated corpus for updating the user scene model.
  • For content of particular interest to the user, the keywords of the read content can be extracted and the specific events that the user pays attention to can be accurately identified, thereby updating the keyword features of the user scene model.
  • the corresponding user scene model is updated according to the captured user action feedback and content feature extraction. That is, the newly obtained user reading record is added to the training corpus, and the user scene matrix is recalculated.
  • The update learning of the model may use either of the following two methods: 1) fixing the size of the training corpus, continuously adding new records and deleting the oldest ones; or 2) assigning each record a different weight according to its recency.
  • In this embodiment, after the first information provided by the communication network is received, the correlation between the first information and the at least one preset scenario is determined; if the correlation between the first information and a first preset scenario is greater than the first preset value, the smart device presents the first information to the user when it determines that the user is in the first preset scenario. In other words, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented; therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 7 is a flowchart of Embodiment 6 of the information presentation method of the present invention.
  • In this embodiment, a smart device independently completes the entire process of receiving information from the network side, analyzing the information and the user's scene, and presenting the information.
  • The smart device in this embodiment may be a mobile terminal or a wearable device; for example, a mobile phone or a smart watch performs information presentation on its own.
  • The difference between this embodiment and the embodiment shown in FIG. 6 is that, in this embodiment, the smart device directly determines the presentation priority of the first information after receiving it, instead of first determining whether the first information is spam and then determining whether it is suitable for presentation in the current context.
  • the method in this embodiment may include:
  • Step 701 The smart device receives the first information provided by the communication network.
  • the first information includes any one of the following: text information, image information, audio information, and video information.
  • Step 702 The smart device determines a presentation priority of the first information in a current context.
  • the method for determining the priority of the first information in the current context may be different.
  • For example, the preset scenario corresponding to the first information may be determined first, and it is then determined, according to the scenario context information, whether the user's current context belongs to the preset scenario corresponding to the first information; if yes, the first information is determined to have a higher priority in the current context, and otherwise it has a lower priority in the current context.
  • Alternatively, the scenario context information may be acquired first; the similarity between the current context and each preset scenario is calculated according to the scenario context information, and the correlation between the first information and the preset scenarios is calculated; the presentation priority of the first information is then calculated from the similarity and the correlation between the first information and the preset scenarios.
  • the above is merely an example of determining the priority of presentation, and embodiments of the present invention are not limited thereto.
  • Step 703 When the presentation priority of the first information in the current context is greater than or equal to the second preset value, the smart device presents the first information to the user.
  • Otherwise, step 702 and step 703 are performed again after waiting for a preset time.
  • That is, the scenario context information may be acquired again after the preset time; the similarity between the current context and each preset scenario is calculated according to the scenario context information; and the presentation priority of the first information is calculated from the similarity and the correlation between the first information and the preset scenarios.
  • The smart device presents the first information to the user once the presentation priority of the first information in the current context is greater than or equal to the second preset value.
  • the smart device may further capture the user's reading action information, and send the user reading action information to the mobile terminal.
  • the mobile terminal may parse the content feature of the information, and update the user scene model accordingly.
  • the common reading action of the user who needs to capture may be: whether the user sets the information as spam, whether the user clicks to read the information, the user roughly or carefully browses the information, and the user reads and forwards the information for a long time.
  • These actions can be used to obtain the user's rating of the information, so that the information is used as an updated corpus for updating the user scene model.
  • For content of particular interest to the user, the keywords of the read content can be extracted and the specific events that the user pays attention to can be accurately identified, thereby updating the keyword features of the user scene model.
  • the user scene model is updated, that is, the newly obtained user reading record is added to the training corpus, and the user scene matrix is recalculated.
  • The update learning of the model may use either of the following two methods: fixing the size of the training corpus, continuously adding new records and deleting the oldest ones; or assigning each record a different weight according to its recency.
  • In this embodiment, after the first information is received, its presentation priority in the current context is determined and, when appropriate, the first information is presented to the user. In other words, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented; therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 8 is a schematic structural diagram of Embodiment 1 of the information screening apparatus of the present invention.
  • the apparatus 800 of this embodiment may include: a receiving module 801, a processing module 802, and a sending module 803, where
  • the receiving module 801 is configured to receive first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the processing module 802 is configured to determine a presentation priority of the first information in a current context of the user
  • the sending module 803 is configured to send the first information to the wearable device when the presentation priority of the first information in the current context is greater than or equal to the second preset value, so that the wearable device is to the user Presenting the first information.
  • processing module 802 is specifically configured to:
  • acquire, from the wearable device, scenario context information of the wearable device, where the scenario context information is used to determine the current context of the user; calculate, according to the scenario context information, a preset scenario matching the current context, and calculate the correlation between the first information and the preset scenario; and determine, according to the correlation, the presentation priority of the first information in the current context.
  • processing module 802 is further configured to:
  • when the presentation priority of the first information in the current context is less than the second preset value, wait for a preset time and then perform the step of determining the presentation priority of the first information in the current context again.
  • That is, the scenario context information of the wearable device is acquired from the wearable device again after the preset time, where the scenario context information is used to determine the current context of the user; a preset scenario matching the current context is calculated according to the scenario context information, and the correlation between the first information and the preset scenario is calculated; and the presentation priority of the first information in the current context is determined according to the correlation.
  • processing module 802 is further configured to:
  • establish a scene model, where the scene model is used to determine the correlation between the first information and a preset scene, and the scene model includes at least three types of features: basic scene features, information category features, and keyword features.
  • Optionally, the processing module 802 is specifically configured to: parse the features of the first information, and calculate the correlation between the first information and the preset scene according to the scene model.
  • Optionally, the processing module 802 is specifically configured to: establish the scene model according to the historical browsing records of the first information, where a historical browsing record of the first information includes: a recording time, the basic scene features when the first information was received, the information category features of the first information, the keyword features of the first information, and the user's reading action information, and where historical browsing records corresponding to different recording times have the same or different weights.
  • The receiving module 801 is further configured to: after the first information is sent to the wearable device, receive the reading action information sent by the wearable device, where the scene model is updated according to the reading action information.
  • the device in this embodiment can be used in conjunction with the information presentation device shown in FIG. 10 to implement the technical solution of the method embodiment shown in FIG. 1.
  • the implementation principle is similar, and details are not described herein again.
  • The apparatus of this embodiment, after receiving the first information provided by the communication network, determines the presentation priority of the first information in the current context and sends the first information to the wearable device, so that the wearable device presents the first information to the user. In other words, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented; therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 9 is a schematic structural diagram of Embodiment 2 of the information screening apparatus of the present invention.
  • the apparatus 900 of this embodiment may include: a receiving module 901, a processing module 902, and a sending module 903, where
  • the receiving module 901 is configured to receive first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • The processing module 902 is configured to: determine the correlation between the first information and at least one preset scenario; and, when the correlation between the first information and at least one preset scenario is greater than or equal to the first preset value, determine the presentation priority of the first information in the user's current context;
  • the sending module 903 is configured to send the first information to the wearable device when the presentation priority of the first information in the current context is greater than or equal to the second preset value, so that the wearable device is to the user Presenting the first information.
  • processing module 902 is further configured to:
  • after determining the correlation between the first information and the at least one preset scenario, when the correlation between the first information and all preset scenarios is less than the first preset value, use the first information as spam.
  • processing module 902 is specifically configured to:
  • acquire, from the wearable device, scenario context information of the wearable device, where the scenario context information is used to determine the current context of the user; calculate, according to the scenario context information, the similarity between the current context and each preset scenario; and calculate the presentation priority of the first information according to the similarity and the correlation between the first information and the preset scenario.
  • processing module 902 is further configured to:
  • after determining the presentation priority of the first information in the current context, when the presentation priority of the first information in the current context is less than the second preset value, wait for a preset time and then perform the step of determining the presentation priority of the first information in the current context again.
  • That is, the scenario context information of the wearable device is acquired from the wearable device again after the preset time, where the scenario context information is used to determine the current context of the user; a preset scenario matching the current context is calculated according to the scenario context information, and the correlation between the first information and the preset scenario is calculated; and the presentation priority of the first information in the current context is determined according to the correlation.
  • processing module 902 is further configured to:
  • before determining the correlation between the first information and the at least one preset scenario, establish a scene model, where the scene model is used to determine the correlation between the first information and a preset scene, and the scene model includes at least three types of features: basic scene features, information category features, and keyword features.
  • Optionally, the processing module 902 is specifically configured to: parse the features of the first information, and calculate the correlation between the first information and the at least one preset scenario according to the scene model.
  • Optionally, the processing module 902 is specifically configured to: establish the scene model according to the historical browsing records of the first information, where a historical browsing record of the first information includes: a recording time, the basic scene features when the first information was received, the information category features of the first information, the keyword features of the first information, and the user's reading action information, and where historical browsing records corresponding to different recording times have the same or different weights.
  • The receiving module 901 is further configured to: after the first information is sent to the wearable device, receive the reading action information sent by the wearable device, where the scene model is updated according to the reading action information.
  • the device in this embodiment can be used in conjunction with the information presentation device shown in FIG. 10 or FIG. 11 to implement the technical solution of the method embodiment shown in FIG. 2 or FIG. 3, and the implementation principle is similar, and details are not described herein again.
  • With the apparatus of this embodiment, after the mobile terminal receives the first information provided by the communication network, the correlation between the first information and the at least one preset scenario is determined; if the correlation between the first information and a first preset scenario is greater than the first preset value, the mobile terminal sends the first information to the wearable device when it determines that the user is in the first preset scenario, so that the wearable device presents the first information to the user. In other words, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented; therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 10 is a schematic structural diagram of Embodiment 1 of an information presentation apparatus according to the present invention.
  • the apparatus 1000 of this embodiment may include: a receiving module 1001 and a presentation module 1002, where
  • The receiving module 1001 is configured to receive the first information sent by the mobile terminal after the mobile terminal determines that the presentation priority of the first information in the current context is greater than the second preset value, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the presentation module 1002 is configured to present the first information to a user.
  • The device of this embodiment may be used in conjunction with the information screening device shown in FIG. 8 or FIG. 9 to implement the technical solutions of the method embodiments shown in FIG. 1, FIG. 2, or FIG. 3 together with the corresponding method embodiment on the wearable device side; the implementation principles are similar and are not described here again.
  • With the device of this embodiment, the first information filtered by the mobile terminal is received; because the correlation between the first information and at least one preset scenario is greater than the first preset value, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented according to the user's current context. Therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 11 is a schematic structural diagram of Embodiment 2 of the information presentation apparatus of the present invention. As shown in FIG. 11, the apparatus 1100 of this embodiment may further include: a capture module 1003 and a sending module 1004, where:
  • The capture module 1003 is configured to: after the presentation module 1002 presents the first information to the user, capture the reading action information, where the reading action information includes at least: whether the first information is deleted, whether the first information is read, the duration of reading the first information, and whether the first information is forwarded; and the sending module 1004 may be configured to send the reading action information to the mobile terminal, so that the mobile terminal updates the scene model according to the reading action information.
  • the receiving module 1001 is further configured to receive a scenario context information request sent by the mobile terminal, and the sending module 1004 is further configured to send scenario context information to the mobile terminal.
  • Optionally, the presentation module 1002 is specifically configured to: issue prompt information.
  • The device of this embodiment can be used in conjunction with the information screening device shown in FIG. 8 or FIG. 9 to implement the technical solutions of the method embodiments shown in FIG. 1, FIG. 2, or FIG. 3; the implementation principles and technical effects are similar and are not described here again.
  • FIG. 12 is a schematic structural diagram of Embodiment 3 of an information presentation apparatus according to the present invention.
  • the information presentation apparatus 1200 of this embodiment can separately complete the process of receiving, filtering, and presenting information.
  • the apparatus of this embodiment may include: a receiving module 1201, a processing module 1202, and a presentation module 1203, where
  • the receiving module 1201 is configured to receive first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the processing module 1202 is configured to determine a correlation between the first information and the at least one preset scenario, and determine, when the correlation between the first information and the at least one preset scenario is greater than or equal to the first preset value, a presentation priority of the first information in a current context;
  • the presentation module 1203 is configured to present the first information to the user when the presentation priority of the first information in the current context is greater than or equal to the second preset value.
  • processing module 1202 is further configured to:
  • after determining the correlation between the first information and the at least one preset scenario, when the correlation between the first information and all preset scenarios is less than the first preset value, use the first information as spam.
  • The processing module 1202 is specifically configured to: acquire scenario context information, where the scenario context information is used to determine the current context of the user; calculate, according to the scenario context information, the similarity between the current context and each preset scenario; and calculate the presentation priority of the first information according to the similarity and the correlation between the first information and the preset scenario.
  • processing module 1202 is specifically configured to:
  • after determining the presentation priority of the first information in the current context, when the presentation priority of the first information in the current context is less than the second preset value, wait for a preset time and then perform the step of determining the presentation priority of the first information in the current context again.
  • That is, the scenario context information is acquired again after the preset time; the similarity between the current context and each preset scenario is calculated according to the scenario context information; and the presentation priority of the first information is calculated according to the similarity and the correlation between the first information and the preset scenario.
  • processing module 1202 is further configured to:
  • before determining the correlation between the first information and the at least one preset scenario, establish a scene model, where the scene model is used to determine the correlation between the first information and a preset scene, and the scene model includes at least three types of features: basic scene features, information category features, and keyword features.
  • Optionally, the processing module 1202 is specifically configured to: parse the features of the first information, and calculate the correlation between the first information and the at least one preset scenario according to the scene model.
  • Optionally, the processing module 1202 is specifically configured to: establish the scene model according to the historical browsing records of the first information, where a historical browsing record of the first information includes: a recording time, the basic scene features when the first information was received, the information category features of the first information, the keyword features of the first information, and the user's reading action information, and where historical browsing records corresponding to different recording times have the same or different weights.
  • Optionally, the presentation module 1203 is specifically configured to: issue prompt information.
  • the device in this embodiment can be used to implement the technical solution of the method embodiment shown in FIG. 6, and the implementation principle is similar, and details are not described herein again.
  • With the device of this embodiment, the received first information is filtered; because the correlation between the first information and at least one preset scenario is greater than the first preset value, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented according to the user's current context. Therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • FIG. 13 is a schematic structural diagram of Embodiment 4 of the information presentation apparatus of the present invention. As shown in FIG. 13, on the basis of the foregoing apparatus, the apparatus 1300 of this embodiment may further include a capture module 1204.
  • The capture module 1204 may be configured to: after the first information is presented to the user when its presentation priority in the current context is greater than the second preset value, capture the user's reading action information, and update the scene model according to the reading action information.
  • FIG. 14 is a schematic structural diagram of Embodiment 5 of an information presentation apparatus according to the present invention.
  • the information presentation apparatus of this embodiment can separately complete the process of receiving, filtering, and presenting information.
  • the apparatus 1400 of this embodiment may include: a receiving module 1401, a processing module 1402, and a presentation module 1403, where
  • the receiving module 1401 is configured to receive first information provided by the communication network, where the first information includes any one of the following: text information, image information, audio information, and video information;
  • the processing module 1402 is configured to determine a presentation priority of the first information in a current context
  • the presentation module 1403 is configured to present the first information to the user when the presentation priority of the first information in the current context is greater than or equal to the second preset value.
  • The processing module 1402 is specifically configured to: acquire scenario context information, where the scenario context information is used to determine the current context of the user; calculate, according to the scenario context information, a preset scenario matching the current context, and calculate the correlation between the first information and the preset scenario; and determine, according to the correlation, the presentation priority of the first information in the current context.
  • The processing module 1402 is further specifically configured to: after determining the presentation priority of the first information in the current context, when the presentation priority of the first information in the current context is less than the second preset value, wait for a preset time and then acquire the scenario context information again; calculate, according to the scenario context information, the similarity between the current context and each preset scenario; and calculate the presentation priority of the first information according to the similarity and the correlation between the first information and the preset scenario.
  • the device in this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 7.
  • the implementation principle is similar, and details are not described herein again.
  • With the apparatus of this embodiment, after the first information provided by the communication network is received, the presentation priority of the first information in the current context is determined and the first information is presented to the user. In other words, only information that is important and urgent, or strongly related to the user's current context, is pushed or presented; therefore, interference from information unrelated to the current context is reduced, and because the presented information is what the user needs, the likelihood that the user reads it carefully increases and the presentation effect improves.
  • All or some of the steps of the foregoing method embodiments may be accomplished by hardware related to program instructions.
  • the aforementioned program can be stored in a computer readable storage medium.
  • When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Abstract

Embodiments of the present invention provide an information presentation method and device. The method includes: a mobile terminal receives first information provided by a communication network; the mobile terminal determines a presentation priority of the first information in the user's current context; and when the presentation priority of the first information in the current context is greater than or equal to a second preset value, the mobile terminal sends the first information to a wearable device, so that the wearable device presents the first information to the user. The information presentation method and device provided by the embodiments of the present invention can present information according to the user's scenario, improve the presentation effect, and reduce interference to the user from information unrelated to the current context.

Description

信息呈现方法和设备 技术领域
本发明实施例涉及通信技术,尤其涉及一种信息呈现方法和设备。
背景技术
随着通信技术以及便携式电子设备的发展,用户对移动终端以及穿戴式设备的使用依赖越来越大。而移动终端或穿戴式设备上接收的信息泛滥,这些大量的信息中,除了垃圾信息之外,还有些内容虽然是用户需要的,但是用户不期望在当前时刻和环境下阅读,例如,用户当前处于度假期间,不期望接收同工作相关的信息;或者,用户当前处于会议期间,此时不期望接收娱乐信息。尤其是对于穿戴式设备,例如智能手表,用户更希望在大部分时候智能手表安静的存在,只在真正有要紧之事需要唤起用户注意的时候才推送信息,而不是任何时候都被各种无缝信息和通知占满。
而现有技术中,移动终端或穿戴式设备通常不加过滤的向用户呈现所有接收到的信息,导致信息对用户造成干扰,并且呈现的效果不好。
发明内容
本发明实施例提供一种信息呈现方法和设备,以克服现有技术的信息呈现效果不好,对用户干扰大的问题。
第一方面,本发明实施例提供一种信息呈现方法,包括:
移动终端接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
所述移动终端确定所述第一信息在用户当前情境下的呈现优先级;
当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
在第一方面的第一种可能的实现方式中,所述移动终端确定所述第一信息在当前情境下的呈现优先级,包括:
所述移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第一方面或第一方面的第一种可能的实现方式,在第二种可能的实现方式中,在所述移动终端计算所述第一信息在当前情境的呈现优先级之后,还包括:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第一方面、第一方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,在所述移动终端确定所述第一信息在当前情境的呈现优先级之前,还包括:
所述移动终端建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第一方面的第三种可能的实现方式,在第四种可能的实现方式中,所述移动终端确定所述第一信息与所述预设场景的相关度,包括:
所述移动终端解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与所述预设场景的相关度。
根据第一方面的第三种或第四种可能的实现方式,在第五种可能的实现方式中,所述移动终端建立场景模型,包括:
所述移动终端根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息 时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第一方面的第三种至第五种可能的实现方式中的任意一种,在第六种可能的实现方式中,在所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,还包括:
所述移动终端接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
第二方面,本发明实施例提供一种信息呈现方法,包括:
移动终端接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
所述移动终端确定所述第一信息与至少一个预设场景的相关度;
当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,所述移动终端确定所述第一信息在用户当前情境下的呈现优先级;
当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
在第二方面的第一种可能的实现方式中,在所述移动终端确定所述第一信息与至少一个预设场景的相关度之后,还包括:
当所述第一信息与所有预设场景的相关度均小于第一预设值时,所述移动终端将所述第一信息作为垃圾信息。
根据第二方面或第二方面的第一种可能的实现方式,在第二种可能的实现方式中,所述移动终端确定所述第一信息在当前情境的呈现优先级,包括:
所述移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
所述移动终端根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
所述移动终端根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第二方面、第二方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,在所述移动终端计算所述第一信息在当前情境下的呈现优先级之后,还包括:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第二方面、第二方面的第一种至第三种可能的实现方式中的任意一种,在第四种可能的实现方式中,在所述移动终端确定所述第一信息与至少一个预设场景的相关度之前,还包括:
所述移动终端建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第二方面的第四种可能的实现方式,在第五种可能的实现方式中,所述移动终端确定所述第一信息与至少一个预设场景的相关度,包括:
所述移动终端解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
根据第二方面的第四种或第五种可能的实现方式,在第六种可能的实现方式中,所述移动终端建立场景模型,包括:
所述移动终端根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第二方面的第四种至第六种可能的实现方式中的任意一种,在第七种可能的实现方式中,在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,还包括:
所述移动终端接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
第三方面,本发明实施例提供一种信息呈现方法,包括:
穿戴式设备接收移动终端在确定在当前情境的呈现优先级大于第二预设值后发送的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
所述穿戴式设备向用户呈现所述第一信息。
在第三方面的第一种可能的实现方式中,在所述穿戴式设备向用户呈现所述第一信息之后,还包括:
所述穿戴式设备捕获阅读动作信息,所述阅读动作信息至少包括:是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息。
所述穿戴式设备向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
根据第三方面或第三方面的第一种可能的实现方式,在第二种可能的实现方式中,还包括:
所述穿戴式设备接收所述移动终端发送的场景上下文信息请求;
所述穿戴式设备向所述移动终端发送场景上下文信息。
根据第三方面、第三方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,所述穿戴式设备向用户呈现所述第一信息,包括:
所述穿戴式设备发出提示信息。
第四方面,本发明实施例提供一种信息呈现方法,包括:
智能设备接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
所述智能设备确定所述第一信息与至少一个预设场景的相关度;
当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,所述智能设备确定所述第一信息在当前情境下的呈现优先级;
当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
在第四方面的第一种可能的实现方式中,在所述智能设备确定所述第一信息与至少一个预设场景的相关度之后,还包括:
当所述第一信息与所有预设场景的相关度均小于第一预设值时,所述智能设备将所述第一信息作为垃圾信息。
根据第四方面或第四方面的第一种可能的实现方式,在第二种可能的实现方式中,所述智能设备确定所述第一信息在当前情境的呈现优先级,包括:
所述智能设备获取场景上下文信息,所述场景上下文信息用于确定用户的当前情境;
所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第四方面、第四方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,在所述智能设备确定所述第一信息在当前情境的呈现优先级之后,还包括:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第四方面、第四方面的第一种至第三种可能的实现方式中的任意一种,在第四种可能的实现方式中,在所述智能设备确定所述第一信息与至少一个预设场景的相关度之前,还包括:
所述智能设备建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第四方面的第四种可能的实现方式,在第五种可能的实现方式中,所述智能设备确定所述第一信息与至少一个预设场景的相关度,包括:
所述智能设备解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
根据第四方面的第四种或第五种可能的实现方式,在第六种可能的实现方式中,所述智能设备建立场景模型,包括:
所述智能设备根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第四方面的第四种至第六种可能的实现方式中的任意一种,在第七种可能的实现方式中,在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向用户呈现所述第一信息之后,还包括:
所述智能设备捕获用户的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
根据第四方面、第四方面的第一种至第七种可能的实现方式中的任意一种,在第八种可能的实现方式中,所述智能设备向用户呈现所述第一信息,包括:
所述智能设备发出提示信息。
第五方面,本发明实施例提供一种信息呈现方法,包括:
智能设备接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
所述智能设备确定所述第一信息在当前情境下的呈现优先级;
当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
在第五方面的第一种可能的实现方式中,所述智能设备确定所述第一信息在当前情境下的呈现优先级,包括:
所述智能设备获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
所述智能设备根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
所述智能设备根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第五方面或第五方面的第一种可能的实现方式,在第二种可能的实现方式中,在所述智能设备确定所述第一信息在当前情境下的呈现优先级之后,还包括:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
第六方面,本发明实施例提供一种信息筛选装置,包括:
接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块,用于确定所述第一信息在用户当前情境下的呈现优先级;
发送模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
在第六方面的第一种可能的实现方式中,所述处理模块具体用于:
向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;根据所述相关度确定所述第一 信息在当前情境下的呈现优先级。
根据第六方面或第六方面的第一种可能的实现方式,在第二种可能的实现方式中,所述处理模块,还用于:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第六方面、第六方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,所述处理模块还用于:
建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第六方面的第三种可能的实现方式,在第四种可能的实现方式中,所述处理模块具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与所述预设场景的相关度。
根据第六方面的第三种或第四种可能的实现方式,在第五种可能的实现方式中,所述处理模块具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第六方面的第三种至第五种可能的实现方式中的任意一种,在第六种可能的实现方式中,所述接收模块还用于:
在所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读 动作信息更新所述场景模型。
第七方面,本发明实施例提供一种信息筛选装置,包括:
接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块,用于确定所述第一信息与至少一个预设场景的相关度;当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在用户当前情境下的呈现优先级;
发送模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
在第七方面的第一种可能的实现方式中,所述处理模块,还用于:
在确定所述第一信息与至少一个预设场景的相关度之后,当所述第一信息与所有预设场景的相关度均小于第一预设值时,将所述第一信息作为垃圾信息。
根据第七方面或第七方面的第一种可能的实现方式,在第二种可能的实现方式中,所述处理模块具体用于:
向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第七方面、第七方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,所述处理模块还用于:
在确定所述第一信息在当前情境下的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第七方面、第七方面的第一种至第三种可能的实现方式中的任意一种,在第四种可能的实现方式中,所述处理模块还用于:
在确定所述第一信息与至少一个预设场景的相关度之前,建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第七方面的第四种可能的实现方式,在第五种可能的实现方式中,所述处理模块具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
根据第七方面的第四种或第五种可能的实现方式,在第六种可能的实现方式中,所述处理模块具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第七方面的第四种至第六种可能的实现方式中的任意一种,在第七种可能的实现方式中,所述接收模块还用于:
在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
第八方面,本发明实施例提供一种信息呈现装置,包括:
接收模块,用于接收移动终端在确定在当前情境的呈现优先级大于第二预设值后发送的第一信息,所所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
呈现模块,用于向用户呈现所述第一信息。
在第八方面的第一种可能的实现方式中,还包括:
捕获模块,用于在所述呈现模块向用户呈现所述第一信息之后,捕 获阅读动作信息,所述阅读动作信息至少包括:是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息。
发送模块,用于向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
根据第八方面或第八方面的第一种可能的实现方式,在第二种可能的实现方式中:
所述接收模块,还用于接收所述移动终端发送的场景上下文信息请求;
所述发送模块,还用于向所述移动终端发送场景上下文信息。
根据第八方面、第八方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,所述呈现模块具体用于:
发出提示信息。
第九方面,本发明实施例提供一种信息呈现装置,包括:
接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块,用于确定所述第一信息与至少一个预设场景的相关度;
当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在当前情境下的呈现优先级;
呈现模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
在第九方面的第一种可能的实现方式中,所述处理模块还用于:
在确定所述第一信息与至少一个预设场景的相关度之后,当所述第一信息与所有预设场景的相关度均小于第一预设值时,将所述第一信息作为垃圾信息。
根据第九方面或第九方面的第一种可能的实现方式,在第二种可能的实现方式中,所述处理模块具体用于:
获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第九方面、第九方面的第一种或第二种可能的实现方式,在第三种可能的实现方式中,所述处理模块具体用于:
在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
根据第九方面、第九方面的第一种至第三种可能的实现方式中的任意一种,在第四种可能的实现方式中,所述处理模块还用于:
在确定所述第一信息与至少一个预设场景的相关度之前,建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
根据第九方面的第四种可能的实现方式,在第五种可能的实现方式中,所述处理模块具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
根据第九方面的第四种或第五种可能的实现方式,在第六种可能的实现方式中,所述处理模块具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
根据第九方面的第四种至第六种可能的实现方式中的任意一种,在第七种可能的实现方式中,还包括:
捕获模块，用于当所述第一信息在当前情境下的呈现优先级大于第二预设值时，向用户呈现所述第一信息之后，捕获用户的阅读动作信息，并根据所述阅读动作信息更新所述场景模型。
根据第九方面、第九方面的第一种至第七种可能的实现方式中的任意一种,在第八种可能的实现方式中,所述呈现模块具体用于:
发出提示信息。
第十方面,本发明实施例提供一种信息呈现装置,包括:
接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块,用于确定所述第一信息在当前情境下的呈现优先级;
呈现模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
在第十方面的第一种可能的实现方式中,所述处理模块具体用于:
获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
根据第十方面或第十方面的第一种可能的实现方式,在第二种可能的实现方式中,所述处理模块具体用于:
在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
本发明实施例提供的信息呈现方法和设备，通过在接收通信网络提供的第一信息后，确定该第一信息在用户当前情境下的呈现优先级，只有当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时，才向用户呈现所述第一信息，也就是说，能够根据用户当前所处情境，只推送或呈现重要紧急或同用户当前情境强相关的信息，因此，可以减少同当前情境无关的信息对用户的干扰，并且由于所呈现的信息是用户需要的，能够提高用户认真阅读所述信息的可能性，能够改善呈现效果。
本发明实施例提供的信息呈现方法和设备，通过在接收通信网络提供的第一信息后，确定该第一信息与至少一个预设场景的相关度，只有当所述第一信息与第一预设场景的相关度大于第一预设值时，所述移动终端才在确定用户处于所述第一预设场景时，向用户呈现所述第一信息，也就是说，能够根据用户当前所处情境，只推送或呈现重要紧急或同用户当前情境强相关的信息，因此，可以减少同当前情境无关的信息对用户的干扰，并且由于所呈现的信息是用户需要的，能够提高用户认真阅读所述信息的可能性，能够改善呈现效果。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案，下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍，显而易见地，下面描述中的附图是本发明的一些实施例，对于本领域普通技术人员来讲，在不付出创造性劳动的前提下，还可以根据这些附图获得其他的附图。
图1为本发明信息呈现方法实施例一的信令流程图;
图2为本发明信息呈现方法实施例二的信令流程图;
图3为本发明信息呈现方法实施例三的流程图;
图4为用户场景训练模型的一个示例的示意图;
图5为本发明信息呈现方法实施例四的流程图;
图6为本发明信息呈现方法实施例五的流程图;
图7为本发明信息呈现方法实施例六的流程图;
图8为本发明信息筛选装置实施例一的结构示意图;
图9为本发明信息筛选装置实施例二的结构示意图;
图10为本发明信息呈现装置实施例一的结构示意图;
图11为本发明信息呈现装置实施例二的结构示意图;
图12为本发明信息呈现装置实施例三的结构示意图;
图13为本发明信息呈现装置实施例四的结构示意图;
图14为本发明信息呈现装置实施例五的结构示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
针对目前移动终端上接收的信息泛滥，推送和呈现的效果不好，且对用户造成骚扰的问题，本发明提出一种根据用户情境的自适应信息推送呈现方法，由移动终端或智能设备在接收到信息后，根据用户当前所处情境及相应的用户场景模型，选择重要紧急或同用户当前情境强相关的信息，进行呈现。其中，本发明可以有两种实现方式，第一种是移动终端和穿戴式设备配合完成，其中，移动终端起到信息锚点和场景分析的作用，负责向穿戴式设备转发和推送信息，由移动终端从网络侧接收信息，并对信息以及用户所处的场景进行分析，由穿戴式设备进行信息的呈现。之后，穿戴式设备还可以捕获用户的阅读动作，反馈给移动终端，用于移动终端分析并更新用户场景模型。其中，移动终端可以为手机，穿戴式设备可以为智能手表、智能眼镜等设备。第二种是由一个智能设备单独完成从网络侧接收信息、对信息以及用户所处的场景进行分析，以及呈现整个过程。第二种情况下的智能设备可以为移动终端或穿戴式设备。下面将分别进行描述。
本发明各个实施例中的通信网络可以是蜂窝网络，如全球移动通信系统(Global System for Mobile Communications,简称:GSM)、通用移动通信系统(Universal Mobile Telecommunications System,简称:UMTS)、长期演进(Long Term Evolution,简称:LTE)、码分多址(Code Division Multiple Access,简称:CDMA)等，也可以是无线局域网(Wireless Local Area Networks,简称:WLAN)、近距离通信网络(Near Field Communication,简称:NFC)等网络。
本发明各个实施例中的移动终端包括但不限于手机、智能手机、平板电脑、其他手持设备等，本发明各个实施例中的穿戴式设备包括但不限于智能手表、智能眼镜等。
图1为本发明信息呈现方法实施例一的信令流程图,本实施例由移动终端和穿戴式设备共同完成。通常,用户可以同时拥有移动终端和穿戴式设备,其中,移动终端的处理能力相对穿戴式设备更强大,可以承担各种任务分析和处理的职责;穿戴式设备由于其穿戴特性,同用户关系最密切,可以用于实时提醒或呈现重要紧急的信息内容。如图1所示,本实施例的方法可以包括:
步骤101、移动终端接收通信网络提供的第一信息。
其中,该第一信息可以是文本信息、图像、音视频信息等各类信息。
步骤102、所述移动终端确定所述第一信息在用户当前情境下的呈现优先级。
可选地,步骤102中确定所述第一信息在用户当前情境下的呈现优先级的方法可以包括:
移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,其中,场景上下文信息用于确定所述用户的当前情境;根据所述场景上下文信息计算当前情境与各个预设场景的相似度,并计算所述第一信息与所述预设场景的相关度;然后,根据所述相似度和所述第一信息与所述预设场景的相关度计算所述第一信息在当前情境下的呈现优先级。
具体实现时,移动终端可以解析所述接收的第一信息,并提取相关特征。以文本信息为例,提取的特征可以包括:该信息原作者的特征;该信息原作者与用户的社交关系;内容特征,如:内容包含了哪些词,这些词出现的频度,是否包含某关键词或符号,用户历史上喜爱的信息与该信息的相似度;全局特征,如:全局范围内多少用户也喜爱该信息,有多少其他信息内包含指向该信息的链接。
然后根据一定的算法计算该第一信息与各个预设场景的相关度，或者，计算该第一信息对应于哪个预设场景，可以将适合推送该第一信息的预设场景，即高度相关(相关度大于预设值)的预设场景称为第一预设场景。预设场景例如可以包括：工作、家中、路上等，可以采用场景模型定义用户在对应特定场景下关注的信息所具有的内容特征。通过计算所述第一信息在各场景模型中的用户评价得分，可以获知该第一信息在各对应预设场景下用户的关注程度，即该第一信息与各个预设场景的相关度。
在具体实现时,所述移动终端可以根据信息的特征向量和场景模型矩阵计算所述第一信息与至少一个预设场景的相关度,所述相关度可以为实数。
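为便于理解上述根据信息的特征向量和场景模型矩阵计算相关度的过程，下面给出一段示意性的Python代码，其中numpy的用法、场景个数、特征维度以及具体数值均为说明用途而假设，并非本发明实施例所限定的实现方式：

```python
import numpy as np

def compute_relevance(feature_vec, scene_model_matrix):
    # 第一信息的内容特征向量与各场景模型(按行排列)做内积, 得到与各预设场景的相关度(实数)
    return scene_model_matrix @ feature_vec

# 假设有3个预设场景(工作/家中/路上)和4维内容特征, 数值仅为演示
scene_models = np.array([
    [0.9, 0.1, 0.0, 0.3],   # 工作场景模型 U1
    [0.2, 0.8, 0.5, 0.1],   # 家中场景模型 U2
    [0.1, 0.3, 0.9, 0.4],   # 路上场景模型 U3
])
first_info_features = np.array([0.5, 0.0, 0.7, 0.2])
relevance = compute_relevance(first_info_features, scene_models)  # 约为 [0.51, 0.47, 0.76]
```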
在此之前,移动终端可以先建立场景模型,在优选的一种实施方式中,场景模型可包含至少3类特征:基本场景特征、信息类别特征、关键词特征,其中,所述基本场景特征用于表示具体场景,所述信息类别特征用于表示用户的兴趣类别,所述关键词特征用于表示用户的具体兴趣点。
基本场景特征例如为:时间、地理位置、光线等,用于表示具体场景,一般可以为现实世界的场景,如:工作、家中、路上等;信息类别特征例如为:娱乐、体育、财经等,用于表示用户的兴趣类别;关键词特征为从信息中抽取出的关键词本身,用于表示用户更加细粒度的兴趣点,例如:十八大、奥运会等。
用户场景模型的具体参数可以通过机器学习的方式获取。用户场景模型描述了对所述信息提取的内容特征映射到基本场景特征、信息类别特征、关键词特征三个维度后的得分,在具体应用过程中,以矩阵形式呈现。
可选地，所述场景模型可以根据历史浏览记录获得，具体地，所述移动终端根据历史浏览记录建立所述场景模型，其中，所述历史浏览记录中的信息根据浏览时间具有相同或不同的权重。例如，可以设置为：距离当前时间近的历史记录具有较高的权重，距离当前时间远的历史记录具有较低的权重。
用户场景模型参数可以通过机器学习方式获取。在训练阶段，将获得的历史信息表示成矩阵形式，每条信息中的记录为矩阵中的一行，包括基本场景特征、信息类别特征、关键词特征、信息内容特征、用户对此信息的评分。
在一种实现方式中,用户对信息的评价分值的范围例如设置为1-5分,该评分可以根据用户的阅读动作隐式反馈获取。作为一种实施方式,用户阅读动作同信息评价分值的映射关系可以如下:
空:未获得用户对该信息的评价;
1:用户将该信息置为垃圾信息;
2:该信息未被点击阅读;
3:粗略浏览(例如阅读时间小于1分钟);
4:仔细阅读(例如阅读时间大于3分钟);
5:长时间阅读并转发(例如阅读时间大于3分钟并转发)。
移动终端可以根据如上映射关系,对每条信息给出用户的评分分值,并据此学习得到对应不同场景的用户场景模型,例如第i个预设场景为Si,其对应的场景模型为Ui。
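为便于理解上述阅读动作同信息评价分值的映射关系，下面给出一段示意性的Python代码，其中的字段名(marked_spam、opened、read_minutes、forwarded)均为说明用途而假设；对于阅读时间介于1分钟与3分钟之间的情况，上文未作规定，示例中不给出评分：

```python
def implicit_score(action):
    """根据用户阅读动作返回隐式评价分值(1-5); 未获得评价时返回 None(对应"空")。"""
    if action.get("marked_spam"):
        return 1                      # 用户将该信息置为垃圾信息
    if not action.get("opened"):
        return 2                      # 该信息未被点击阅读
    minutes = action.get("read_minutes", 0)
    if minutes > 3 and action.get("forwarded"):
        return 5                      # 长时间阅读并转发
    if minutes > 3:
        return 4                      # 仔细阅读
    if minutes < 1:
        return 3                      # 粗略浏览
    return None                       # 1~3分钟之间未作规定, 此示例不给出评分

score = implicit_score({"opened": True, "read_minutes": 4, "forwarded": True})  # 返回 5
```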
同时,移动终端还可以确定当前情境是否为适合推送该第一信息的第一预设场景,例如,若所述第一信息在当前情境的呈现优先级大于或等于第一预设值,则可以确定为适合推送。
步骤103、当所述第一信息在当前情境的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
而当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次执行步骤102。具体的执行过程可以为:移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息;所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
具体地,移动终端可以获取用于确定基本场景特征的参数,识别当前情境是否为第一预设场景,当确定用户处于所述第一预设场景时,向穿戴式设备发送所述第一信息。
或者，移动终端也可以根据基本场景特征，计算用户当前所处的情境同各场景模型{Ui}所对应场景{Si}的关联程度，即在各个维度上分别计算用户当前所处情境与各场景模型{Ui}对应场景{Si}的相似度值的集合{wi}。
可选地,还可以根据该相似度值的集合{wi},计算该第一信息对应各个场景模型{Ui}的评分集合{Vi},计算该第一信息的呈现优先级,当呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息。
具体地,所述第一信息的呈现优先级可以采用如下公式计算:
P = w1*V1 + w2*V2 + … + wj*Vj
其中,Vi为所述信息在场景模型Ui中的得分;wi为用户当前情境与场景模型Ui对应场景Si的相似度值,P为第一信息的呈现优先级。
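下面给出按上述公式计算呈现优先级的一段示意性Python代码，其中的相似度与得分数值仅为示例假设：

```python
def presentation_priority(similarities, scores):
    # P = w1*V1 + w2*V2 + ... + wj*Vj
    return sum(w * v for w, v in zip(similarities, scores))

w = [0.7, 0.2, 0.1]       # 当前情境与各场景 Si 的相似度值集合 {wi}
v = [4.2, 2.5, 3.0]       # 第一信息在各场景模型 Ui 中的得分集合 {Vi}
P = presentation_priority(w, v)   # 0.7*4.2 + 0.2*2.5 + 0.1*3.0 = 3.74
# 若 P 大于或等于第二预设值(例如取3), 则向穿戴式设备发送该第一信息
```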
进一步地,当所述移动终端判断出用户当前不处于第一预设场景时,或者,第一信息的呈现优先级小于第二预设值时,则可以等待一定的时间之后,再重新执行步骤102以及步骤103,以找到呈现该第一信息的合适的时机。
步骤104、所述穿戴式设备向用户呈现所述第一信息。
具体地，所述穿戴式设备可以在界面上显示所述第一信息，还可以通过发出提示音、震动和/或其他通知方式，提醒用户查看。
可选地,在此之后,穿戴式设备还可以捕获用户的阅读动作信息,并将该用户阅读动作信息发送给移动终端,移动终端接收后可以解析所述信息的内容特征,据此更新用户场景模型。
常见的需要捕捉的用户的阅读动作可以是:用户是否将该信息置为垃圾信息,用户是否点击阅读该信息,用户粗略或仔细浏览该信息,用户长时间阅读并转发该信息等。这些动作作为隐式反馈,可以用于获取用户对该信息的评分情况,从而将所述信息作为更新的语料,用于更新用户场景模型。特别的,对于用户特别关注的内容,如用户长时间阅读并转发的内容(用户隐式打分为5分),可获取所述阅读内容的关键词,精确提取用户关注内容的具体事件,从而更新用户场景模型的关键词特征。
更新用户场景模型，即为将最新获得的用户阅读记录添加到训练语料中，重新计算用户场景矩阵。具体的，模型的更新学习可以包括如下两种方法：固定训练语料的大小，不断添加新记录，并删除最老的记录；或者，按照时间的新旧顺序，为每条记录分配不同的权重。
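下面用一段示意性的Python代码说明上述两种更新学习方法的思路，其中的类名、衰减因子以及训练函数接口均为说明用途而假设：

```python
from collections import deque

class SceneModelUpdater:
    def __init__(self, max_records=1000, decay=0.95):
        # 方法一: 固定训练语料大小, deque 在超出上限时自动删除最老的记录
        self.records = deque(maxlen=max_records)
        # 方法二: 按时间新旧为每条记录分配衰减权重, decay 为假设的衰减因子
        self.decay = decay

    def add_record(self, record):
        self.records.append(record)        # 加入最新获得的用户阅读记录

    def record_weights(self):
        n = len(self.records)
        # 距当前时间越近的记录权重越高, 越旧的记录权重越低
        return [self.decay ** (n - 1 - i) for i in range(n)]

    def retrain(self, train_fn):
        # train_fn 为重新计算用户场景矩阵的训练函数, 其接口为示例假设
        return train_fn(list(self.records), self.record_weights())
```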
本实施例,通过移动终端接收通信网络提供的第一信息后,确定所述第一信息在当前情境的呈现优先级,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
上述实施例,对于步骤102中移动终端确定所述第一信息在用户当前情境的呈现优先级,一种可能的实现方式为:移动终端向所述穿戴式设备获取场景上下文信息;根据所述场景上下文信息计算同当前情境匹配的预设场景,即确定当前情境属于哪个预设场景,并计算所述第一信息与该预设场景的相关度,并根据所述相关度确定第一信息在当前情境的呈现优先级。具体地,可以事先确定第一信息与该预设场景的相关度与第一信息在当前情境的呈现优先级的映射关系,在一个实施例中,可以直接将第一信息与该预设场景的相关度作为第一信息在当前情境的呈现优先级。
例如,第一信息为娱乐信息,而移动终端根据场景上下文信息确定当前情境为会议场景,那么可以预见,移动终端计算所述第一信息与所述预设场景的相关度小于第一预设值,例如为1或0,相应地,移动终端可以据此确定第一信息在当前情境的呈现优先级也很小,小于第二预设值,即确定第一信息不适合在当前情境推送给用户。
需要说明的是,对于移动终端确定所述第一信息在用户当前情境的呈现优先级,还可以有其他的算法和其他的实现方式,本发明实施例对此不做限定。
图2为本发明信息呈现方法实施例二的信令流程图，本实施例由移动终端和穿戴式设备共同完成。通常，用户可以同时拥有移动终端和穿戴式设备，其中，移动终端的处理能力相对穿戴式设备更强大，可以承担各种任务分析和处理的职责；穿戴式设备由于其穿戴特性，同用户关系最密切，可以用于实时提醒或呈现重要紧急的信息内容。本实施例与图1所示实施例的方法的实现过程不同，如图2所示，本实施例的方法可以包括：
步骤201、移动终端接收通信网络提供的第一信息。
其中,该第一信息可以是文本信息、图像、音视频信息等各类信息。
步骤202、所述移动终端确定所述第一信息与至少一个预设场景的相关度。
具体地,移动终端可以解析所述接收的第一信息,并提取相关特征。以文本信息为例,提取的特征可以包括:该信息原作者的特征;该信息原作者与用户的社交关系;内容特征,如:内容包含了哪些词,这些词出现的频度,是否包含某关键词或符号,用户历史上喜爱的信息与该信息的相似度;全局特征,如:全局范围内多少用户也喜爱该信息,有多少其他信息内包含指向该信息的链接。
然后根据一定的算法计算该第一信息与各个预设场景的相关度。预设场景例如可以包括：工作、家中、路上等，可以采用场景模型定义用户在对应特定场景下关注的信息所具有的内容特征。通过计算所述第一信息在各场景模型中的用户评价得分，可以获知该第一信息在各对应预设场景下用户的关注程度，即该第一信息与各个预设场景的相关度。
在具体实现时,所述移动终端可以根据信息的特征向量和场景模型矩阵计算所述第一信息与至少一个预设场景的相关度,所述相关度可以为实数。
在此之前,移动终端先建立场景模型,在优选的一种实施方式中,场景模型可包含至少3类特征:基本场景特征、信息类别特征、关键词特征,其中,所述基本场景特征用于表示具体场景,所述信息类别特征用于表示用户的兴趣类别,所述关键词特征用于表示用户的具体兴趣点。
基本场景特征例如为：时间、地理位置、光线等，用于表示具体场景，一般可以为现实世界的场景，如：工作、家中、路上等；信息类别特征例如为：娱乐、体育、财经等，用于表示用户的兴趣类别；关键词特征为从信息中抽取出的关键词本身，用于表示用户更加细粒度的兴趣点，例如：十八大、奥运会等。
用户场景模型的具体参数可以通过机器学习的方式获取。用户场景模型描述了对所述信息提取的内容特征映射到基本场景特征、信息类别特征、关键词特征三个维度后的得分,在具体应用过程中,以矩阵形式呈现。
可选地，所述场景模型可以根据第一信息的历史浏览记录获得，具体地，所述移动终端根据历史浏览记录建立所述场景模型，所述第一信息的历史浏览记录包括：记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息，其中，不同记录时间所对应的历史浏览记录具有相同或不同的权重。例如，可以设置为：距离当前时间近的历史记录具有较高的权重，距离当前时间远的历史记录具有较低的权重。
在训练阶段,将获得的历史信息表示成矩阵形式,每条信息中的记录为矩阵中的一行,包括基本场景特征、信息类别特征、关键词特征、信息内容特征、用户对此信息的评分。
在一种实现方式中,用户对信息的评价分值的范围例如设置为1-5分,该评分可以根据用户的阅读动作隐式反馈获取。作为一种实施方式,用户阅读动作同信息评价分值的映射关系可以如下:
空:未获得用户对该信息的评价;
1:用户将该信息置为垃圾信息;
2:该信息未被点击阅读;
3:粗略浏览(例如阅读时间小于1分钟);
4:仔细阅读(例如阅读时间大于3分钟);
5:长时间阅读并转发(例如阅读时间大于3分钟并转发)。
移动终端可以根据如上映射关系,对每条信息给出用户的评分分值,并据此学习得到对应不同场景的用户场景模型,例如第i个预设场景为Si,其对应的场景模型为Ui。
步骤203、若所述第一信息与第一预设场景的相关度大于第一预设值,则所述移动终端在确定用户处于所述第一预设场景时,向穿戴式设备发送所述第一信息。
相应地,若确定所述第一信息与所有预设场景的相关度均小于第一预设值,则可以将所述第一信息作为垃圾信息,删除或过滤该第一信息。
此处,移动终端可以获取用于确定基本场景特征的参数,识别当前情境是否为第一预设场景,当确定用户处于所述第一预设场景时,向穿戴式设备发送所述第一信息。
或者，也可以根据基本场景特征，计算用户当前所处的情境同各场景模型{Ui}所对应场景{Si}的关联程度，即在各个维度上分别计算用户当前所处情境与各场景模型{Ui}对应场景{Si}的相似度值的集合{wi}。
可选地,还可以根据该相似度值的集合{wi},计算该第一信息对应各个场景模型{Ui}的评分集合{Vi},计算该第一信息的呈现优先级,当呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息。
具体地,所述第一信息的呈现优先级可以采用如下公式计算:
P = w1*V1 + w2*V2 + … + wj*Vj
其中,Vi为所述信息在场景模型Ui中的得分;wi为用户当前情境与场景模型Ui对应场景Si的相似度值,P为第一信息的呈现优先级。
进一步地,当所述移动终端判断出用户当前不处于第一预设场景时,或者,第一信息的呈现优先级小于第二预设值时,则可以等待一定的时间之后,再重新执行步骤203,以找到呈现该第一信息的合适的时机。
步骤204、所述穿戴式设备向用户呈现所述第一信息。
进一步地,在此之后,穿戴式设备还可以捕获用户的阅读动作信息,并将该用户阅读动作信息发送给移动终端,移动终端接收后可以解析所述信息的内容特征,据此更新用户场景模型。
常见的需要捕捉的用户的阅读动作可以是：用户是否将该信息置为垃圾信息，用户是否点击阅读该信息，用户粗略或仔细浏览该信息，用户长时间阅读并转发该信息等。这些动作作为隐式反馈，可以用于获取用户对该信息的评分情况，从而将所述信息作为更新的语料，用于更新用户场景模型。特别的，对于用户特别关注的内容，如用户长时间阅读并转发的内容(用户隐式打分为5分)，可获取所述阅读内容的关键词，精确提取用户关注内容的具体事件，从而更新用户场景模型的关键词特征。
更新用户场景模型，即为将最新获得的用户阅读记录添加到训练语料中，重新计算用户场景矩阵。具体的，模型的更新学习可以包括如下两种方法：固定训练语料的大小，不断添加新记录，并删除最老的记录；或者，按照时间的新旧顺序，为每条记录分配不同的权重。
本实施例,通过移动终端接收通信网络提供的第一信息后,确定所述第一信息与至少一个预设场景的相关度,若所述第一信息与第一预设场景的相关度大于第一预设值,则所述移动终端在确定用户处于所述第一预设场景时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
图3为本发明信息呈现方法实施例三的流程图，本实施例由移动终端和穿戴式设备共同完成，移动终端例如为手机，穿戴式设备例如为智能手表。但图3仅仅示出了移动终端所执行的步骤。本实施例在图2所示实施例的基础上，增加了移动终端建立场景模型的过程。如图3所示，本实施例的方法可以包括：
步骤301、移动终端建立场景模型。
所述场景模型包括至少一个场景和至少一个信息,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征,其中所述基本场景特征用于表示具体场景,所述信息类别特征用于表示用户的兴趣类别,所述关键词特征用于表示用户的具体兴趣点。
具体地,基本场景特征例如为:时间、地理位置、光线等,用于表示具体场景,一般可以为现实世界的场景,如:工作、家中、路上等;信息类别特征例如为:娱乐、体育、财经等,用于表示用户的兴趣类别;关键词特征为从信息中抽取出的关键词本身,用于表示用户更加细粒度的兴趣点,例如:十八大、奥运会等。
用户场景模型的具体参数可以通过机器学习的方式获取。用户场景模型描述了对所述信息提取的内容特征映射到基本场景特征、信息类别特征、关键词特征三个维度后的得分,在具体应用过程中,以矩阵形式呈现。
可选地，所述场景模型可以根据历史浏览记录获得，具体地，所述移动终端根据历史浏览记录建立所述场景模型，其中，所述历史浏览记录中的信息根据浏览时间具有相同或不同的权重。例如，可以设置为：距离当前时间近的历史记录具有较高的权重，距离当前时间远的历史记录具有较低的权重。
如图4所示,图4为用户场景训练模型的一个示例的示意图。在训练阶段,将获得的历史信息表示成如图4所示的矩阵形式,每条信息中的记录为矩阵中的一行,包括基本场景特征、信息类别特征、关键词特征、信息内容特征、用户对此信息的评分。
在一种实现方式中,用户对信息的评价分值的范围例如设置为1-5分,该评分可以根据用户的阅读动作隐式反馈获取。作为一种实施方式,用户阅读动作同信息评价分值的映射关系可以如下:
空:未获得用户对该信息的评价;
1:用户将该信息置为垃圾信息;
2:该信息未被点击阅读;
3:粗略浏览(例如阅读时间小于1分钟);
4:仔细阅读(例如阅读时间大于3分钟);
5:长时间阅读并转发(例如阅读时间大于3分钟并转发)。
移动终端可以根据如上映射关系,对每条信息给出用户的评分分值,并据此学习得到对应不同场景的用户场景模型,例如第i个预设场景为Si,其对应的场景模型为Ui。
在实际应用中，由于矩阵的维度较高，可以使用矩阵分解的技术来对该矩阵进行降维处理，如奇异值分解(SVD)技术可以将高维矩阵转变成(场景-隐状态)，(隐状态-信息)两个矩阵的乘积。然后，实际的在线应用系统获得一条新信息后，仍抽取其内容特征，将其转变为向量形式，并分别计算每个场景下对它的评分。同时，计算当前情境的上下文信息，预测当前情境最符合的场景。
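下面给出利用奇异值分解对高维评分矩阵进行降维的一段示意性Python代码，其中矩阵的规模、保留的隐状态个数以及由新信息特征向量得到各场景评分的映射方式均为说明用途而假设：

```python
import numpy as np

# 假设 R 为(场景 x 信息)的高维评分矩阵, 此处用随机数生成仅作演示
R = np.random.rand(5, 200)

U, S, Vt = np.linalg.svd(R, full_matrices=False)   # 奇异值分解
k = 3                                              # 保留的隐状态个数(假设值)
scene_latent = U[:, :k] * S[:k]                    # (场景-隐状态) 矩阵
latent_info = Vt[:k, :]                            # (隐状态-信息) 矩阵

# 对一条新信息的特征向量 x, 先映射到隐状态空间, 再得到各场景下的评分(映射方式仅为示意)
x = np.random.rand(200)
scores_per_scene = scene_latent @ (latent_info @ x)
```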
步骤302、移动终端接收通信网络提供的第一信息。
步骤303、移动终端根据信息的特征向量和场景模型矩阵计算所述第一信息与至少一个预设场景的相关度。
其中,所述场景模型矩阵根据第一信息的历史浏览记录计算获得,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
具体地,在首次接收到第一信息时,移动终端可以分别计算所述信息对应各场景模型{Ui}的相关度,所述相关度可以为实数。
步骤304、若移动终端确定所述第一信息与所有预设场景的相关度均小于第一预设值,则将所述第一信息作为垃圾信息。
如果计算得到所述第一信息的所有评分都低于第一预设值，则判断所述第一信息为垃圾信息；否则，保存该评分集合{Vi}。在一种可能的实施方式中，第一预设值可以设置为3，即对应用户阅读动作中的“粗略浏览”，对于用户根本不会仔细浏览的信息，则作为垃圾信息，直接过滤。
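下面给出上述垃圾信息过滤判断的一段示意性Python代码，其中第一预设值取3仅为沿用上文的示例取值：

```python
FIRST_PRESET_VALUE = 3   # 第一预设值, 示例中取3(对应"粗略浏览")

def filter_or_keep(scores):
    """scores 为第一信息在各场景模型中的评分集合 {Vi}。"""
    if all(v < FIRST_PRESET_VALUE for v in scores):
        return None            # 所有评分均低于第一预设值: 判为垃圾信息, 直接过滤
    return list(scores)        # 否则保存评分集合 {Vi}, 供后续计算呈现优先级使用
```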
步骤305、若移动终端确定所述第一信息与至少一个预设场景的相关度均大于或等于第一预设值,则移动终端向所述穿戴式设备发送场景上下文信息的请求。
其中，与所述第一信息的相关度大于或等于第一预设值的预设场景称为第一场景，场景上下文信息用于确定当前情境是否为该第一场景，也就是确定第一信息是否适合在当前情境呈现。
步骤306、移动终端从穿戴式设备获取场景上下文信息。
其中,该场景上下文信息用于确定用户的当前情境。
步骤307、移动终端根据所述场景上下文信息计算当前情境与各个预设场景的相似度，并计算所述第一信息的呈现优先级。
步骤308、当所述优先级大于第二预设值时，则移动终端向所述穿戴式设备发送所述第一信息，以使所述穿戴式设备向用户呈现所述第一信息。
当所述优先级大于第二预设值时，可以判断所述信息同用户当前所处情境强相关，即刻推送该信息。
步骤309、当所述优先级小于所述第二预设值时,则移动终端等待预设的时间后,再次执行步骤305~步骤308。
具体地，可以启动定时器，等待一定的时间后再执行步骤305~步骤308，即向所述穿戴式设备发送场景上下文信息的请求，并重新判断所述第一信息在用户当前情境下的呈现优先级。
作为一种可能的实施例,第二预设值可以设置为3,即只有用户仔细阅读的信息,才是同用户当前情境强相关的信息。
步骤310、移动终端接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
对阅读动作信息进行分析,可以获取用户的最新兴趣,使信息与用户场景的匹配更准确。
本实施例,移动终端通过建立用户的场景模型,并据此确定当前接收的信息是否适合于向用户呈现,从而实现根据用户的当前情境来推送信息的方法,减少对用户的骚扰,并提高了信息呈现的有效性;并通过接收用户的阅读动作信息,并根据所述阅读动作信息更新所述场景模型,提高了信息与用户场景匹配的准确度。
图5为本发明信息呈现方法实施例四的流程图，本实施例的执行主体为穿戴式设备，本实施例的方法可以与图3所示的由移动终端执行的方法相结合，完成针对用户场景的信息呈现。如图5所示，本实施例的方法可以包括：
步骤501、穿戴式设备接收移动终端发送的第一信息。
其中,第一信息为移动终端从网络侧接收,并经移动终端判断与用户当前情境强相关的,即移动终端在确定在当前情境的呈现优先级大于第二预设值后才向穿戴式设备发送该第一信息。所述第一信息可以包括以下任意一种:文本信息、图像信息、音频信息和视频信息。
步骤502、所述穿戴式设备向用户呈现所述第一信息。
在这种实现方式中，穿戴式设备直接呈现来自于移动终端的第一信息，即将该第一信息是否与当前情境强相关的分析过程全部由移动终端完成，而穿戴式设备仅负责信息的呈现。
可选地,所述方法可以包括:
步骤503、所述穿戴式设备捕获阅读动作信息,并向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
其中,所述阅读动作信息至少包括:是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息。
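下面给出阅读动作信息的一种示意性数据结构及其发送形式的Python代码，其中的字段名与JSON序列化方式均为说明用途而假设，并非本发明实施例所限定的消息格式：

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ReadingAction:
    info_id: str        # 第一信息的标识(假设字段)
    deleted: bool       # 是否删除所述第一信息
    read: bool          # 是否阅读所述第一信息
    read_seconds: int   # 阅读所述第一信息的时长(秒)
    forwarded: bool     # 是否转发所述第一信息

action = ReadingAction("msg-001", deleted=False, read=True, read_seconds=210, forwarded=True)
payload = json.dumps(asdict(action))   # 序列化后发送给移动终端, 用于更新场景模型
```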
本实施例，穿戴式设备通过接收移动终端在确定所述第一信息在当前情境下的呈现优先级大于或等于第二预设值后发送的第一信息，并向用户呈现所述第一信息，实现根据用户当前所处情境，只推送或呈现重要紧急或同用户当前情境强相关的信息，因此，可以减少同当前情境无关的信息对用户的干扰，并且由于所呈现的信息是用户需要的，能够提高用户认真阅读所述信息的可能性，能够改善呈现效果。
进一步地,所述穿戴式设备还可以接收所述移动终端发送的场景上下文信息请求,并向所述移动终端发送场景上下文信息。从而所述移动终端能够根据场景上下文信息确定当前情境属于哪个预设场景,或者当前情境与各个预设场景的相似度。
图6为本发明信息呈现方法实施例五的流程图,本实施例由一个智能设备单独完成从网络侧接收信息、对信息以及用户所处的场景进行分析,以及呈现的整个过程。本实施例中的智能设备可以为移动终端或穿戴式设备,例如可以由移动终端或智能手表单独完成信息呈现的方法。如图6所示,本实施例的方法可以包括:
步骤601、智能设备接收通信网络提供的第一信息。
其中,该第一信息可以是文本信息、图像、音视频信息等各类信息。
步骤602、智能设备确定所述第一信息与至少一个预设场景的相关度。
具体地,智能设备可以解析所述接收的第一信息,并提取相关特征。以文本信息为例,提取的特征可以包括:该信息原作者的特征;该信息原作者与用户的社交关系;内容特征,如:内容包含了哪些词,这些词出现的频度,是否包含某关键词或符号,用户历史上喜爱的信息与该信息的相似度;全局特征,如:全局范围内多少用户也喜爱该信息,有多少其他信息内包含指向该信息的链接。
然后根据一定的算法计算该第一信息与各个预设场景的相关度。预设场景例如可以包括：工作、家中、路上等，可以采用场景模型定义用户在对应特定场景下关注的信息所具有的内容特征。通过计算所述第一信息在各场景模型中的用户评价得分，可以获知该第一信息在各对应预设场景下用户的关注程度，即该第一信息与各个预设场景的相关度。
在具体实现时,所述智能设备可以根据信息的特征向量和场景模型矩阵计算所述第一信息与至少一个预设场景的相关度,所述相关度可以为实数。
可选地，在所述智能设备确定所述第一信息与至少一个预设场景的相关度之前，还包括：
所述智能设备建立场景模型,所述场景模型包括至少一个场景和至少一个信息。在优选的一种实施方式中,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征,其中所述基本场景特征用于表示具体场景,所述信息类别特征用于表示用户的兴趣类别,所述关键词特征用于表示用户的具体兴趣点。
基本场景特征例如为:时间、地理位置、光线等,用于表示具体场景,一般可以为现实世界的场景,如:工作、家中、路上等;信息类别特征例如为:娱乐、体育、财经等,用于表示用户的兴趣类别;关键词特征为从信息中抽取出的关键词本身,用于表示用户更加细粒度的兴趣点,例如:十八大、奥运会等。
用户场景模型的具体参数可以通过机器学习的方式获取。用户场景模型描述了对所述信息提取的内容特征映射到基本场景特征、信息类别特征、关键词特征三个维度后的得分，在具体应用过程中，以矩阵形式呈现。
可选地，所述场景模型可以根据历史浏览记录获得，具体地，所述智能设备根据历史浏览记录建立所述场景模型，其中，所述历史浏览记录中的信息根据浏览时间具有相同或不同的权重。例如，可以设置为：距离当前时间近的历史记录具有较高的权重，距离当前时间远的历史记录具有较低的权重。
在训练阶段,可以将获得的历史信息表示成矩阵形式,即建立场景矩阵模型,每条信息中的记录为矩阵中的一行,包括基本场景特征、信息类别特征、关键词特征、信息内容特征、用户对此信息的评分。
在一种实现方式中,用户对信息的评价分值的范围例如设置为1-5分,该评分可以根据用户的阅读动作隐式反馈获取。作为一种实施方式,用户阅读动作同信息评价分值的映射关系可以如下:
空:未获得用户对该信息的评价;
1:用户将该信息置为垃圾信息;
2:该信息未被点击阅读;
3:粗略浏览(例如阅读时间小于1分钟);
4:仔细阅读(例如阅读时间大于3分钟);
5:长时间阅读并转发(例如阅读时间大于3分钟并转发)。
智能设备可以根据如上映射关系,对每条信息给出用户的评分分值,并据此学习得到对应不同场景的用户场景模型,例如第i个预设场景为Si,其对应的场景模型为Ui。
在实际应用中,由于场景矩阵的维度较高,可以使用矩阵分解的技术来对该矩阵进行降维处理,如奇异值分解(SVD)技术可以将高维矩阵转变成(场景-隐状态),(隐状态-信息)两个矩阵的乘积。然后,实际的在线应用系统获得一条新信息后,仍抽取其内容特征,将其转变为向量形式,并分别计算每个场景下对它的评分。
具体地,所述智能设备根据信息的特征向量和场景模型矩阵计算所述第一信息与至少一个预设场景的相关度,所述相关度为实数。
步骤603、若所述第一信息与第一预设场景的相关度大于第一预设值，则所述智能设备在确定用户处于所述第一预设场景时，向用户呈现所述第一信息。
可选地,步骤603可以包括以下子步骤:
子步骤一、所述智能设备获取场景上下文信息,所述场景上下文信息用于确定用户的当前情境;
子步骤二、所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
子步骤三、所述智能设备根据所述当前情境与各个预设场景的相似度,以及所述第一信息与各个预设场景的相关度计算所述第一信息在当前情境下的呈现优先级,当所述优先级大于第二预设值时,则向用户呈现所述第一信息。
具体实现时,可以采用上述的场景矩阵根据当前情境的上下文信息,计算当前情境与各个预设场景的相似度,并预测当前情境最符合的场景。
同时，分别计算所述信息对应各场景模型{Ui}的评分，如果所述第一信息与第一预设场景的相关度大于第一预设值，则说明该第一信息至少在一个场景(称为第一场景)下是用户关注的内容，此时需要判断当前情境是否为第一场景，可以通过计算第一信息的呈现优先级的方式确定。
具体地，将所述第一信息的呈现优先级同第二预设值比较，如果大于第二预设值，则判断所述第一信息同用户当前所处情境强相关，即刻推送该信息。
可选地,当所述优先级小于所述第二预设值时,则可以等待预设的时间后再次执行步骤603。
具体地，对于子步骤一，智能设备获取场景上下文信息，可以通过管理一个或多个上下文数据源获取，其中，一个或多个上下文数据源包括但不限于各类传感器、社交媒体记录、应用日志等，用以获取用户当前环境和用户行为的上下文信息。获取各类上下文信息的途径包括但不限于：通过本地装置时钟和/或时间服务器获得时间信息；通过GPS和/或蜂窝三角测量获得地理位置信息；通过麦克风检测当前环境是否嘈杂；通过光线传感器检测环境光线的强弱；通过运动传感器检测用户是否运动；通过社交媒体记录标记用户的行为活动；通过应用日志，如电子邮件、联系人、日历等获取用户的日程安排等信息。
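下面给出汇总场景上下文信息的一段示意性Python代码，其中的数据源接口、字段名及示例数值均为说明用途而假设：

```python
import datetime

def collect_scene_context(sensors, app_logs):
    # sensors / app_logs 为假设的数据源接口, 实际可来自各类传感器、社交媒体记录、应用日志等
    return {
        "time": datetime.datetime.now().isoformat(),      # 本地装置时钟/时间服务器
        "location": sensors.get("gps"),                   # GPS 或蜂窝三角测量
        "noise_level": sensors.get("microphone_db"),      # 麦克风检测环境是否嘈杂
        "light_level": sensors.get("light_lux"),          # 光线传感器检测光线强弱
        "is_moving": sensors.get("motion", False),        # 运动传感器检测用户是否运动
        "calendar_events": app_logs.get("calendar", []),  # 日历等应用日志中的日程安排
    }

context = collect_scene_context(
    sensors={"gps": (39.9, 116.4), "microphone_db": 35, "light_lux": 300, "motion": False},
    app_logs={"calendar": ["10:00 项目例会"]},
)
```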
对于子步骤二,智能设备获取到上下文信息之后,可以在各个维度上分别计算用户当前所处情境与各场景模型{Ui}对应场景{Si}的相似度值。
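下面给出计算当前情境与各预设场景相似度的一段示意性Python代码，其中采用余弦相似度仅作为一种示例度量，情境向量化的方式与数值也为假设，并非本发明实施例限定的计算方式：

```python
import numpy as np

def scene_similarities(context_vec, scene_vecs):
    """计算当前情境与各预设场景的相似度值集合 {wi}, 此处以余弦相似度为例。"""
    sims = []
    for s in scene_vecs:
        denom = np.linalg.norm(context_vec) * np.linalg.norm(s) + 1e-9
        sims.append(float(np.dot(context_vec, s) / denom))
    return sims

# 将场景上下文量化为向量后(量化方式为假设), 与各场景的基本场景特征向量比较
context_vec = np.array([1.0, 0.2, 0.0])
scene_vecs = [np.array([0.9, 0.1, 0.0]),    # 工作
              np.array([0.1, 0.8, 0.3]),    # 家中
              np.array([0.2, 0.3, 0.9])]    # 路上
w = scene_similarities(context_vec, scene_vecs)
```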
对于子步骤三,第一信息的呈现优先级P的计算方式可以如下:
P = w1*V1 + w2*V2 + … + wj*Vj
其中,Vi为所述信息在场景模型Ui中的得分;wi为用户当前情境与场景模型Ui对应场景Si的相似度值。
进一步地,在所述智能设备确定所述第一信息与至少一个预设场景的相关度之后,还可以包括如下可选的步骤604:
步骤604、若确定所述第一信息与所有预设场景的相关度均小于第一预设值,则将所述第一信息作为垃圾信息。
具体地，在上述计算第一信息与预设场景的相关度时，如果计算得到所述信息的所有评分都低于第一预设值，则判断所述信息为垃圾信息；否则，保存该评分集合{Vi}。在一种可能的实施例中，第一预设值可以设置为3，即对应上述用户阅读动作中的“粗略浏览”，对于用户根本不会浏览的信息，则作为垃圾信息，直接过滤。
进一步地,在所述智能设备向用户呈现所述第一信息之后,还可以包括如下可选的步骤605:
步骤605、所述智能设备捕获用户的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
具体地，所述智能设备捕获用户阅读动作信息，作为用户对当前的场景模型的隐式反馈，并解析所述信息的内容特征，据此更新用户场景模型。常见的需要捕捉的用户的阅读动作可以是：用户是否将该信息置为垃圾信息，用户是否点击阅读该信息，用户粗略或仔细浏览该信息，用户长时间阅读并转发该信息等。这些动作作为隐式反馈，可以用于获取用户对该信息的评分情况，从而将所述信息作为更新的语料，用于更新用户场景模型。特别的，对于用户特别关注的内容，如用户长时间阅读并转发的内容(用户隐式打分为5分)，可获取所述阅读内容的关键词，精确提取用户关注内容的具体事件，从而更新用户场景模型的关键词特征。
根据捕获到的用户动作反馈及内容特征提取，更新对应的用户场景模型。即，将最新获得的用户阅读记录添加到训练语料中，重新计算用户场景矩阵。具体的，模型的更新学习可以包括如下两种方法：1)固定训练语料的大小，不断添加新记录，并删除最老的记录；2)按照时间的新旧顺序，为每条记录分配不同的权重。
本实施例,通过智能设备接收通信网络提供的第一信息后,确定所述第一信息与至少一个预设场景的相关度,若所述第一信息与第一预设场景的相关度大于第一预设值,则所述智能设备在确定用户处于所述第一预设场景时,向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
图7为本发明信息呈现方法实施例六的流程图,本实施例由一个智能设备单独完成从网络侧接收信息、对信息以及用户所处的场景进行分析,以及呈现的整个过程。本实施例中的智能设备可以为移动终端或穿戴式设备,例如可以由移动终端或智能手表单独完成信息呈现的方法。本实施例与图6所示实施例的区别在于,本实施例中,智能设备在接收第一信息之后直接判断该第一信息的优先级,而不是先确定第一信息是否为垃圾信息,再判断是否适合在当前呈现。如图7所示,本实施例的方法可以包括:
步骤701、智能设备接收通信网络提供的第一信息。
其中,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息。
步骤702、所述智能设备确定所述第一信息在当前情境下的呈现优先级。
具体地，确定所述第一信息在当前情境的呈现优先级的方法可以有多种，例如，可以先确定该第一信息对应的预设场景，再根据场景上下文确定用户的当前情境是否符合该第一信息对应的预设场景，若是，则确定所述第一信息在当前情境的呈现优先级较高，反之，则所述第一信息在当前情境的呈现优先级较低。又例如，可以先获取场景上下文信息；根据所述场景上下文信息计算当前情境与各个预设场景的相似度，并计算所述第一信息与所述预设场景的相关度；然后，根据所述相似度和所述第一信息与所述预设场景的相关度计算所述第一信息的呈现优先级。以上仅为确定呈现优先级的示例，本发明实施例并不限于此。
步骤703、当所述第一信息在当前情境的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
相应地,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次执行步骤702和步骤703。具体地,可以等待预设时间后再次获取场景上下文信息;根据所述场景上下文信息计算当前情境与各个预设场景的相似度;根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。直到所述第一信息在当前情境的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
进一步地,在此之后,智能设备还可以捕获用户的阅读动作信息,并将该用户阅读动作信息发送给移动终端,移动终端接收后可以解析所述信息的内容特征,据此更新用户场景模型。
常见的需要捕捉的用户的阅读动作可以是:用户是否将该信息置为垃圾信息,用户是否点击阅读该信息,用户粗略或仔细浏览该信息,用户长时间阅读并转发该信息等。这些动作作为隐式反馈,可以用于获取用户对该信息的评分情况,从而将所述信息作为更新的语料,用于更新用户场景模型。特别的,对于用户特别关注的内容,如用户长时间阅读并转发的内容(用户隐式打分为5分),可获取所述阅读内容的关键词,精确提取用户关注内容的具体事件,从而更新用户场景模型的关键词特征。
更新用户场景模型，即为将最新获得的用户阅读记录添加到训练语料中，重新计算用户场景矩阵。具体的，模型的更新学习可以包括如下两种方法：固定训练语料的大小，不断添加新记录，并删除最老的记录；或者，按照时间的新旧顺序，为每条记录分配不同的权重。
本实施例,通过智能设备接收通信网络提供的第一信息后,确定所述第一信息在当前情境的呈现优先级,向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
图8为本发明信息筛选装置实施例一的结构示意图,如图8所示,本实施例的装置800可以包括:接收模块801、处理模块802和发送模块803,其中,
接收模块801,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块802,用于确定所述第一信息在用户当前情境下的呈现优先级;
发送模块803,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
可选地,所述处理模块802具体可以用于:
向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;根据所述相关度确定所述第一信息在当前情境的呈现优先级。
可选地,所述处理模块802,还可以用于:
当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次执行所述确定所述第一信息在当前情境的呈现优先级的步骤,即等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
可选地,所述处理模块802还用于:
建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
可选地,所述处理模块802具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与所述预设场景的相关度。
可选地,所述处理模块802具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
可选地,所述接收模块801还可以用于:
在所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
本实施例的装置,可以与图10所示的信息呈现装置相配合,用于执行图1所示方法实施例的技术方案,其实现原理类似,此处不再赘述。
本实施例的装置,通过移动终端接收通信网络提供的第一信息后,确定所述第一信息在当前情境的呈现优先级,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
图9为本发明信息筛选装置实施例二的结构示意图,如图9所示,本实施例的装置900可以包括:接收模块901、处理模块902和发送模块903,其中,
接收模块901,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块902,用于确定所述第一信息与至少一个预设场景的相关度;当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在用户当前情境下的呈现优先级;
发送模块903,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
可选地,所述处理模块902,还可以用于:
在确定所述第一信息与至少一个预设场景的相关度之后,当所述第一信息与所有预设场景的相关度均小于第一预设值时,将所述第一信息作为垃圾信息。
可选地,所述处理模块902具体用于:
向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
可选地,所述处理模块902还用于:
在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次执行确定所述第一信息在当前情境的呈现优先级的步骤。即,等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
可选地,所述处理模块902还用于:
在确定所述第一信息与至少一个预设场景的相关度之前，建立场景模型，所述场景模型用于确定所述第一信息与预设场景的相关度，所述场景模型包括以下至少3类特征：基本场景特征、信息类别特征、关键词特征。
可选地,所述处理模块902具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
可选地,所述处理模块902具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
可选地,所述接收模块901还用于:
在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
本实施例的装置,可以与图10或图11所示的信息呈现装置相配合,用于执行图2或图3所示方法实施例的技术方案,其实现原理类似,此处不再赘述。
本实施例,通过移动终端接收通信网络提供的第一信息后,确定所述第一信息与至少一个预设场景的相关度,若所述第一信息与第一预设场景的相关度大于第一预设值,则所述移动终端在确定用户处于所述第一预设场景时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
图10为本发明信息呈现装置实施例一的结构示意图,如图10所示,本实施例的装置1000可以包括:接收模块1001和呈现模块1002,其中,
接收模块1001，用于接收移动终端在确定在当前情境的呈现优先级大于第二预设值后发送的第一信息，所述第一信息包括以下任意一种：文本信息、图像信息、音频信息和视频信息；
呈现模块1002,用于向用户呈现所述第一信息。
本实施例的装置,可以与图8或图9所示的信息筛选装置相配合,用于执行图1、图2或图3所示方法实施例的技术方案以及图5所示方法实施例的技术方案,其实现原理类似,此处不再赘述。
本实施例，通过接收移动终端筛选后的第一信息，由于移动终端已确定所述第一信息与第一预设场景的相关度大于第一预设值，因此能够根据用户当前所处情境，只推送或呈现重要紧急或同用户当前情境强相关的信息，从而可以减少同当前情境无关的信息对用户的干扰，并且由于所呈现的信息是用户需要的，能够提高用户认真阅读所述信息的可能性，能够改善呈现效果。
图11为本发明信息呈现装置实施例二的结构示意图,如图11所示,本实施例的装置1100在图10所示装置的基础上,还可以包括:捕获模块1003和发送模块1004,其中,
可选地,捕获模块1003,可以用于在所述呈现模块1002向用户呈现所述第一信息之后,捕获阅读动作信息,所述阅读动作信息至少包括:是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息;发送模块1004,可以用于向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
可选地,所述接收模块1001,还用于接收所述移动终端发送的场景上下文信息请求;所述发送模块1004,还用于向所述移动终端发送场景上下文信息。
需要说明的是,上述两种可选方式互相独立,并无依赖关系。
可选地,所述呈现模块1002具体可以用于:
发出提示信息。
本实施例的装置,可以与图8或图9所示的信息筛选装置相配合,用于执行图1、图2或图3所示方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
图12为本发明信息呈现装置实施例三的结构示意图,本实施例的信息呈现装置1200能够单独完成信息的接收、筛选和呈现过程。如图12所示,本实施例的装置可以包括:接收模块1201、处理模块1202和呈现模块1203,其中,
接收模块1201,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块1202,用于确定所述第一信息与至少一个预设场景的相关度;当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在当前情境下的呈现优先级;
呈现模块1203,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
可选地,所述处理模块1202还用于:
在确定所述第一信息与至少一个预设场景的相关度之后，当所述第一信息与所有预设场景的相关度均小于第一预设值时，将所述第一信息作为垃圾信息。
可选地,所述处理模块1202具体用于:
获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
可选地,所述处理模块1202具体用于:
在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次执行确定所述第一信息在当前情境的呈现优先级的步骤。即,等待预设的时间后再次获取场景上下文信息;根据所述场景上下文信息计算当前情境与各个预设场景的相似度;根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
可选地,所述处理模块1202还可以用于:
在确定所述第一信息与至少一个预设场景的相关度之前，建立场景模型，所述场景模型用于确定所述第一信息与预设场景的相关度，所述场景模型包括以下至少3类特征：基本场景特征、信息类别特征、关键词特征。
可选地,所述处理模块1202具体用于:
解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
可选地,所述处理模块1202具体用于:
根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
可选地,所述呈现模块1203具体用于:
发出提示信息。
本实施例的装置,可以用于执行图6所示方法实施例的技术方案,其实现原理类似,此处不再赘述。
本实施例，通过在接收通信网络提供的第一信息后，确定所述第一信息与至少一个预设场景的相关度，只有当所述第一信息与预设场景的相关度大于或等于第一预设值、且在当前情境下的呈现优先级大于或等于第二预设值时，才向用户呈现所述第一信息，因此能够根据用户当前所处情境，只推送或呈现重要紧急或同用户当前情境强相关的信息，从而可以减少同当前情境无关的信息对用户的干扰，并且由于所呈现的信息是用户需要的，能够提高用户认真阅读所述信息的可能性，能够改善呈现效果。
图13为本发明信息呈现装置实施例四的结构示意图,如图13所示,本实施例的装置1300在图12所示装置的基础上,还可以包括:捕获模块1204,
所述捕获模块1204,可以用于当所述第一信息在当前情境的呈现优先级大于第二预设值时,向用户呈现所述第一信息之后,捕获用户的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
图14为本发明信息呈现装置实施例五的结构示意图，本实施例的信息呈现装置能够单独完成信息的接收、筛选和呈现过程。如图14所示，本实施例的装置1400可以包括：接收模块1401、处理模块1402和呈现模块1403，其中，
接收模块1401,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
处理模块1402,用于确定所述第一信息在当前情境下的呈现优先级;
呈现模块1403,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
可选地,所述处理模块1402具体可以用于:
获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
可选地,所述处理模块1402具体可以用于:
在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
本实施例的装置,可以用于执行图7所示方法实施例的技术方案,其实现原理类似,此处不再赘述。
本实施例,通过接收通信网络提供的第一信息后,确定所述第一信息在当前情境的呈现优先级,向用户呈现所述第一信息。即,能够根据用户当前所处情境,只推送或呈现重要紧急或同用户当前情境强相关的信息,因此,可以减少同当前情境无关的信息对用户的干扰,并且由于所呈现的信息是用户需要的,能够提高用户认真阅读所述信息的可能性,能够改善呈现效果。
本领域普通技术人员可以理解：实现上述各方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成。前述的程序可以存储于一计算机可读取存储介质中。该程序在执行时，执行包括上述各方法实施例的步骤；而前述的存储介质包括：ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上各实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述各实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (62)

  1. 一种信息呈现方法,其特征在于,包括:
    移动终端接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    所述移动终端确定所述第一信息在用户当前情境下的呈现优先级;
    当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
  2. 根据权利要求1所述的方法,其特征在于,所述移动终端确定所述第一信息在当前情境下的呈现优先级,包括:
    所述移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  3. 根据权利要求1或2所述的方法,其特征在于,在所述移动终端计算所述第一信息在当前情境的呈现优先级之后,还包括:
    当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  4. 根据权利要求1~3中任一项所述的方法,其特征在于,在所述移动终端确定所述第一信息在当前情境的呈现优先级之前,还包括:
    所述移动终端建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  5. 根据权利要求4所述的方法,其特征在于,所述移动终端确定所述第一信息与所述预设场景的相关度,包括:
    所述移动终端解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与所述预设场景的相关度。
  6. 根据权利要求4或5所述的方法,其特征在于,所述移动终端建立场景模型,包括:
    所述移动终端根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  7. 根据权利要求4~6中任一项所述的方法,其特征在于,在所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,还包括:
    所述移动终端接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
  8. 一种信息呈现方法,其特征在于,包括:
    移动终端接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    所述移动终端确定所述第一信息与至少一个预设场景的相关度;
    当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,所述移动终端确定所述第一信息在用户当前情境下的呈现优先级;
    当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
  9. 根据权利要求8所述的方法,其特征在于,在所述移动终端确定所述第一信息与至少一个预设场景的相关度之后,还包括:
    当所述第一信息与所有预设场景的相关度均小于第一预设值时，所述移动终端将所述第一信息作为垃圾信息。
  10. 根据权利要求8或9所述的方法,其特征在于,所述移动终端确定所述第一信息在当前情境的呈现优先级,包括:
    所述移动终端向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    所述移动终端根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    所述移动终端根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  11. 根据权利要求8~10中任一项所述的方法,其特征在于,在所述移动终端计算所述第一信息在当前情境下的呈现优先级之后,还包括:
    当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    所述移动终端根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    所述移动终端根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  12. 根据权利要求8~11中任一项所述的方法,其特征在于,在所述移动终端确定所述第一信息与至少一个预设场景的相关度之前,还包括:
    所述移动终端建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  13. 根据权利要求12所述的方法,其特征在于,所述移动终端确定所述第一信息与至少一个预设场景的相关度,包括:
    所述移动终端解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
  14. 根据权利要求12或13所述的方法,其特征在于,所述移动终端建立场景模型,包括:
    所述移动终端根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  15. 根据权利要求12~14中任一项所述的方法,其特征在于,在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,还包括:
    所述移动终端接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
  16. 一种信息呈现方法,其特征在于,包括:
    穿戴式设备接收移动终端在确定在当前情境的呈现优先级大于第二预设值后发送的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    所述穿戴式设备向用户呈现所述第一信息。
  17. 根据权利要求16所述的方法,其特征在于,在所述穿戴式设备向用户呈现所述第一信息之后,还包括:
    所述穿戴式设备捕获阅读动作信息,所述阅读动作信息至少包括:是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息;
    所述穿戴式设备向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
  18. 根据权利要求16或17所述的方法,其特征在于,还包括:
    所述穿戴式设备接收所述移动终端发送的场景上下文信息请求;
    所述穿戴式设备向所述移动终端发送场景上下文信息。
  19. 根据权利要求16~18中任一项所述的方法,其特征在于,所述穿戴式设备向用户呈现所述第一信息,包括:
    所述穿戴式设备发出提示信息。
  20. 一种信息呈现方法,其特征在于,包括:
    智能设备接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    所述智能设备确定所述第一信息与至少一个预设场景的相关度;
    当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,所述智能设备确定所述第一信息在当前情境下的呈现优先级;
    当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
  21. 根据权利要求20所述的方法,其特征在于,在所述智能设备确定所述第一信息与至少一个预设场景的相关度之后,还包括:
    当所述第一信息与所有预设场景的相关度均小于第一预设值时,所述智能设备将所述第一信息作为垃圾信息。
  22. 根据权利要求20或21所述的方法,其特征在于,所述智能设备确定所述第一信息在当前情境的呈现优先级,包括:
    所述智能设备获取场景上下文信息,所述场景上下文信息用于确定用户的当前情境;
    所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  23. 根据权利要求20~22中任一项所述的方法,其特征在于,在所述智能设备确定所述第一信息在当前情境的呈现优先级之后,还包括:
    当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
    所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  24. 根据权利要求20~23中任一项所述的方法，其特征在于，在所述智能设备确定所述第一信息与至少一个预设场景的相关度之前，还包括：
    所述智能设备建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  25. 根据权利要求24所述的方法,其特征在于,所述智能设备确定所述第一信息与至少一个预设场景的相关度,包括:
    所述智能设备解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
  26. 根据权利要求24或25所述的方法,其特征在于,所述智能设备建立场景模型,包括:
    所述智能设备根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  27. 根据权利要求24~26中任一项所述的方法,其特征在于,在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向用户呈现所述第一信息之后,还包括:
    所述智能设备捕获用户的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
  28. 根据权利要求20~27中任一项所述的方法,其特征在于,所述智能设备向用户呈现所述第一信息,包括:
    所述智能设备发出提示信息。
  29. 一种信息呈现方法,其特征在于,包括:
    智能设备接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    所述智能设备确定所述第一信息在当前情境下的呈现优先级;
    当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,所述智能设备向用户呈现所述第一信息。
  30. 根据权利要求29所述的方法，其特征在于，所述智能设备确定所述第一信息在当前情境下的呈现优先级，包括：
    所述智能设备获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    所述智能设备根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    所述智能设备根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  31. 根据权利要求29或30所述的方法,其特征在于,在所述智能设备确定所述第一信息在当前情境下的呈现优先级之后,还包括:
    当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
    所述智能设备根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    所述智能设备根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  32. 一种信息筛选装置,其特征在于,包括:
    接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    处理模块,用于确定所述第一信息在用户当前情境下的呈现优先级;
    发送模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
  33. 根据权利要求32所述的装置,其特征在于,所述处理模块具体用于:
    向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  34. 根据权利要求32或33所述的装置,其特征在于,所述处理模块,还用于:
    当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  35. 根据权利要求32~34中任一项所述的装置,其特征在于,所述处理模块还用于:
    建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  36. 根据权利要求35所述的装置,其特征在于,所述处理模块具体用于:
    解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与所述预设场景的相关度。
  37. 根据权利要求35或36所述的装置,其特征在于,所述处理模块具体用于:
    根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  38. 根据权利要求35~37中任一项所述的装置,其特征在于,所述接收模块还用于:
    在所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
  39. 一种信息筛选装置,其特征在于,包括:
    接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    处理模块,用于确定所述第一信息与至少一个预设场景的相关度;当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在用户当前情境下的呈现优先级;
    发送模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息。
  40. 根据权利要求39所述的装置,其特征在于,所述处理模块,还用于:
    在确定所述第一信息与至少一个预设场景的相关度之后,当所述第一信息与所有预设场景的相关度均小于第一预设值时,将所述第一信息作为垃圾信息。
  41. 根据权利要求39或40所述的装置,其特征在于,所述处理模块具体用于:
    向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  42. 根据权利要求39~41中任一项所述的装置,其特征在于,所述处理模块还用于:
    在确定所述第一信息在当前情境下的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次向所述穿戴式设备获取所述穿戴式设备的场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  43. 根据权利要求39~42中任一项所述的装置,其特征在于,所述处理模块还用于:
    在确定所述第一信息与至少一个预设场景的相关度之前,建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  44. 根据权利要求43所述的装置,其特征在于,所述处理模块具体用于:
    解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
  45. 根据权利要求43或44所述的装置,其特征在于,所述处理模块具体用于:
    根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  46. 根据权利要求43~45中任一项所述的装置,其特征在于,所述接收模块还用于:
    在所述当所述第一信息在当前情境的呈现优先级大于第二预设值时,向穿戴式设备发送所述第一信息,以使所述穿戴式设备向用户呈现所述第一信息之后,接收所述穿戴式设备发送的阅读动作信息,并根据所述阅读动作信息更新所述场景模型。
  47. 一种信息呈现装置,其特征在于,包括:
    接收模块，用于接收移动终端在确定在当前情境的呈现优先级大于第二预设值后发送的第一信息，所述第一信息包括以下任意一种：文本信息、图像信息、音频信息和视频信息；
    呈现模块,用于向用户呈现所述第一信息。
  48. 根据权利要求47所述的装置,其特征在于,还包括:
    捕获模块，用于在所述呈现模块向用户呈现所述第一信息之后，捕获阅读动作信息，所述阅读动作信息至少包括：是否删除所述第一信息、是否阅读所述第一信息、阅读所述第一信息的时长、是否转发所述第一信息；
    发送模块,用于向所述移动终端发送所述阅读动作信息,以使所述移动终端根据所述阅读动作信息更新所述场景模型。
  49. 根据权利要求48所述的装置,其特征在于:
    所述接收模块,还用于接收所述移动终端发送的场景上下文信息请求;
    所述发送模块,还用于向所述移动终端发送场景上下文信息。
  50. 根据权利要求47~49中任一项所述的装置,其特征在于,所述呈现模块具体用于:
    发出提示信息。
  51. 一种信息呈现装置,其特征在于,包括:
    接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    处理模块,用于确定所述第一信息与至少一个预设场景的相关度;
    当所述第一信息与至少一个所述预设场景的相关度大于或等于第一预设值时,确定所述第一信息在当前情境下的呈现优先级;
    呈现模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
  52. 根据权利要求51所述的装置,其特征在于,所述处理模块还用于:
    在确定所述第一信息与至少一个预设场景的相关度之后,当所述第一信息与所有预设场景的相关度均小于第一预设值时,将所述第一信息作为垃圾信息。
  53. 根据权利要求51或52所述的装置,其特征在于,所述处理模块具体用于:
    获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  54. 根据权利要求51~53中任一项所述的装置,其特征在于,所述处理模块具体用于:
    在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
    根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
  55. 根据权利要求51~54中任一项所述的装置,其特征在于,所述处理模块还用于:
    在确定所述第一信息与至少一个预设场景的相关度之前,建立场景模型,所述场景模型用于确定所述第一信息与预设场景的相关度,所述场景模型包括以下至少3类特征:基本场景特征、信息类别特征、关键词特征。
  56. 根据权利要求55所述的装置,其特征在于,所述处理模块具体用于:
    解析所述第一信息的特征,并根据所述场景模型计算所述第一信息与至少一个预设场景的相关度。
  57. 根据权利要求55或56中任一项所述的装置,其特征在于,所述处理模块具体用于:
    根据所述第一信息的历史浏览记录建立所述场景模型,所述第一信息的历史浏览记录包括:记录时间、接收所述第一信息时的基本场景特征、所述第一信息的信息类别特征、所述第一信息的关键词特征、以及所述用户的阅读动作信息,其中,不同记录时间所对应的历史浏览记录具有相同或不同的权重。
  58. 根据权利要求55~57中任一项所述的装置,其特征在于,还包括:
    捕获模块，用于当所述第一信息在当前情境下的呈现优先级大于第二预设值时，向用户呈现所述第一信息之后，捕获用户的阅读动作信息，并根据所述阅读动作信息更新所述场景模型。
  59. 根据权利要求51~57中任一项所述的装置,其特征在于,所述呈现模块具体用于:
    发出提示信息。
  60. 一种信息呈现装置,其特征在于,包括:
    接收模块,用于接收通信网络提供的第一信息,所述第一信息包括以下任意一种:文本信息、图像信息、音频信息和视频信息;
    处理模块,用于确定所述第一信息在当前情境下的呈现优先级;
    呈现模块,用于当所述第一信息在当前情境下的呈现优先级大于或等于第二预设值时,向用户呈现所述第一信息。
  61. 根据权利要求60所述的装置,其特征在于,所述处理模块具体用于:
    获取场景上下文信息,所述场景上下文信息用于确定所述用户的当前情境;
    根据所述场景上下文信息计算同当前情境匹配的预设场景,并计算所述第一信息与所述预设场景的相关度;
    根据所述相关度确定所述第一信息在当前情境下的呈现优先级。
  62. 根据权利要求60或61所述的装置,其特征在于,所述处理模块具体用于:
    在确定所述第一信息在当前情境的呈现优先级之后,当所述第一信息在当前情境下的呈现优先级小于所述第二预设值时,则等待预设的时间后再次获取场景上下文信息;
    根据所述场景上下文信息计算当前情境与各个预设场景的相似度;
    根据所述相似度和所述第一信息与预设场景的相关度计算所述第一信息的呈现优先级。
PCT/CN2014/088709 2014-05-07 2014-10-16 信息呈现方法和设备 WO2015169056A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/330,850 US10291767B2 (en) 2014-05-07 2016-11-07 Information presentation method and device
US16/404,449 US11153430B2 (en) 2014-05-07 2019-05-06 Information presentation method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410191534.7 2014-05-07
CN201410191534.7A CN103970861B (zh) 2014-05-07 2014-05-07 信息呈现方法和设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/330,850 Continuation US10291767B2 (en) 2014-05-07 2016-11-07 Information presentation method and device

Publications (1)

Publication Number Publication Date
WO2015169056A1 true WO2015169056A1 (zh) 2015-11-12

Family

ID=51240358

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/088709 WO2015169056A1 (zh) 2014-05-07 2014-10-16 信息呈现方法和设备

Country Status (3)

Country Link
US (2) US10291767B2 (zh)
CN (1) CN103970861B (zh)
WO (1) WO2015169056A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970861B (zh) * 2014-05-07 2017-11-17 华为技术有限公司 信息呈现方法和设备
CN104853043A (zh) * 2015-05-07 2015-08-19 腾讯科技(深圳)有限公司 通知消息的过滤和控制方法、智能手机及系统
CN105306672A (zh) * 2015-08-25 2016-02-03 小米科技有限责任公司 提醒操作的处理方法、装置和设备
CN106559449B (zh) * 2015-09-28 2019-12-17 百度在线网络技术(北京)有限公司 向智能可穿戴设备推送通知以及展示通知的方法和装置
US10685029B2 (en) * 2015-11-23 2020-06-16 Google Llc Information ranking based on properties of a computing device
CN107395689A (zh) * 2017-06-28 2017-11-24 湖南统科技有限公司 消防信息的分类推送方法及系统
CN107995095A (zh) * 2017-11-09 2018-05-04 用友网络科技股份有限公司 基于移动端勿扰模式下消息提醒的方法
CN108322594B (zh) * 2017-12-22 2020-12-18 广东源泉科技有限公司 一种终端控制方法、终端及计算机可读存储介质
CN110245143A (zh) * 2019-07-18 2019-09-17 王东 调香方法、扩香机、移动终端、云端服务器和电子设备
CN112910754A (zh) * 2020-05-07 2021-06-04 腾讯科技(深圳)有限公司 基于群组会话的消息处理方法、装置、设备及存储介质
CN112598377A (zh) * 2020-12-16 2021-04-02 长沙市到家悠享网络科技有限公司 事项提醒方法、系统、装置、设备及存储介质
CN114500442B (zh) * 2021-08-30 2023-03-03 荣耀终端有限公司 消息管理方法和电子设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103415039A (zh) * 2013-07-10 2013-11-27 上海新储集成电路有限公司 一种信息定制化的提醒系统及方法
CN103731253A (zh) * 2013-12-20 2014-04-16 上海华勤通讯技术有限公司 通信设备和与其配对的穿戴式设备的同步方法及通信系统
CN103970861A (zh) * 2014-05-07 2014-08-06 华为技术有限公司 信息呈现方法和设备

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7107539B2 (en) * 1998-12-18 2006-09-12 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US8701027B2 (en) * 2000-03-16 2014-04-15 Microsoft Corporation Scope user interface for displaying the priorities and properties of multiple informational items
US20070197195A1 (en) * 2005-01-13 2007-08-23 Keiji Sugiyama Information notification controller, information notification system, and program
GB0508468D0 (en) * 2005-04-26 2005-06-01 Ramakrishna Madhusudana Method and system providing data in dependence on keywords in electronic messages
WO2012037725A1 (en) * 2010-09-21 2012-03-29 Nokia Corporation Method and apparatus for collaborative context recognition
US8560678B2 (en) * 2010-12-22 2013-10-15 Facebook, Inc. Providing relevant notifications based on common interests between friends in a social networking system
JP5021821B2 (ja) * 2011-01-07 2012-09-12 株式会社エヌ・ティ・ティ・ドコモ 移動通信方法、移動管理ノード及びサービングゲートウェイ装置
US20130084805A1 (en) * 2011-10-04 2013-04-04 Research In Motion Limited Orientation Determination For A Mobile Device
US20130124327A1 (en) * 2011-11-11 2013-05-16 Jumptap, Inc. Identifying a same user of multiple communication devices based on web page visits
US9189252B2 (en) * 2011-12-30 2015-11-17 Microsoft Technology Licensing, Llc Context-based device action prediction
CN103248658B (zh) * 2012-02-10 2016-04-13 富士通株式会社 服务推荐装置、服务推荐方法和移动设备
CN103259825A (zh) * 2012-02-21 2013-08-21 腾讯科技(深圳)有限公司 消息推送方法和装置
US20170140392A9 (en) * 2012-02-24 2017-05-18 Strategic Communication Advisors, Llc System and method for assessing and ranking newsworthiness
US9047620B2 (en) * 2012-03-21 2015-06-02 Google Inc. Expected activity of a user
US9558507B2 (en) * 2012-06-11 2017-01-31 Retailmenot, Inc. Reminding users of offers
US20140046976A1 (en) * 2012-08-11 2014-02-13 Guangsheng Zhang Systems, methods, and user interface for effectively presenting information
US9015099B2 (en) * 2012-08-14 2015-04-21 Sri International Method, system and device for inferring a mobile user's current context and proactively providing assistance
US20150277572A1 (en) * 2012-10-24 2015-10-01 Intel Corporation Smart contextual display for a wearable device
US9152211B2 (en) * 2012-10-30 2015-10-06 Google Technology Holdings LLC Electronic device with enhanced notifications
CN103106259B (zh) 2013-01-25 2016-01-20 西北工业大学 一种基于情境的移动网页内容推荐方法
US8818341B2 (en) * 2013-01-25 2014-08-26 Google Inc. Wristwatch notification for late trains
US20140289259A1 (en) * 2013-03-20 2014-09-25 Microsoft Corporation Social Cue Based Electronic Communication Ranking
CN103186677A (zh) * 2013-04-15 2013-07-03 北京百纳威尔科技有限公司 信息显示方法及装置
CN103577544B (zh) * 2013-10-11 2017-07-07 北京百度网讯科技有限公司 一种用于提供待发送信息的方法及装置
US9344837B2 (en) * 2013-10-14 2016-05-17 Google Technology Holdings LLC Methods and devices for path-loss estimation
US9607319B2 (en) * 2013-12-30 2017-03-28 Adtile Technologies, Inc. Motion and gesture-based mobile advertising activation
US9880711B2 (en) * 2014-01-22 2018-01-30 Google Llc Adaptive alert duration
US9699214B2 (en) * 2014-02-10 2017-07-04 Sequitur Labs Inc. System for policy-managed content presentation
US10469428B2 (en) * 2014-02-21 2019-11-05 Samsung Electronics Co., Ltd. Apparatus and method for transmitting message
US20150286391A1 (en) * 2014-04-08 2015-10-08 Olio Devices, Inc. System and method for smart watch navigation
KR102232419B1 (ko) * 2014-08-26 2021-03-26 엘지전자 주식회사 웨어러블 디스플레이 디바이스 및 그 제어 방법
US10372774B2 (en) * 2014-08-29 2019-08-06 Microsoft Technology Licensing, Llc Anticipatory contextual notifications

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103415039A (zh) * 2013-07-10 2013-11-27 上海新储集成电路有限公司 一种信息定制化的提醒系统及方法
CN103731253A (zh) * 2013-12-20 2014-04-16 上海华勤通讯技术有限公司 通信设备和与其配对的穿戴式设备的同步方法及通信系统
CN103970861A (zh) * 2014-05-07 2014-08-06 华为技术有限公司 信息呈现方法和设备

Also Published As

Publication number Publication date
CN103970861A (zh) 2014-08-06
US20170064070A1 (en) 2017-03-02
US11153430B2 (en) 2021-10-19
CN103970861B (zh) 2017-11-17
US10291767B2 (en) 2019-05-14
US20190327357A1 (en) 2019-10-24

Similar Documents

Publication Publication Date Title
WO2015169056A1 (zh) 信息呈现方法和设备
US11088977B1 (en) Automated image processing and content curation
US11128582B2 (en) Emoji recommendation method and apparatus
CN109155136B (zh) 从视频自动检测和渲染精彩场面的计算机化系统和方法
US10783206B2 (en) Method and system for recommending text content, and storage medium
KR101959368B1 (ko) 사용자 디바이스의 활성 페르소나 결정
CN107797984B (zh) 智能交互方法、设备及存储介质
CN106708282B (zh) 一种推荐方法和装置、一种用于推荐的装置
JP5156879B1 (ja) 情報提示制御装置及び情報提示制御方法
US8972498B2 (en) Mobile-based realtime location-sensitive social event engine
US9754284B2 (en) System and method for event triggered search results
KR20200145861A (ko) 콘텐츠 모음 네비게이션 및 오토포워딩
CN108062390B (zh) 推荐用户的方法、装置和可读存储介质
US11297027B1 (en) Automated image processing and insight presentation
US20190139063A1 (en) Methodology of analyzing incidence and behavior of customer personas among users of digital environments
CN104462051A (zh) 分词方法及装置
US20150154287A1 (en) Method for providing recommend information for mobile terminal browser and system using the same
CN109451334B (zh) 用户画像生成处理方法、装置及电子设备
CN113626624B (zh) 一种资源识别方法和相关装置
CN111294620A (zh) 视频的推荐方法及装置
US10891303B2 (en) System and method for editing dynamically aggregated data
US20200403955A1 (en) Systems and methods to prioritize chat rooms using machine learning
CN111027495A (zh) 用于检测人体关键点的方法和装置
CN111625690A (zh) 一种对象推荐方法、装置、设备及介质
CN113254503B (zh) 一种内容挖掘方法、装置及相关产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14891342

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14891342

Country of ref document: EP

Kind code of ref document: A1