WO2020207297A1 - Information processing method, storage medium, and electronic device - Google Patents

Information processing method, storage medium, and electronic device

Info

Publication number
WO2020207297A1
WO2020207297A1 (PCT/CN2020/082465)
Authority
WO
WIPO (PCT)
Prior art keywords
information
matching degree
interest
application
user
Prior art date
Application number
PCT/CN2020/082465
Other languages
English (en)
French (fr)
Inventor
陈仲铭 (Chen Zhongming)
何明 (He Ming)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2020207297A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • This application relates to the field of electronic technology, in particular to an information processing method, storage medium and electronic equipment.
  • Users spend an increasing amount of time with smart mobile terminal devices.
  • Mobile terminals can assist children in many activities, such as learning and entertainment.
  • However, smart mobile terminals cannot recommend applications or content, or control the terminal's access permissions, based on the user's interest.
  • the embodiments of the present application provide an information processing method, storage medium, and electronic device, which can recommend interesting and appropriate content to the user according to the user's interest.
  • an embodiment of the present application provides an information processing method, and the information processing method includes:
  • the embodiments of the present application also provide a storage medium on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the steps of the information processing method described above.
  • an embodiment of the present application also provides an electronic device.
  • the electronic device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the computer program is used to process information.
  • the processor calls the computer program stored in the memory to execute the steps of the information processing method described above.
  • FIG. 1 is a schematic diagram of an application scenario of an information processing method provided by an embodiment of the application.
  • FIG. 2 is a schematic diagram of the first flow of an information processing method provided by an embodiment of this application.
  • FIG. 3 is a schematic diagram of the second flow of an information processing method provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of a third flow of an information processing method provided by an embodiment of this application.
  • FIG. 5 is a diagram of another application scenario of the information processing method provided by an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of an information processing apparatus provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of the first structure of an electronic device provided by an embodiment of this application.
  • FIG. 8 is a schematic diagram of a second structure of an electronic device provided by an embodiment of this application.
  • FIG. 1 is a schematic diagram of an application scenario of an information processing method provided by an embodiment of the application.
  • the information processing method is applied to electronic equipment.
  • the electronic device may be a smart phone, a tablet computer, a game device, an augmented reality (AR) device, a car, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
  • the electronic device is provided with a panoramic sensing architecture.
  • the panoramic perception architecture is the integration of hardware and software used to implement the information processing method in an electronic device.
  • the panoramic perception architecture includes an information perception layer, a data processing layer, a feature extraction layer, a scenario modeling layer, and an intelligent service layer.
  • the information perception layer is used to obtain the information of the electronic device itself or the information in the external environment.
  • the information perception layer may include multiple sensors.
  • the information perception layer includes multiple sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, a posture sensor, a barometer, and a heart rate sensor.
  • the distance sensor can be used to detect the distance between the electronic device and an external object.
  • the magnetic field sensor can be used to detect the magnetic field information of the environment in which the electronic device is located.
  • the light sensor can be used to detect the light information of the environment in which the electronic device is located.
  • the acceleration sensor can be used to detect the acceleration data of the electronic device.
  • the fingerprint sensor can be used to collect the user's fingerprint information.
  • The Hall sensor is a magnetic field sensor based on the Hall effect, and can be used to implement automatic control of the electronic device.
  • the location sensor can be used to detect the current geographic location of the electronic device. The gyroscope can be used to detect the angular velocity of the electronic device in various directions. The inertial sensor can be used to detect the movement data of the electronic device.
  • the attitude sensor can be used to sense the attitude information of the electronic device.
  • the barometer can be used to detect the air pressure of the environment where the electronic device is located.
  • the heart rate sensor can be used to detect the user's heart rate information.
  • the data processing layer is used to process the data obtained by the information perception layer.
  • the data processing layer can perform data cleaning, data integration, data transformation, and data reduction on the data acquired by the information perception layer.
  • data cleaning refers to cleaning up a large amount of data obtained by the information perception layer to eliminate invalid data and duplicate data.
  • Data integration refers to the integration of multiple single-dimensional data acquired by the information perception layer into a higher or more abstract dimension to comprehensively process multiple single-dimensional data.
  • Data transformation refers to the data type conversion or format conversion of the data acquired by the information perception layer, so that the transformed data meets the processing requirements.
  • Data reduction means to minimize the amount of data while maintaining the original appearance of the data as much as possible.
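The cleaning, transformation, and reduction steps described above can be sketched as follows; this is a minimal illustration, and the record fields and step size are assumptions invented for the sketch, not part of the application:

```python
# Hypothetical sketch of the data processing layer:
#   cleaning    - drop invalid (None-valued) and duplicate records
#   transform   - convert raw string readings to numeric types
#   reduction   - shrink the data volume while keeping its overall shape

def clean(records):
    """Remove invalid and duplicate records, keeping original order."""
    seen = set()
    out = []
    for r in records:
        if r is None or any(v is None for v in r.values()):
            continue  # invalid data
        key = tuple(sorted(r.items()))
        if key in seen:
            continue  # duplicate data
        seen.add(key)
        out.append(r)
    return out

def transform(records):
    """Convert raw string readings to floats so later stages can compute."""
    return [{k: float(v) for k, v in r.items()} for r in records]

def reduce_data(records, step=2):
    """Keep every `step`-th record to reduce the amount of data."""
    return records[::step]

raw = [
    {"lux": "120", "accel": "0.1"},
    {"lux": "120", "accel": "0.1"},   # duplicate
    {"lux": None, "accel": "0.3"},    # invalid
    {"lux": "80", "accel": "0.2"},
]
processed = reduce_data(transform(clean(raw)), step=1)
```

After cleaning, only the two valid, distinct records remain, now with numeric fields.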
  • the feature extraction layer is used to perform feature extraction on data processed by the data processing layer to extract features included in the data.
  • the extracted features can reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located.
  • the feature extraction layer can extract features, or process the extracted features, through methods such as filter, wrapper, and integration methods.
  • The filter method refers to filtering the extracted features to delete redundant feature data.
  • The wrapper method is used to screen the extracted features.
  • The integration method refers to combining multiple feature extraction methods to construct a more efficient and accurate feature extraction method.
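As a minimal illustration of a filter-style method (deleting redundant, near-constant features), here is a sketch that assumes simple variance as the filtering criterion; the feature names and threshold are invented for this example:

```python
# Filter-method feature selection sketch: drop features whose variance
# falls below a threshold (i.e., redundant, near-constant features).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def filter_features(columns, threshold=1e-3):
    """columns: dict mapping feature name -> list of values.
    Keep only features whose variance exceeds the threshold."""
    return {name: vals for name, vals in columns.items()
            if variance(vals) > threshold}

features = {
    "light": [0.1, 0.9, 0.5, 0.7],   # informative, varies across samples
    "const": [1.0, 1.0, 1.0, 1.0],   # redundant, zero variance
}
kept = filter_features(features)
```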
  • the scenario modeling layer is used to construct a model based on the features extracted by the feature extraction layer, and the obtained model can be used to represent the state of the electronic device or the state of the user or the environment.
  • the scenario modeling layer can construct key value models, pattern identification models, graph models, entity connection models, object-oriented models, etc. based on the features extracted by the feature extraction layer.
  • the intelligent service layer is used to provide users with intelligent services based on the model constructed by the scenario modeling layer.
  • the intelligent service layer can provide users with basic application services, can perform system intelligent optimization for electronic devices, and can also provide users with personalized intelligent services.
  • the panoramic perception architecture may also include multiple algorithms, each of which can be used to analyze and process data, and the multiple algorithms can form an algorithm library.
  • the algorithm library may include the Markov algorithm, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, recurrent neural networks, and other algorithms.
  • the embodiment of the present application provides an information processing method, which includes:
  • the obtaining the emotional information of the user, and obtaining the user's interest in the application information according to the emotional information includes:
  • the interest degree is obtained according to the facial feature information and/or the audio feature information.
  • the obtaining the emotional information of the user, and obtaining the user's interest in the application information according to the emotional information includes:
  • the emotional information includes facial image information, audio information, screen touch information, and edge pressure information
  • the interest degree is obtained according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
  • the performing feature extraction on the face image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain the face feature information, audio feature information, screen touch feature information, and edge pressure feature information, includes:
  • obtaining the face position information from the face image information through a preset image algorithm includes:
  • the face position information is obtained through the support vector machine algorithm and the bounding box regression algorithm.
  • obtaining the face position information from the face image information through a preset image algorithm includes:
  • the face position information is obtained by applying the convolutional neural network algorithm model to the face image information.
  • the first audio algorithm is one of a fast Fourier transform algorithm and a Mel frequency cepstrum coefficient algorithm
  • the second audio algorithm is one of a recurrent neural network model algorithm or a time sequence analysis algorithm.
  • obtaining the degree of interest according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information includes:
  • the secondary interest level is used to adjust the primary interest level to obtain the adjusted interest level.
  • the obtaining application information currently viewed by the user, and obtaining the matching degree between the user and the application information according to the application information includes:
  • the screen image information is analyzed to obtain the first degree of matching
  • the interest matching degree is set as the second interest matching degree.
  • analyzing the screen picture information to obtain the first degree of matching includes:
  • the screen image is analyzed using a convolutional neural network to obtain the first matching degree
  • the interest matching degree is set to a third interest matching degree, and the third interest matching degree is greater than the second interest matching degree;
  • the calculating the interest matching degree according to the interest degree and the matching degree includes:
  • the interest degree and the first matching degree are multiplied to obtain the first interest matching degree.
  • the obtaining application information currently viewed by the user, and obtaining the matching degree between the user and the application information according to the application information includes:
  • the screen picture information and the application audio information are analyzed to obtain the second matching degree and the third matching degree;
  • the interest matching degree is set as the second interest matching degree.
  • analyzing the screen picture information and the application audio information to obtain the second matching degree and the third matching degree includes:
  • the interest matching degree is set to the third interest matching degree, and the third interest matching degree is greater than the second interest matching degree;
  • the calculating the interest matching degree according to the interest degree and the matching degree includes:
  • the interest degree is multiplied by the second matching degree and the third matching degree to obtain the first interest matching degree.
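A minimal sketch of the claimed combination, assuming the degrees are values in [0, 1] (the value range is an assumption, not stated in the claims): the first interest matching degree is simply the product of the interest degree and the matching degree(s).

```python
# Sketch: first interest matching degree = interest degree x matching degree(s).

def interest_matching(interest, *matching_degrees):
    result = interest
    for m in matching_degrees:
        result *= m
    return result

# Single matching degree (screen picture only).
im1 = interest_matching(0.8, 0.9)
# Screen picture plus application audio (second and third matching degrees).
im2 = interest_matching(0.8, 0.9, 0.5)
```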
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored, where the program is executed by a processor to implement the information processing method as described in any of the foregoing embodiments.
  • An embodiment of the present application also provides an electronic device, which includes a processor and a memory, the memory stores a computer program, the processor is connected to the memory, and the computer program is used to process information. By calling the computer program stored in the memory, the processor executes:
  • In obtaining the emotional information of the user, and obtaining the user's interest in the application information according to the emotional information, the processor further executes:
  • the interest degree is obtained according to the facial feature information and/or the audio feature information.
  • In obtaining the emotional information of the user, and obtaining the user's interest in the application information according to the emotional information, the processor further executes:
  • the emotional information includes facial image information, audio information, screen touch information, and edge pressure information
  • the interest degree is obtained according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
  • In performing feature extraction on the face image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain the face feature information, audio feature information, screen touch feature information, and edge pressure feature information,
  • the processor also executes:
  • the processor further executes:
  • the secondary interest level is used to adjust the primary interest level to obtain the adjusted interest level.
  • In obtaining the application information currently viewed by the user, and obtaining the matching degree between the user and the application information according to the application information, the processor further executes:
  • the screen image information is analyzed to obtain the first degree of matching
  • the interest matching degree is set as the second interest matching degree.
  • the processor executes:
  • the screen image is analyzed using a convolutional neural network to obtain the first matching degree
  • the interest matching degree is set to a third interest matching degree, and the third interest matching degree is greater than the second interest matching degree;
  • the calculating the interest matching degree according to the interest degree and the matching degree includes:
  • the interest degree and the first matching degree are multiplied to obtain the first interest matching degree.
  • FIG. 2 is a schematic flowchart of the first information processing method provided by an embodiment of the application. The information processing method includes the following steps:
  • the application information currently viewed by the user can be text information, image information, audio information, video information, etc.; for example, the user opens the gallery application and browses the pictures in the gallery, or the user opens the music player and plays music, or the user opens the video player and plays a video.
  • the matching degree is obtained according to the application information currently viewed by the user, and the matching degree is used to determine whether the content currently viewed by the user is suitable for the user.
  • the content of an application suitable for children to browse should be content that is beneficial to the development of children, and the content of an application that is not suitable for children to browse is content that is not beneficial to the physical and mental development of children.
  • the matching degree is obtained from the browsed content.
  • the matching degree can be a numerical value or a level of information, which is used to determine whether the application information currently browsed by the user matches the user.
  • the user emotion information may include facial image information, which may be obtained through a camera module.
  • the facial image information is the facial image information of the user using the terminal application, and is used to determine the user's interest in the content of the terminal application.
  • the facial feature information is obtained.
  • the facial feature information can be a numerical value or a level of information.
  • the numerical value or level represents the user's degree of interest in the currently browsed content. For example, if the user is smiling when browsing the first type of content of the application, and laughing when browsing the second type of content, it can be judged that the user is more interested in the second type of content than the first.
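As a toy illustration of turning a recognized facial expression into a numeric interest value or grade, the following sketch maps expressions to interest levels; the expression set and the numbers are invented for this example, not specified in the application:

```python
# Hypothetical mapping from a recognized facial expression to an interest
# value: stronger positive expressions map to higher interest.

EXPRESSION_INTEREST = {"neutral": 0.2, "smile": 0.6, "laugh": 0.9}

def interest_from_expression(expr):
    """Return the interest value for an expression; unknown -> 0.0."""
    return EXPRESSION_INTEREST.get(expr, 0.0)
```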
  • the interest degree can be multiplied by the matching degree to obtain the interest matching degree.
  • The higher the interest matching degree, the more it proves that the application content currently browsed by the user is of interest to the user and suitable for the user.
  • Conversely, a lower interest matching degree indicates that the content is application content that the user is not interested in or that is inappropriate for the user. The first interest matching degree may be a numerical value or level information.
  • the preset interest matching degree can be a numerical value. When the calculated interest matching degree is greater than the preset interest matching degree, it proves that the interest matching degree is high.
  • the application content viewed by the user is of interest to the user and matches the user.
  • the user will be pushed content associated with the content of the application. For example, when a child is browsing a gallery picture, it is determined through emotion collection that the child is interested in a picture of an animal, and animal pictures are beneficial to children's physical and mental development, so in the subsequent use of the terminal, more animal-related information is pushed to the user.
  • If the first interest matching degree is not greater than the preset interest matching degree, pushing of content associated with the application information is reduced, or obtaining content associated with the application information is restricted.
  • the preset interest matching degree can be a numerical value. When the calculated interest matching degree is not greater than the preset interest matching degree, it proves that the interest matching degree is low. When the interest matching degree is 0, it proves that the content is completely unsuitable for the user. For example, when a child is browsing content that is harmful to children's physical and mental development, the interest matching degree can be directly assigned 0, which restricts the child from browsing content that is not suitable for children and from being pushed content associated with that content.
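The push/restrict decision described above can be sketched as a simple threshold comparison; the threshold value and the action names are illustrative assumptions, not terms from the application:

```python
# Sketch of the decision rule: compare the computed interest matching
# degree against a preset threshold; 0 means fully restricted content.

def decide(interest_matching, preset=0.5):
    if interest_matching == 0:
        return "restrict"        # content completely unsuitable for the user
    if interest_matching > preset:
        return "push_related"    # push content associated with the app
    return "reduce_push"         # reduce or limit associated content
```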
  • the application information currently viewed by the user and the user’s emotional information can be obtained through the information perception layer in the above embodiments.
  • the information perception layer can obtain information about the electronic device itself or information in the external environment, and the application information currently viewed by the user can be text information, image information, audio information, video information, etc.
  • the emotion information can include facial image information.
  • the data processing layer can process the data obtained by the information perception layer, and can perform data cleaning, data integration, data transformation, and data reduction on that data. Through the feature extraction layer, feature extraction is performed on the data processed by the data processing layer, to extract the features of the application information currently viewed by the user and of the user's emotional information.
  • Features such as the restriction level or type of the application information can be extracted from the application information being viewed, and emotion-level features can be extracted from the user's emotional information. The features extracted by the feature extraction layer can then be processed by a variety of algorithms, and a model can be established in the scenario modeling layer from these features to obtain the matching degree between the user and the application information and the user's interest in the application information.
  • The interest matching degree is calculated from the matching degree between the user and the application information and the user's interest in the application information.
  • The interest matching degree is used to guide whether to push information related to the application to the user in the subsequent use of the electronic device.
  • the results obtained through the scenario modeling layer can be used to provide the user with intelligent services. For example, when the interest matching degree is greater than the preset interest matching degree, content associated with the application information is pushed; when the interest matching degree is not greater than the preset interest matching degree, pushing of content associated with the application information is reduced, or access to content associated with the application information is restricted.
  • this embodiment does not limit the execution order of the steps corresponding to 101 and the steps corresponding to 102.
  • FIG. 3 is a schematic flowchart of a second type of information processing method provided by an embodiment of this application.
  • obtaining user emotion information, and obtaining the interest degree according to the user emotion information may specifically include:
  • the emotional information includes facial image information, audio information, screen touch information, and edge pressure information.
  • User emotional information can be obtained through various sensors: facial image information, audio information, screen touch information, and edge pressure information can be obtained through the camera module, microphone, screen touch sensor, and edge pressure sensor, respectively.
  • the face image information can be processed by a preset image algorithm.
  • the preset image algorithm can be a support vector machine (SVM), a bounding box algorithm, or a convolutional neural network algorithm model; for example, the face position information can be obtained using the Support Vector Machine (SVM) algorithm plus a bounding box algorithm, or through a Convolutional Neural Network (CNN).
  • The face position information includes face feature point information; the hard clustering algorithm K-means is used to classify the pixel feature points of the face position to obtain the facial feature information.
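A toy version of the K-means hard-clustering step, applied here to 1-D pixel feature values as a stand-in for clustering face feature points; the real face-detection stage (SVM plus bounding box, or CNN) is omitted, and the data and initial centers are invented:

```python
# Minimal 1-D K-means: assign each point to its nearest center, then
# recompute each center as the mean of its assigned points, and repeat.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # New center = mean of its cluster (keep old center if empty).
        centers = [sum(v) / len(v) if v else c for c, v in clusters.items()]
    return sorted(centers)

pixels = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]
centers = kmeans_1d(pixels, [0.0, 1.0])
```

The two centers converge to the means of the two groups of feature values.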
  • the face feature information can be a numerical value, which corresponds to the user’s mood index.
  • the facial feature information may also be level information, which corresponds to the mood level of the user.
  • the audio information can be processed by the first audio algorithm and the second audio algorithm.
  • the first audio algorithm can be the Fast Fourier Transform (FFT) or the Mel Frequency Cepstrum Coefficient (MFCC) algorithm.
  • the second audio algorithm may be a recurrent neural network model algorithm or a time sequence analysis algorithm; for example, the Fast Fourier Transform (FFT) or Mel Frequency Cepstrum Coefficient (MFCC) algorithm is applied to the audio information first, and the result is then processed to obtain the audio feature information.
  • You can also obtain audio feature information through a time sequence analysis algorithm.
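As a hedged illustration of the first-stage spectral analysis, here is a naive DFT used as a stand-in for the FFT/MFCC step (a real pipeline would use an optimized FFT library); the signal and the dominant-bin feature are invented for this sketch:

```python
import math, cmath

# Naive DFT: magnitude spectrum of a signal. The index of the dominant
# frequency bin serves as a toy audio feature.

def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# An 8-sample sine wave concentrated in frequency bin 1 (and its mirror).
sig = [math.sin(2 * math.pi * t / 8) for t in range(8)]
mags = dft_magnitudes(sig)
dominant_bin = max(range(len(mags)), key=lambda k: mags[k])
```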
  • the audio feature information can be a numerical value corresponding to the user’s mood index.
  • the audio feature information may also be level information, which corresponds to the mood level of the user.
  • each type of emotional information is processed by a corresponding method to obtain the corresponding feature information.
  • the audio feature information, screen touch feature information, and edge pressure feature information are used as secondary emotional feature information to adjust the interest index.
  • the audio feature information is used as the interest degree indicator
  • the interest degree indicator is the main interest degree
  • the screen touch feature information and the edge pressure feature information are used as the secondary emotion feature information.
  • the facial feature information is used as the main interest degree
  • the audio feature information is used as the secondary emotion feature information.
  • the existing screen touch feature information or edge pressure feature information is used as the secondary emotion feature information.
  • the audio feature information, screen touch feature information, and edge pressure feature information can each be multiplied by an interest degree coefficient; the coefficients can be customized, for example by human experts.
  • the calculated values are added to the interest degree indicator to adjust the interest degree, and the adjusted main interest degree is used as the interest degree. If facial feature information alone were used as the interest degree, the obtained interest degree might be inaccurate.
  • therefore, the interest degree indicator can be adjusted through the audio feature information, screen touch feature information, and edge pressure feature information to improve the accuracy of the interest degree.
  • the aforementioned method can also be used to adjust the interest index.
  • the screen feature information and the edge pressure feature information are respectively multiplied by the interest coefficient.
  • the coefficient can be customized, and can be customized by human experts.
  • the calculated values are added to the interest degree indicator to adjust the interest degree, where in this case the interest degree indicator is obtained from the audio feature information.
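The adjustment described above, where secondary feature values scaled by expert-defined coefficients are added to the main interest degree, can be sketched as follows; the coefficient and feature values are invented for this example:

```python
# Sketch: adjusted interest = main interest (e.g., from facial features)
# + sum(secondary feature value x its expert-defined coefficient).

def adjust_interest(main_interest, secondary, coefficients):
    """secondary and coefficients: dicts keyed by feature name."""
    adjustment = sum(secondary[k] * coefficients[k] for k in secondary)
    return main_interest + adjustment

main = 0.6  # main interest degree from facial feature information
secondary = {"audio": 0.5, "touch": 0.2, "pressure": 0.1}
coeff = {"audio": 0.2, "touch": 0.1, "pressure": 0.1}
adjusted = adjust_interest(main, secondary, coeff)
```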
  • FIG. 4 is a schematic flowchart of a third information processing method provided by an embodiment of this application.
  • the application type label information may be age restriction grade labels, content restriction labels, and so on.
  • the user's viewing range can be customized by experts or by the user. For example, when the user is a 6-year-old child, the preset application type label range is 0-6 years old; if the age restriction label of the application currently being viewed is 4 years old, the type label is within the preset application type label range, and the screen picture will be analyzed.
  • the screen picture can be image information or text information; a convolutional neural network is used to analyze the image information or text information to obtain the output first matching degree.
  • the matching degree can be a value or level information. The higher the level or value, the higher the matching degree, and the lower the level or value, the lower the matching degree.
  • the user's viewing range can be customized by experts or by the user. For example, when the user is a 6-year-old child and the preset application type label range is 0-6 years old, if the age restriction label of the current application is 18 years old, the type label is not within the preset application type label range and does not meet the viewing range; likewise, if the content restriction label is a bloody-violence label, it does not meet the viewing range. When the viewing range is not met, the interest matching degree is directly set as the second interest matching degree.
  • the second interest matching degree can be a small value such as 0 or 0.1 or 0.2. This value is used to restrict users from obtaining content associated with the application information, for example, when the user When you want to open the application again, the application is automatically closed, so that the user cannot open the application or get information related to the application.
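The age-label gating described in these bullets can be sketched as follows; the function name, the 0-6 default range, and the choice of 0.0 as the second interest matching degree are illustrative assumptions rather than values fixed by the embodiment:

```python
def gate_by_type_label(label_age, preset_range=(0, 6), second_interest_match=0.0):
    """Check an application's age-restriction label against the preset range.

    Returns (in_range, interest_match): when the label is in range, analysis
    (e.g. CNN on the screen image) proceeds and no interest matching degree
    is assigned yet; otherwise the second interest matching degree is
    assigned directly.
    """
    low, high = preset_range
    if low <= label_age <= high:
        return True, None
    return False, second_interest_match
```

For a 6-year-old user, a label of 4 passes the gate, while a label of 18 is rejected and receives the small second interest matching degree directly.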
  • The preset matching degree can be formulated by experts or obtained through system learning.
  • The preset matching degree can be obtained by learning from historical matching degrees, with its value or level adjusted continuously to make the judgment of the matching degree more accurate.
  • When the first matching degree is not less than the preset matching degree, the first matching degree is used in the interest matching degree calculation, and step 307 is executed.
  • The interest matching degree is set to the third interest matching degree.
  • The third interest matching degree is an intermediate value.
  • When the interest matching degree ranges from 0 to 1, the third interest matching degree can be set to 0.5.
  • When the interest matching degree ranges from 0 to 100, the third interest matching degree can be set to 50.
  • When the range of the interest matching degree differs, the value of the third interest matching degree differs accordingly.
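As a sketch, the third interest matching degree can be derived as the midpoint of whatever range the interest matching degree uses; treating it as exactly the midpoint is an assumption consistent with the 0.5 and 50 examples above:

```python
def third_interest_matching_degree(range_min, range_max):
    # Midpoint of the interest-matching range: 0.5 for [0, 1], 50 for [0, 100].
    return (range_min + range_max) / 2
```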
  • The preset interest matching degree can be formulated by experts or obtained through system learning.
  • The preset interest matching degree can be obtained by learning from historical interest matching degrees, with its value adjusted continuously to make the judgment of the interest matching degree more accurate; the third interest matching degree can also be set to the preset interest matching degree.
  • When the interest matching degree is greater than the preset interest matching degree, content associated with the currently browsed application information will be pushed to the user during subsequent use of the terminal.
  • Otherwise, the pushing of content associated with the application information is reduced, or the user is restricted from acquiring content associated with the application information.
  • The application information currently viewed by the user is obtained, including application type label information. When the application type label is within the preset application type label range, the screen image information and the application audio information are each analyzed with a convolutional neural network to obtain the corresponding second matching degree and third matching degree. When one of the second matching degree and the third matching degree is less than the preset matching degree, the interest matching degree is set to the third interest matching degree, which is greater than the second interest matching degree. When neither the second matching degree nor the third matching degree is less than the preset matching degree, both are used in the calculation of the interest matching degree: the degree of interest is multiplied by the second matching degree and the third matching degree to obtain the first interest matching degree. When the application type label is not within the preset application type label range, the interest matching degree is set to the second interest matching degree.
  • When the current application includes not only screen image information but also application audio information, or when the current application is video information (a combination of screen image information and application audio information), convolutional neural network analysis is performed on the screen image information and the application audio information to obtain the corresponding second matching degree and third matching degree, respectively.
  • When one of the two matching degrees is less than the preset matching degree, the interest matching degree is assigned a value of 0.5.
  • When neither the second matching degree nor the third matching degree is less than the preset matching degree, they are used as the second information matching degree, and the degree of interest is multiplied by the second matching degree and the third matching degree to obtain the interest matching degree.
  • When the application type label is not within the preset range, the interest matching degree is assigned a value of 0.
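The two-modality branch above can be condensed into a small sketch; the 0.6 preset matching degree and the fallback values are illustrative placeholders, not values taken from the embodiment:

```python
def interest_matching_degree(interest, second_match, third_match,
                             preset_match=0.6, third_interest_match=0.5):
    # If either the screen-image (second) or audio (third) matching degree
    # falls below the preset matching degree, fall back to the intermediate
    # third interest matching degree; otherwise multiply all three.
    if second_match < preset_match or third_match < preset_match:
        return third_interest_match
    return interest * second_match * third_match
```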
  • FIG. 5 is a diagram of another application scenario of the information processing method provided by an embodiment of the application.
  • the user’s emotion information can be obtained through various sensors.
  • the sensors can be camera modules, microphones, screen touch sensors, and edge pressure sensors.
  • Facial image information can be collected through the camera module. Face position information can be obtained with a support vector machine (SVM) combined with a bounding-box regression algorithm, or through convolutional neural networks (CNN).
  • Face position information includes facial feature point information. The hard clustering algorithm K-means classifies the emotions of the pixel feature points at the face position to obtain the facial feature information.
  • The facial feature information can be a value corresponding to the user's mood index, or it can be level information corresponding to the user's mood level.
  • In short, the user's facial feature information can be derived from the facial image information captured while the user browses the application content.
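A minimal one-dimensional K-means, shown here only to illustrate the hard-clustering step; a real implementation would cluster multi-dimensional pixel feature points around the detected face, and the scalar values below are invented for illustration:

```python
def kmeans_1d(values, k=2, iters=20):
    # Tiny 1-D K-means: partition scalar feature values into k clusters and
    # return the cluster centers. Initial centers are spread across the
    # sorted values.
    srt = sorted(values)
    step = max(1, len(srt) // k)
    centers = srt[::step][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```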
  • The sound made by the user while browsing the application content can be collected through the microphone. The audio information obtained through the microphone is converted into a spectrogram using the Fast Fourier Transform (FFT) or the Mel Frequency Cepstrum Coefficient (MFCC) algorithm, and the spectrogram is analyzed with a recurrent neural network model to obtain the audio feature information.
  • The audio feature information can also be obtained through a time-series analysis algorithm.
  • The audio feature information can also be level information corresponding to the user's mood level. For example, when the user browses application content of interest, he may utter interjections or laugh; the corresponding audio signal is received through the microphone and processed to obtain the corresponding audio feature information.
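The spectrogram step rests on the discrete Fourier transform; a naive stdlib-only DFT is sketched below as an assumption-light stand-in (real code would use an FFT library, and an MFCC pipeline would add mel filtering and a discrete cosine transform):

```python
import cmath

def dft_magnitudes(frame):
    # Magnitude spectrum of one audio frame via the naive O(n^2) DFT.
    # Stacking these spectra over successive frames yields a spectrogram.
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(frame)))
            for k in range(n)]
```

A pure tone concentrates its energy in a single frequency bin, which is the kind of spectrogram column the downstream recurrent network consumes.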
  • The screen touch sensor can collect the frequency with which the user taps the screen while browsing the application content, the time interval between taps, and the pressure of each tap.
  • The screen touch information collected by the sensor is modeled with a random forest classifier or a Bayesian classifier to obtain the screen feature information.
  • The screen feature information can be a numerical value.
  • The edge pressure sensor collects the pressure on the edge of the mobile terminal while the user browses the application content, and a random forest classifier or Bayesian classifier models the collected edge pressure information to obtain the edge pressure feature information.
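As a stand-in for the random forest or Bayesian classifier, the sketch below fits a one-dimensional Gaussian naive Bayes model to tap-frequency samples; the class labels and sample values are invented for illustration only:

```python
import math

def fit_gaussian(samples):
    # Per-class mean and variance of a scalar feature (e.g. taps per second).
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples) or 1e-9
    return mean, var

def classify(models, x):
    # Pick the class whose Gaussian assigns x the highest log-density.
    def log_pdf(x, mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    return max(models, key=lambda label: log_pdf(x, *models[label]))
```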
  • Facial feature information, audio feature information, screen touch feature information, and edge pressure feature information are collected through the above sensors, and the four kinds of feature information are fused to obtain the degree of interest. Specifically, the facial feature information obtained from the facial image information collected by the camera module is used as the interest index, and the index is adjusted according to the audio feature information, screen touch feature information, and edge pressure feature information.
  • The audio feature information, screen touch feature information, and edge pressure feature information are each multiplied by an interest coefficient. The coefficient can be customized, for example by human experts. The calculated values are added to the interest index to adjust the degree of interest.
  • Using facial feature information alone as the degree of interest may be inaccurate; adjusting the interest index with the audio information, screen feature information, and edge pressure feature information improves the accuracy of the degree of interest.
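The fusion described here, with the facial feature value as the base interest index plus coefficient-weighted contributions from the other modalities, can be sketched as below; the coefficient values are arbitrary stand-ins for the expert-customized ones:

```python
def fuse_interest(face, audio=None, touch=None, pressure=None,
                  coeffs=(0.3, 0.2, 0.1)):
    # Facial feature information is the interest index; each remaining
    # modality, when present, is scaled by its interest coefficient and
    # added to the index.
    index = face
    for feature, coeff in zip((audio, touch, pressure), coeffs):
        if feature is not None:
            index += coeff * feature
    return index
```

Passing `None` for a missing modality mirrors the fallback case in which, for example, no facial or audio signal was captured.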
  • The application information currently viewed can be text information, image information, audio information, video information, and so on. For example, the user opens the gallery application and browses pictures, opens the music player and plays music, or opens the video player and plays a video. A matching degree is obtained according to the application information currently viewed by the user and is used to determine whether the currently browsed content is suitable for the user.
  • The application type label information can be an age-restriction grade label, a content-restriction label, and so on, and is used to determine whether the label meets the user's viewing range.
  • The user's viewing range can be customized by experts or by users.
  • When the application information does not meet the viewing range, the matching degree is directly assigned a value of 0 and the user is restricted from obtaining content associated with the application information; for example, child users are prohibited from opening the application. When the application information meets the viewing range, the image information or audio information of the currently viewed application content is obtained and analyzed with a convolutional neural network to obtain the first matching degree.
  • The matching degree can be a numerical value or level information.
  • When the first matching degree indicates that the content is not suitable for the user, but the application type label is within the preset type label range, the interest matching degree is assigned an intermediate value.
  • When the interest matching degree ranges from 0 to 1, the interest matching degree may be assigned a value of 0.5.
  • When the first matching degree is greater than the preset matching degree, the interest matching degree is calculated from the first matching degree and the degree of interest.
  • For example, when the current application information is animal-related information, which is beneficial to children's physical and mental development, the calculated first matching degree is greater than the preset matching degree, and the first matching degree is combined with the degree of interest to calculate the interest matching degree.
  • When the current application information is online game information, which is not beneficial to the child's physical and mental development, the calculated matching degree is less than the preset matching degree, and the interest matching degree is directly assigned a value of 0.5 to reduce pushes to the user related to the application information.
  • The first matching degree and the degree of interest are multiplied to obtain the interest matching degree.
  • When the interest matching degree is greater than the preset interest matching degree, content associated with the application information will be pushed to the user.
  • When the interest matching degree is not greater than the preset interest matching degree, the pushing of content associated with the application information is reduced, or acquisition of content associated with the application information is restricted.
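The tail of this flow — compute the interest matching degree, then push or reduce — can be condensed into a sketch; the 0.6 thresholds are illustrative placeholders rather than values from the embodiment:

```python
def push_decision(interest, first_match, preset_match=0.6,
                  preset_interest_match=0.6, third_interest_match=0.5):
    # Below the preset matching degree, fall back to the intermediate third
    # interest matching degree; otherwise multiply interest by matching.
    if first_match < preset_match:
        interest_match = third_interest_match
    else:
        interest_match = interest * first_match
    return "push" if interest_match > preset_interest_match else "reduce"
```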
  • FIG. 6 is a schematic structural diagram of an information processing apparatus provided by an embodiment of this application.
  • the information processing apparatus 500 includes: a first acquisition module 501, a second acquisition module 502, a calculation module 503, a push module 504, and a restriction module 505.
  • the first obtaining module 501 is configured to obtain the application information currently viewed by the user, and obtain the matching degree between the user and the application information according to the application information.
  • The application information currently viewed by the user can be text information, image information, audio information, video information, and so on; for example, the user opens the gallery application and browses pictures, opens the music player and plays music, or opens the video player and plays a video.
  • The matching degree is obtained according to the application information currently viewed by the user and is used to determine whether the currently viewed content matches the user.
  • Application content suitable for children to browse should be content beneficial to children's development.
  • Application content unsuitable for children should be content that is not beneficial, or is harmful, to children's physical and mental development.
  • The matching degree is obtained from the browsed content and can be numerical value information or level information.
  • the second obtaining module 502 is configured to obtain emotional information of the user, and obtain the user's interest in the application information according to the emotional information.
  • the user emotion information may include facial image information, which may be obtained through a camera module.
  • The facial image information is the facial image of the user while using the terminal application and is used to determine the user's interest in the content of the terminal application. By classifying the emotion of the face image, the facial feature information is obtained.
  • The facial feature information can be a numerical value or level information; the value or level represents the user's degree of interest in the currently browsed content. For example, if the user's expression is a smile when browsing application content 1 and a big laugh when browsing application content 2, it can be judged that the user is more interested in content 2 than in content 1.
  • the calculation module 503 is configured to calculate the interest matching degree according to the interest degree and the matching degree.
  • The degree of interest can be multiplied by the matching degree to obtain the interest matching degree.
  • The higher the interest matching degree, the more the application content currently browsed is content the user is interested in and that is suitable for the user.
  • The lower the interest matching degree, the more the content is application content that the user is not interested in or that is inappropriate for the user. The interest matching degree can be a numerical value or level information.
  • the pushing module 504 is configured to push the content associated with the application information when the interest matching degree is greater than the preset interest matching degree.
  • The preset interest matching degree can be a numerical value. When the calculated interest matching degree is greater than the preset interest matching degree, the interest matching degree is high.
  • The application content the user browses is then content the user is interested in and that is suitable for the user, and during subsequent use of the terminal, content associated with that application content will be pushed to the user. For example, when a child is browsing gallery pictures and the collected user emotion shows that the child is interested in pictures of animals, and animal pictures are beneficial to children's physical and mental development, more animal-related information is pushed to the user during subsequent terminal use.
  • the restriction module 505 is configured to reduce the pushing of content associated with the application information, or restrict the acquisition of content associated with the application information when the interest matching degree is not greater than the preset interest matching degree.
  • The preset interest matching degree can be a numerical value. When the calculated interest matching degree is not greater than the preset interest matching degree, the interest matching degree is low. When the interest matching degree is 0, the content is completely unsuitable for the user. For example, when a child is browsing content harmful to children's physical and mental development, the interest matching degree can be directly assigned a value of 0, which restricts the child from browsing content unsuitable for children; when a child is browsing content that is merely not beneficial, the pushing of content associated with that content is reduced.
  • the embodiment of the application also provides an electronic device.
  • The electronic device may be a smartphone, a tablet computer, a gaming device, an augmented reality (AR) device, a car, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
  • an algorithm model is provided in the electronic device, the algorithm model includes a first algorithm module, and the first algorithm module is used to process a preset task.
  • FIG. 7 is a schematic diagram of the first structure of an electronic device 600 according to an embodiment of the application.
  • the electronic device 600 includes a processor 601 and a memory 602.
  • The processor 601 and the memory 602 are electrically connected.
  • The processor 601 is the control center of the electronic device 600. It connects the various parts of the entire electronic device through various interfaces and lines, and executes the various functions of the electronic device and processes data by running or calling the computer program stored in the memory 602 and calling the data stored in the memory 602, so as to monitor the electronic device as a whole.
  • The processor 601 in the electronic device 600 loads the instructions corresponding to the processes of one or more computer programs into the memory 602, and the processor 601 runs the instructions stored in the memory 602 to realize the various functions described above.
  • the memory 602 can be used to store computer programs and data.
  • the computer program stored in the memory 602 contains instructions that can be executed in the processor.
  • Computer programs can be composed of various functional modules.
  • the processor 601 executes various functional applications and data processing by calling a computer program stored in the memory 602.
  • FIG. 8 is a schematic diagram of a second structure of an electronic device 600 provided in an embodiment of the present application.
  • the electronic device 600 further includes a display screen 603, a control circuit 604, an input unit 605, a sensor 606, and a power supply 607.
  • the processor 601 is electrically connected to the display screen 603, the control circuit 604, the input unit 605, the sensor 606, and the power supply 607, respectively.
  • the display screen 603 can be used to display information input by the user or information provided to the user and various graphical user interfaces of the electronic device. These graphical user interfaces can be composed of images, text, icons, videos, and any combination thereof.
  • the control circuit 604 is electrically connected to the display screen 603 for controlling the display screen 603 to display information.
  • the input unit 605 may be used to receive inputted numbers, character information or user characteristic information (such as fingerprints), and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the input unit 605 may include a fingerprint recognition module.
  • the sensor 606 is used to collect information of the electronic device itself or information of the user or external environment information.
  • the sensor 606 may include multiple sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, a posture sensor, a barometer, and a heart rate sensor.
  • the power supply 607 is used to supply power to various components of the electronic device 600.
  • the power supply 607 may be logically connected to the processor 601 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
  • the electronic device 600 may also include a camera, a Bluetooth module, etc., which will not be repeated here.
  • an embodiment of the present application provides an electronic device, and the electronic device performs the following steps:
  • An embodiment of the present application also provides a storage medium in which a computer program for processing information is stored.
  • When the computer program runs on a computer, the computer executes the information processing method described in any of the above embodiments.
  • the storage medium may include, but is not limited to: Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk or optical disk, etc.


Abstract

An information processing method, a storage medium, and an electronic device (600). The method comprises: obtaining application information currently viewed by a user, and obtaining a matching degree between the user and the application information according to the application information (101); obtaining the user's degree of interest in the application information according to the user's emotion information (102); calculating a first interest matching degree according to the degree of interest and the matching degree (103); and pushing content associated with the application information, or reducing the pushing of content associated with the application information, according to the first interest matching degree.

Description

Information processing method, storage medium, and electronic device
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on April 9, 2019, with application number 201910282187.1 and the title "Information processing method, apparatus, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic technology, and in particular to an information processing method, a storage medium, and an electronic device.
Background
With the development of electronic technology, electronic devices such as smartphones have become increasingly intelligent, and smart mobile terminals accompany their users for more and more of the day. For children in a family, a mobile terminal can assist with learning, entertainment, and many other activities, but a smart mobile terminal cannot recommend applications or content according to the user's degree of interest, nor control the terminal's access permissions accordingly.
Summary
Embodiments of this application provide an information processing method, a storage medium, and an electronic device, which can recommend to the user content that is both of interest to and suitable for the user, according to the user's interests.
In a first aspect, an embodiment of this application provides an information processing method, the information processing method including:
obtaining application information currently viewed by a user, and obtaining a matching degree between the user and the application information according to the application information;
obtaining emotion information of the user, and obtaining the user's degree of interest in the application information according to the emotion information; calculating a first interest matching degree according to the degree of interest and the matching degree;
when the first interest matching degree is greater than a preset interest matching degree, pushing content associated with the application information;
when the first interest matching degree is not greater than the preset interest matching degree, reducing the pushing of content associated with the application information, or restricting acquisition of content associated with the application information.
In a second aspect, an embodiment of this application further provides a storage medium on which a computer program is stored; when the computer program runs on a computer, the computer is caused to execute the steps of the above information processing method.
In a third aspect, an embodiment of this application further provides an electronic device, the electronic device including a processor and a memory; a computer program for processing information is stored in the memory, and the processor executes the steps of the above information processing method by calling the computer program stored in the memory.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario of the information processing method provided by an embodiment of this application.
FIG. 2 is a first schematic flowchart of the information processing method provided by an embodiment of this application.
FIG. 3 is a second schematic flowchart of the information processing method provided by an embodiment of this application.
FIG. 4 is a third schematic flowchart of the information processing method provided by an embodiment of this application.
FIG. 5 is a diagram of another application scenario of the information processing method provided by an embodiment of this application.
FIG. 6 is a schematic structural diagram of the information processing apparatus provided by an embodiment of this application.
FIG. 7 is a first schematic structural diagram of the electronic device provided by an embodiment of this application.
FIG. 8 is a second schematic structural diagram of the electronic device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of this application.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an application scenario of the information processing method provided by an embodiment of this application. The information processing method is applied to an electronic device. The electronic device may be a smartphone, a tablet computer, a gaming device, an augmented reality (AR) device, a car, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, or electronic clothing. A panoramic sensing architecture is provided in the electronic device. The panoramic sensing architecture is the integration of the hardware and software used in the electronic device to implement the information processing method.
The panoramic sensing architecture includes an information sensing layer, a data processing layer, a feature extraction layer, a scenario modeling layer, and an intelligent service layer.
The information sensing layer is used to obtain information about the electronic device itself or information from the external environment. The information sensing layer may include multiple sensors. For example, the information sensing layer includes multiple sensors such as a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, a posture sensor, a barometer, and a heart rate sensor.
The distance sensor can be used to detect the distance between the electronic device and an external object. The magnetic field sensor can be used to detect the magnetic field information of the environment in which the electronic device is located. The light sensor can be used to detect the light information of that environment. The acceleration sensor can be used to detect the acceleration data of the electronic device. The fingerprint sensor can be used to collect the user's fingerprint information. The Hall sensor is a magnetic field sensor based on the Hall effect and can be used for automatic control of the electronic device. The position sensor can be used to detect the current geographic location of the electronic device. The gyroscope can be used to detect the angular velocity of the electronic device in various directions. The inertial sensor can be used to detect the motion data of the electronic device. The posture sensor can be used to sense the posture information of the electronic device. The barometer can be used to detect the air pressure of the environment in which the electronic device is located. The heart rate sensor can be used to detect the user's heart rate information.
The data processing layer is used to process the data obtained by the information sensing layer. For example, the data processing layer can perform data cleaning, data integration, data transformation, and data reduction on the data obtained by the information sensing layer.
Data cleaning refers to cleaning the large amount of data obtained by the information sensing layer to remove invalid and duplicate data. Data integration refers to integrating multiple single-dimensional data obtained by the information sensing layer into a higher or more abstract dimension, so as to process the multiple single-dimensional data comprehensively. Data transformation refers to converting the data type or format of the data obtained by the information sensing layer so that the transformed data meets the processing requirements. Data reduction refers to minimizing the amount of data while preserving the original form of the data as much as possible.
The feature extraction layer is used to perform feature extraction on the data processed by the data processing layer, so as to extract the features contained in the data. The extracted features can reflect the state of the electronic device itself, the state of the user, or the environmental state of the environment in which the electronic device is located.
The feature extraction layer can extract features, or process the extracted features, through methods such as filtering, wrapping, and integration.
The filtering method refers to filtering the extracted features to remove redundant feature data. The wrapping method is used to screen the extracted features. The integration method refers to combining multiple feature extraction methods to construct a more efficient and accurate feature extraction method for extracting features.
The scenario modeling layer is used to build models based on the features extracted by the feature extraction layer; the resulting models can be used to represent the state of the electronic device, the state of the user, or the environmental state. For example, the scenario modeling layer can build key-value models, pattern identification models, graph models, entity-relationship models, object-oriented models, and so on, based on the features extracted by the feature extraction layer.
The intelligent service layer is used to provide intelligent services to the user based on the models built by the scenario modeling layer. For example, the intelligent service layer can provide basic application services to the user, perform intelligent system optimization for the electronic device, and provide personalized intelligent services to the user.
In addition, the panoramic sensing architecture may also include multiple algorithms, each of which can be used to analyze and process data; the multiple algorithms can form an algorithm library. For example, the algorithm library may include algorithms such as the Markov algorithm, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
An embodiment of this application provides an information processing method, including:
obtaining application information currently viewed by a user, and obtaining a matching degree between the user and the application information according to the application information;
obtaining emotion information of the user, and obtaining the user's degree of interest in the application information according to the emotion information;
calculating a first interest matching degree according to the degree of interest and the matching degree;
when the first interest matching degree is greater than a preset interest matching degree, pushing content associated with the application information; and
when the first interest matching degree is not greater than the preset interest matching degree, reducing the pushing of content associated with the application information, or restricting acquisition of content associated with the application information.
Obtaining the emotion information of the user and obtaining the user's degree of interest in the application information according to the emotion information includes:
obtaining emotion information of the user when using a terminal application, the emotion information including facial image information and/or audio information of the user when using the terminal application;
performing feature extraction on the facial image information and/or the audio information, respectively, to obtain facial feature information and/or audio feature information;
obtaining the degree of interest according to the facial feature information and/or the audio feature information.
Obtaining the emotion information of the user and obtaining the user's degree of interest in the application information according to the emotion information includes:
obtaining emotion information of the user when using a terminal application, the emotion information including facial image information, audio information, screen touch information, and edge pressure information;
performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information, respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information;
obtaining the degree of interest according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
Performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information, respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information includes:
obtaining face position information from the facial image information through a preset image algorithm, and performing emotion classification on the face position information through a hard clustering algorithm to obtain the facial feature information;
obtaining a spectrogram from the audio information using a first audio algorithm, and analyzing the spectrogram through a second audio algorithm to obtain the audio feature information;
modeling the screen touch information and the edge pressure information using a random forest classifier or a Bayesian classifier to obtain the screen touch feature information and the edge pressure feature information.
Obtaining the face position information from the facial image information through the preset image algorithm includes:
obtaining the face position information from the facial image information through a support vector machine algorithm and a bounding-box regression algorithm.
Obtaining the face position information from the facial image information through the preset image algorithm includes:
obtaining the face position information from the facial image information through a convolutional neural network algorithm model.
The first audio algorithm is one of the Fast Fourier Transform algorithm and the Mel Frequency Cepstrum Coefficient algorithm, and the second audio algorithm is one of a recurrent neural network model algorithm and a time-series analysis algorithm.
Obtaining the degree of interest according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information includes:
using the facial feature information as a primary degree of interest;
using at least one of the audio feature information, the screen touch feature information, and the edge pressure feature information as a secondary degree of interest;
adjusting the primary degree of interest with the secondary degree of interest to obtain an adjusted degree of interest.
Obtaining the application information currently viewed by the user and obtaining the matching degree between the user and the application information according to the application information includes:
obtaining application information currently viewed by the user, the application information including application type label information;
when the application type label is within a preset application type label range, analyzing screen image information to obtain a first matching degree;
when the application type label is not within the preset application type label range, setting the interest matching degree to a second interest matching degree.
When the application type label is within the preset application type label range, analyzing the screen image information to obtain the first matching degree includes:
when the application type label is within the preset application type label range, analyzing the screen image using a convolutional neural network to obtain the first matching degree;
when the first matching degree is less than the preset matching degree, setting the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
when the first matching degree is not less than the preset matching degree, using the first matching degree in the calculation of the interest matching degree.
Calculating the interest matching degree according to the degree of interest and the matching degree includes:
multiplying the degree of interest by the first matching degree to obtain the first interest matching degree.
Obtaining the application information currently viewed by the user and obtaining the matching degree between the user and the application information according to the application information includes:
obtaining application information currently viewed by the user, the application information including application type label information;
when the application type label is within the preset application type label range, analyzing screen image information and application audio information to obtain a second matching degree and a third matching degree;
when the application type label is not within the preset application type label range, setting the interest matching degree to the second interest matching degree.
Analyzing the screen image information and the application audio information to obtain the second matching degree and the third matching degree includes:
analyzing the screen image information and the application audio information, respectively, through a convolutional neural network to obtain the corresponding second matching degree and third matching degree;
when one of the second matching degree and the third matching degree is less than the preset matching degree, setting the interest matching degree to the third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
when neither the second matching degree nor the third matching degree is less than the preset matching degree, using the second matching degree and the third matching degree in the calculation of the interest matching degree.
Calculating the interest matching degree according to the degree of interest and the matching degree includes:
multiplying the degree of interest by the second matching degree and the third matching degree to obtain the first interest matching degree.
An embodiment of this application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the information processing method described in any of the above embodiments is implemented.
An embodiment of this application further provides an electronic device, including a processor and a memory; a computer program for processing information is stored in the memory, the processor is connected to the memory, and by calling the computer program stored in the memory, the processor executes:
obtaining application information currently viewed by a user, and obtaining a matching degree between the user and the application information according to the application information;
obtaining emotion information of the user, and obtaining the user's degree of interest in the application information according to the emotion information;
calculating a first interest matching degree according to the degree of interest and the matching degree;
when the first interest matching degree is greater than a preset interest matching degree, pushing content associated with the application information; and
when the first interest matching degree is not greater than the preset interest matching degree, reducing the pushing of content associated with the application information, or restricting acquisition of content associated with the application information.
In obtaining the emotion information of the user and obtaining the user's degree of interest in the application information according to the emotion information, the processor further executes:
obtaining emotion information of the user when using a terminal application, the emotion information including facial image information and/or audio information of the user when using the terminal application;
performing feature extraction on the facial image information and/or the audio information, respectively, to obtain facial feature information and/or audio feature information;
obtaining the degree of interest according to the facial feature information and/or the audio feature information.
In obtaining the emotion information of the user and obtaining the user's degree of interest in the application information according to the emotion information, the processor further executes:
obtaining emotion information of the user when using a terminal application, the emotion information including facial image information, audio information, screen touch information, and edge pressure information;
performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information, respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information;
obtaining the degree of interest according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
In performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information, respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information, the processor further executes:
obtaining face position information from the facial image information through a preset image algorithm, and performing emotion classification on the face position information through a hard clustering algorithm to obtain the facial feature information;
obtaining a spectrogram from the audio information using a first audio algorithm, and analyzing the spectrogram through a second audio algorithm to obtain the audio feature information;
modeling the screen touch information and the edge pressure information using a random forest classifier or a Bayesian classifier to obtain the screen touch feature information and the edge pressure feature information.
In obtaining the degree of interest according to the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information, the processor further executes:
using the facial feature information as a primary degree of interest;
using at least one of the audio feature information, the screen touch feature information, and the edge pressure feature information as a secondary degree of interest;
adjusting the primary degree of interest with the secondary degree of interest to obtain an adjusted degree of interest.
In obtaining the application information currently viewed by the user and obtaining the matching degree between the user and the application information according to the application information, the processor further executes:
obtaining application information currently viewed by the user, the application information including application type label information;
when the application type label is within a preset application type label range, analyzing screen image information to obtain a first matching degree;
when the application type label is not within the preset application type label range, setting the interest matching degree to a second interest matching degree.
In analyzing the screen image information to obtain the first matching degree when the application type label is within the preset application type label range, the processor further executes:
when the application type label is within the preset application type label range, analyzing the screen image using a convolutional neural network to obtain the first matching degree;
when the first matching degree is less than the preset matching degree, setting the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
when the first matching degree is not less than the preset matching degree, using the first matching degree in the calculation of the interest matching degree.
Calculating the interest matching degree according to the degree of interest and the matching degree includes:
multiplying the degree of interest by the first matching degree to obtain the first interest matching degree.
Referring to FIG. 2, FIG. 2 is a first schematic flowchart of the information processing method provided by an embodiment of this application. The information processing method includes the following steps:
101: Obtain the application information currently viewed by the user, and obtain the matching degree between the user and the application information according to the application information.
The application information currently viewed by the user can be text information, image information, audio information, video information, and so on. For example, the user opens the gallery application and browses the pictures in the gallery, opens the music player and plays music, or opens the video player and plays a video. A matching degree is obtained according to the application information currently viewed by the user; the matching degree is used to determine whether the content currently browsed by the user is suitable for that user. For example, when the user is a child, application content suitable for the child to browse should be content beneficial to the child's development, and application content unsuitable for the child should be content that is not beneficial, or is harmful, to the child's physical and mental development. The matching degree obtained from the currently browsed content can be numerical value information or level information, and is used to determine whether the application information currently browsed by the user matches the user.
102: Obtain the emotion information of the user, and obtain the user's degree of interest in the application information according to the emotion information. The user emotion information may include facial image information, which can be obtained through a camera module. The facial image information is the facial image of the user while using the terminal application and is used to determine the user's degree of interest in the content of the terminal application. By classifying the emotion of the face image, facial feature information is obtained. The facial feature information can be numerical value information or level information; the value or level represents the user's degree of interest in the currently browsed content. For example, if the user's expression is a smile when browsing the first kind of content and a big laugh when browsing the second kind of content, it can be judged that the user is more interested in the second kind of content than in the first.
103: Calculate the first interest matching degree according to the degree of interest and the matching degree.
The degree of interest can be multiplied by the matching degree to obtain the interest matching degree. The higher the interest matching degree, the more the application content currently browsed by the user is content the user is interested in and that is suitable for the user; the lower the interest matching degree, the more the content is application content that the user is not interested in or that is unsuitable for the user. The first interest matching degree can be numerical value information or level information.
104: When the first interest matching degree is greater than the preset interest matching degree, push content associated with the application information.
The preset interest matching degree can be a numerical value. When the calculated interest matching degree is greater than the preset interest matching degree, the interest matching degree is high, and the application content the user browses is content the user is interested in and that matches the user. During subsequent use of the terminal, content associated with that application content is pushed to the user. For example, when a child is browsing gallery pictures and the collected user emotion shows that the child is interested in pictures of animals, and it is determined that animal pictures are beneficial to the child's physical and mental development, more animal-related information is pushed to the user during subsequent terminal use.
105: When the first interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
The preset interest matching degree can be a numerical value. When the calculated interest matching degree is not greater than the preset interest matching degree, the interest matching degree is low. When the interest matching degree is 0, the content is completely unsuitable for the user. For example, when a child is browsing content harmful to children's physical and mental development, the interest matching degree is directly assigned a value of 0, which restricts the child from browsing content unsuitable for children; when a child is browsing content that is merely not beneficial to children's physical and mental development, the pushing of content associated with that content is reduced.
In some embodiments, the application information currently viewed by the user and the user's emotion information can be obtained through the information sensing layer described in the above embodiment; the information sensing layer can obtain information about the electronic device itself or information from the external environment. The application information currently viewed can be text information, image information, audio information, video information, and so on, and the emotion information can include facial image information. The data processing layer can process the data obtained by the information sensing layer, performing data cleaning, data integration, data transformation, data reduction, and so on. The feature extraction layer extracts features from the data processed by the data processing layer, so as to extract the features of the application information currently viewed and of the user's emotion information: from the application information being viewed, restriction-grade features or type features of that application information can be extracted, and from the user's emotion information, emotion-level features can be extracted. The features extracted by the feature extraction layer are computed with various algorithms, and models can also be built through the scenario modeling layer from the various algorithms and the extracted features, to obtain the matching degree between the user and the application information and the user's degree of interest in the application information. From these, the user's interest matching degree for the application is calculated; this interest matching degree guides whether, during subsequent use of the electronic device, information related to the application is pushed to the user. Through the intelligent service layer, the results obtained by the scenario modeling layer can provide intelligent services to the user. For example, when the interest matching degree is greater than the preset interest matching degree, content associated with the application information is pushed; when the interest matching degree is not greater than the preset interest matching degree, the pushing of content associated with the application information is reduced, or acquisition of content associated with the application information is restricted.
It should be noted that this embodiment does not limit the execution order of the steps corresponding to 101 and 102.
Referring to FIG. 3, FIG. 3 is a second schematic flowchart of the information processing method provided by an embodiment of this application.
In some embodiments, obtaining the user's emotion information and obtaining the degree of interest according to the user's emotion information may specifically include:
201: Obtain the emotion information of the user when using a terminal application, the emotion information including facial image information, audio information, screen touch information, and edge pressure information.
The emotion information includes facial image information, audio information, screen touch information, and edge pressure information.
The user's emotion information can be obtained through various sensors: the facial image information, audio information, screen touch information, and edge pressure information are obtained through the camera module, the microphone, the screen touch sensor, and the edge pressure sensor, respectively.
In some embodiments, the facial image information may be absent, in which case the audio information, screen touch information, and edge pressure information are obtained. In some embodiments, the audio information may be absent, in which case the facial image information, screen touch information, and edge pressure information are obtained. In some embodiments, only the facial image information and the audio information may be obtained. The kinds of emotion information shall not be understood as limiting this application.
202,分别对人脸图像信息、音频信息、屏幕触摸信息和边沿压力信息进行特征抽取,得到人脸特征信息、音频特征信息、屏幕触摸特征信息和边沿压力特征信息。
可以通过预设图像算法对人脸图像信息进行处理,预设图像算法可以是支持向量机算法(Support Vector Machine,SVM)、边框回归bounding box算法和卷积神经网络算法模型,例如,使用支持向量机算法(Support Vector Machine,SVM)加上边框回归bounding box算法可以获取人脸位置信息,还可以通过卷积神经网络模型(Convolutional Neural Networks,CNN)获取人脸位置信息,人脸位置信息包括人脸特征点信息,用硬聚类算法K-means对人脸位置的像素特征点进行情绪分类,得到人脸特征信息,人脸特征信息可以是一个数值,该数值对应于用户的心情指数,人脸特征信息还可以是一个等级信息, 该等级信息对应于用户的心情等级。
可以通过第一音频算法和第二音频算法对音频信息进行处理,第一音频算法可以是快速傅式变换(Fast Fourier Transformation,FFT)或者梅尔频率倒谱系数(Mel Frequency Cepstrum Coefficient,MFCC)算法,第二音频算法可以是循环神经网络模型算法或时序分析算法,例如,对音频信息使用快速傅式变换(Fast Fourier Transformation,FFT)或者梅尔频率倒谱系数(Mel Frequency Cepstrum Coefficient,MFCC)算法获得频谱图,将频谱图使用循环神经网络模型进行分析,得到音频特征信息,还可以通过时序分析算法得到音频特征信息,该音频特征信息可以是一个数值,该数值对应于用户的心情指数,该音频特征信息还可以是一个等级信息,该等级信息对应于用户的心情等级。
For the screen touch information and edge pressure information, a random forest classifier or a Bayesian classifier is used to model the data collected by the screen touch sensor and the edge pressure sensor, yielding the screen touch feature information and the edge pressure feature information, respectively.
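The touch/pressure modeling step can be pictured with a toy ensemble of one-feature decision stumps standing in for the random forest classifier; the features (tap frequency, grip pressure), labels, and thresholds below are hypothetical, and a real implementation would use a library classifier:

```python
import random

def train_stumps(samples, labels, n_stumps=15, seed=0):
    """Toy stand-in for the random-forest step: each 'tree' is a
    decision stump on one randomly chosen feature, thresholded at
    that feature's mean over the training data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_stumps):
        f = rng.randrange(len(samples[0]))               # random feature
        thr = sum(s[f] for s in samples) / len(samples)  # mean threshold
        above = [l for s, l in zip(samples, labels) if s[f] > thr] or labels
        below = [l for s, l in zip(samples, labels) if s[f] <= thr] or labels
        stumps.append((f, thr,
                       max(set(above), key=above.count),
                       max(set(below), key=below.count)))
    return stumps

def predict(stumps, x):
    """Majority vote over all stumps."""
    votes = [(lab_hi if x[f] > thr else lab_lo)
             for f, thr, lab_hi, lab_lo in stumps]
    return max(set(votes), key=votes.count)

# Hypothetical per-session features: (tap frequency, grip pressure),
# label 1 = engaged user, 0 = not engaged.
sessions = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.2)]
engaged = [1, 1, 0, 0]
model = train_stumps(sessions, engaged)
```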
When one or more of the above four kinds of emotion information are absent, the corresponding processing is applied to the emotion information that is present, to obtain the corresponding feature information.
203: Use the facial feature information as the primary interest degree.
The above four kinds of feature information are fused: the facial feature information serves as the interest degree indicator, and this interest degree indicator is the primary interest degree.
204: Use at least one of the audio feature information, screen touch feature information, and edge pressure feature information as a secondary interest degree.
The audio feature information, screen touch feature information, and edge pressure feature information serve as secondary emotion feature information, used to adjust the interest degree indicator.
In some embodiments, when facial feature information is absent, the audio feature information serves as the interest degree indicator, which is then the primary interest degree, while the screen touch feature information and edge pressure feature information serve as secondary emotion feature information.
In some embodiments, when only facial feature information and audio feature information are present, the facial feature information serves as the primary interest degree and the audio feature information as secondary emotion feature information.
In some embodiments, when only one of the screen touch feature information and edge pressure feature information is present, whichever is present serves as secondary emotion feature information.
205: Adjust the primary interest degree using the secondary interest degree to obtain the adjusted interest degree.
The audio feature information, screen touch feature information, and edge pressure feature information may each be multiplied by an interest coefficient; the coefficients can be user-defined or set by human experts. The resulting values are added to the interest degree indicator to adjust it, and the adjusted primary interest degree is taken as the interest degree. Using facial feature information alone as the interest degree may yield an inaccurate result; adjusting the interest degree indicator with the audio feature information, screen feature information, and edge pressure feature information improves the accuracy of the interest degree.
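The adjustment in step 205 can be sketched as a coefficient-weighted sum added to the primary indicator. The coefficient values here are illustrative, not mandated by the specification:

```python
def adjusted_interest(face, secondary, coeffs):
    """Combine the primary (face) interest indicator with the secondary
    signals, each scaled by a hand-set interest coefficient."""
    return face + sum(c * s for c, s in zip(coeffs, secondary))

# face indicator 0.6; audio, screen-touch, and edge-pressure signals
# with small illustrative coefficients.
interest = adjusted_interest(0.6, [0.8, 0.5, 0.4], [0.2, 0.1, 0.1])
# 0.6 + 0.16 + 0.05 + 0.04 = 0.85
```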
In some embodiments, when facial feature information is absent, the interest degree indicator can still be adjusted by the above method: the screen feature information and edge pressure feature information are each multiplied by an interest coefficient (user-defined or expert-set), and the computed values are added to the interest degree indicator to adjust the interest degree, the interest degree indicator in this case being derived from the audio feature information.
Referring to FIG. 4, FIG. 4 is a third schematic flowchart of the information processing method provided by an embodiment of this application.
301: Acquire the application information currently viewed by the user, the application information including application type tag information.
The application type tag information may be an age-restriction rating tag, a content-restriction tag, and the like.
302: When the application type tag is within the preset application type tag range, analyze the screen image with a convolutional neural network to obtain a first matching degree.
It is determined whether the type tag information falls within the user's viewing range, which may be expert-defined or user-defined. For example, when the user is a six-year-old child, the preset application type tag range is ages 0 to 6; if the age-restriction tag of the currently viewed application is 4, the type tag is within the preset application type tag range, and the screen image is analyzed. The screen image may be image information or text information, which is analyzed with a convolutional neural network to output the first matching degree. The matching degree may be a numerical value or a level: the higher the level or value, the higher the matching degree, and the lower the level or value, the lower the matching degree. The matching degree is used to determine whether the content is beneficial to the user. Once the first matching degree is obtained, it is compared with the preset matching degree (steps 304 and 305).
303: When the application type tag is not within the preset application type tag range, set the interest matching degree to a second interest matching degree.
It is determined whether the type tag information falls within the user's viewing range, which may be expert-defined or user-defined. For example, when the user is a six-year-old child and the application type tag range is ages 0 to 6, an application whose age-restriction tag is 18 is outside the preset application type tag range and does not satisfy the viewing range; a content-restriction tag indicating blood and violence likewise does not satisfy the viewing range. When the viewing range is not satisfied, the interest matching degree is directly set to the second interest matching degree, which may be a small value such as 0, 0.1, or 0.2. This value is used to restrict the user from acquiring content associated with the application information; for example, when the user tries to open the application again, it is automatically closed so that the user cannot open the application or obtain information related to it.
304: When the first matching degree is not less than the preset matching degree, use the first matching degree in the computation of the interest matching degree.
The preset matching degree may be defined by experts or learned by the system: by learning from historical matching degrees, the preset matching degree is obtained, and its value or level is continually adjusted so that the matching-degree judgment becomes more accurate. When the first matching degree is not less than the preset matching degree, the first matching degree is used in the computation of the interest matching degree, and step 307 is executed.
305: When the first matching degree is less than the preset matching degree, set the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree.
When the first matching degree is less than the preset matching degree, the content is unsuitable for the user, but the application type tag is within the preset type tag range, so the interest matching degree is set to the third interest matching degree, an intermediate value. When the interest matching degree ranges over 0 to 1, the third interest matching degree may be 0.5; when it ranges over 0 to 100, the third interest matching degree may be 50. When the range of the interest matching degree differs, the value of the third interest matching degree differs accordingly.
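Steps 302 to 307 form a small decision tree: tag outside the range yields the second (small) value, tag in range but content match too low yields the third (intermediate) value, and otherwise the interest degree is multiplied by the match. A sketch, with the second and third values as illustrative defaults rather than specified constants:

```python
def interest_matching(tag_ok, first_match, preset_match, interest,
                      second_val=0.0, third_val=0.5):
    """Sketch of steps 302-307 of the embodiment."""
    if not tag_ok:
        return second_val          # step 303: tag outside preset range
    if first_match < preset_match:
        return third_val           # step 305: content match too low
    return interest * first_match  # steps 304 and 307
```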
306: Acquire the user's emotion information, and derive the user's interest degree in the application information from the emotion information.
For details, refer to steps 201 to 205 in the foregoing embodiment, which are not repeated here.
307: Multiply the interest degree by the first matching degree to obtain the first interest matching degree.
308: When the first interest matching degree is greater than the preset interest matching degree, push content associated with the application information.
The preset interest matching degree may be defined by experts or learned by the system: by learning from historical interest matching degrees, the preset interest matching degree is obtained, and its value is continually adjusted so that the interest-matching judgment becomes more accurate. The third interest matching degree may also be set as the preset interest matching degree. When the interest matching degree is greater than the preset interest matching degree, content associated with the information of the currently browsed application is pushed to the user during subsequent use of the terminal.
309: When the first interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
When the first interest matching degree is not greater than the preset interest matching degree, during subsequent use of the terminal, pushing of content associated with the application information is reduced, or the user is restricted from acquiring content associated with the application information.
In some embodiments: acquire the application information currently viewed by the user, the application information including application type tag information; when the application type tag is within the preset application type tag range, analyze the screen image information and the application audio information with a convolutional neural network, obtaining a corresponding second matching degree and third matching degree; when one of the second matching degree and the third matching degree is less than the preset matching degree, set the interest matching degree to the third interest matching degree, the third interest matching degree being greater than the second interest matching degree; when neither the second matching degree nor the third matching degree is less than the preset matching degree, use the second matching degree and the third matching degree in the computation of the interest matching degree; multiply the interest degree by the second matching degree and the third matching degree to obtain the first interest matching degree. When the application type tag is not within the preset application type tag range, set the interest matching degree to the second interest matching degree.
When the application type tag is within the preset application type tag range and the current application includes not only screen image information but also application audio information, or the current application is video information, which combines screen image information with application audio information, the screen image information and the application audio information are analyzed with a convolutional neural network, yielding the corresponding second matching degree and third matching degree. When one of the second matching degree and the third matching degree is less than the preset matching degree, the interest matching degree is assigned the value 0.5; when neither the second matching degree nor the third matching degree is less than the preset matching degree, both are used, and the interest degree is multiplied by the second matching degree and the third matching degree to obtain the interest matching degree. When the application type tag is not within the preset application type tag range, the interest matching degree is assigned the value 0.
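For video content, the embodiment above gates on both modalities before combining. A sketch of the two-modality variant, mirroring the single-modality decision tree; the fallback values 0 and 0.5 are the illustrative assignments given in the text:

```python
def interest_matching_av(tag_ok, match2, match3, preset_match, interest,
                         second_val=0.0, third_val=0.5):
    """Two-modality variant: both the screen-image match (match2) and
    the application-audio match (match3) must clear the preset threshold
    before the interest degree is combined with them."""
    if not tag_ok:
        return second_val   # tag outside the preset range
    if match2 < preset_match or match3 < preset_match:
        return third_val    # one modality matches too poorly
    return interest * match2 * match3
```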
Referring to FIG. 5, FIG. 5 is a diagram of another application scenario of the information processing method provided by an embodiment of this application.
The user's emotion information can be acquired through various sensors, which may be a camera module, a microphone, a screen touch sensor, and an edge pressure sensor. The camera module collects facial image information. Face position information can be obtained with a support vector machine (SVM) combined with a bounding-box regression algorithm, or with a convolutional neural network (CNN) model; the face position information includes facial feature points. The K-means hard clustering algorithm performs emotion classification on the pixel feature points of the face region, yielding the facial feature information, which may be a numerical value corresponding to the user's mood index, or a level corresponding to the user's mood level. For example, a user browsing content of interest wears a happy expression, while a user browsing content of little interest is expressionless or unhappy; the facial feature information is therefore obtained from the facial image information captured while the user browses application content.
The microphone collects the sounds the user makes while browsing application content. The audio information acquired through the microphone is converted into a spectrogram using the fast Fourier transform (FFT) or the Mel-frequency cepstral coefficient (MFCC) algorithm, and the spectrogram is analyzed with a recurrent neural network model to obtain the audio feature information; the audio feature information may also be obtained with a time-series analysis algorithm. The audio feature information may be a level corresponding to the user's mood level. For example, a user browsing application content of interest may utter exclamations or laugh; the microphone picks up the corresponding audio signal, which after processing yields the corresponding audio feature information.
The screen touch sensor collects the frequency with which the user taps the screen while browsing application content, the intervals between taps, and the pressure of the taps. A random forest classifier or a Bayesian classifier models the screen touch information collected by the screen touch sensor, yielding the screen feature information, which is a numerical value. The edge pressure sensor collects the pressure with which the user grips the edge of the mobile terminal while browsing application content; a random forest classifier or a Bayesian classifier models the edge pressure information collected by the edge pressure sensor, yielding the edge pressure feature information.
The sensors above yield facial feature information, audio feature information, screen touch feature information, and edge pressure feature information, which are fused to obtain the interest degree. Specifically, the facial feature information derived from the facial image information collected by the camera module serves as the interest degree indicator, which is adjusted using the audio feature information, screen touch feature information, and edge pressure feature information: each is multiplied by an interest coefficient, which can be user-defined or set by human experts, and the computed values are added to the interest degree indicator to adjust the interest degree. Using facial feature information alone as the interest degree may yield an inaccurate result; adjusting the interest degree indicator with the audio information, screen feature information, and edge pressure feature information improves the accuracy of the interest degree.
Acquire the application information the user is currently viewing, which may be text information, image information, audio information, video information, and the like; for example, the user opens the gallery application and browses pictures in the gallery, opens a music player and plays music, or opens a video player and plays a video. The matching degree is derived from the currently viewed application information and is used to judge whether the content the user is currently browsing is suitable for that user.
First, the type tag information of the currently viewed application is acquired; it may be an age-restriction rating tag, a content-restriction tag, and the like. It is determined whether the type tag information falls within the user's viewing range, which may be expert-defined or user-defined. For example, when the user is a six-year-old child, an application with an age-restriction rating tag of 18 does not satisfy the viewing range, nor does one whose content-restriction tag indicates blood and violence. When the viewing range is not satisfied, the matching degree is directly set to 0 and the user is restricted from acquiring content associated with the application information, for example by preventing the child user from opening the application. When the application information satisfies the viewing range, the image information or application audio information of the currently viewed content is acquired and analyzed with a convolutional neural network to obtain the first matching degree, which may be a numerical value or a level: the higher the level or value, the higher the matching degree, and the lower the level or value, the lower the matching degree. The matching degree is used to determine whether the content is beneficial to the user. The first matching degree is compared with the preset matching degree: when the first matching degree is less than the preset matching degree, the content is unsuitable for the user, but since the application type tag is within the preset type tag range, the interest matching degree is assigned an intermediate value, for example 0.5 when the interest matching degree ranges over 0 to 1. When the first matching degree is greater than the preset matching degree, the first matching degree and the interest degree are used to compute the interest matching degree. For example, when convolutional-neural-network analysis determines that the current application's information is animal-related information that is beneficial to a child's physical and mental development, the computed first matching degree is greater than the preset matching degree, and the first matching degree is combined with the interest degree to compute the interest matching degree; when the analysis determines that the current application's information is online-game information that is unhelpful to a child's development, the computed matching degree is less than the preset matching degree, the interest matching degree is directly assigned the value 0.5, and pushing of content associated with that application information to the user is reduced.
Further, once the first matching degree is obtained, it is multiplied by the interest degree to obtain the interest matching degree. When the interest matching degree is greater than the preset interest matching degree, content associated with the application information is pushed to the user; when the interest matching degree is not greater than the preset interest matching degree, pushing of content associated with the application information is reduced, or acquisition of content associated with the application information is restricted.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of the information processing apparatus provided by an embodiment of this application.
The information processing apparatus 500 includes: a first acquisition module 501, a second acquisition module 502, a computation module 503, a push module 504, and a restriction module 505.
The first acquisition module 501 is configured to acquire the application information currently viewed by the user and derive, from the application information, the matching degree between the user and the application information.
The application information currently viewed by the user may be text information, image information, audio information, video information, and the like; for example, the user opens the gallery application and browses pictures in the gallery, opens a music player and plays music, or opens a video player and plays a video. The matching degree is derived from the currently viewed application information and is used to judge whether the content the user is currently browsing matches that user. For example, when the user is a child, application content suitable for children is content beneficial to a child's development, while unsuitable application content is content unhelpful or harmful to a child's physical and mental development. The matching degree is derived from the currently browsed content and may be a numerical value or a level.
The second acquisition module 502 is configured to acquire the user's emotion information and derive, from the emotion information, the user's interest degree in the application information.
The user's emotion information may include facial image information, which can be acquired through the camera module. The facial image information is the facial image of the user while using the terminal application and is used to judge the user's degree of interest in the terminal application content. Emotion classification of the facial image yields the facial feature information, which may be a numerical value or a level representing the user's degree of interest in the currently browsed content. For example, if the user's expression is a smile while browsing application content 1 and a laugh while browsing application content 2, it can be judged that the user is more interested in content 2 than in content 1.
The computation module 503 is configured to compute the interest matching degree from the interest degree and the matching degree.
The interest matching degree can be obtained by multiplying the interest degree by the matching degree. The higher the interest matching degree, the more the application content the user is currently browsing both interests and suits the user; the lower the interest matching degree, the less the content interests the user or suits the user. The interest matching degree may be a numerical value or a level.
The push module 504 is configured to push content associated with the application information when the interest matching degree is greater than the preset interest matching degree.
The preset interest matching degree may be a numerical value. When the computed interest matching degree is greater than the preset interest matching degree, the interest matching degree is high: the application content the user is browsing both interests and suits the user, and content associated with that application content is pushed to the user during subsequent use of the terminal. For example, when a child is browsing gallery pictures and the collected emotion information indicates that the child is interested in animal pictures, and animal pictures are judged beneficial to a child's physical and mental development, more animal-related information is pushed to the user during subsequent use of the terminal.
The restriction module 505 is configured to, when the interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
The preset interest matching degree may be a numerical value. When the computed interest matching degree is not greater than the preset interest matching degree, the interest matching degree is low; when the interest matching degree is 0, the content is entirely unsuitable for the user. For example, when a child is browsing content harmful to a minor's physical and mental development, the interest matching degree is directly set to 0, which restricts the child from browsing content unsuitable for minors; when a child is browsing content that is merely unhelpful to a child's development, pushing of content associated with that content is reduced.
An embodiment of this application further provides an electronic device. The electronic device may be a smartphone, tablet computer, gaming device, augmented reality (AR) device, vehicle, data storage apparatus, audio playback apparatus, video playback apparatus, notebook, desktop computing device, or wearable device such as a watch, glasses, helmet, electronic bracelet, electronic necklace, or electronic clothing. The electronic device is provided with an algorithm model, the algorithm model including a first algorithm module used to process a preset task.
Referring to FIG. 7, FIG. 7 is a first schematic structural diagram of the electronic device 600 provided by an embodiment of this application. The electronic device 600 includes a processor 601 and a memory 602; the processor 601 is electrically connected to the memory 602.
The processor 601 is the control center of the electronic device 600. It connects the various parts of the entire electronic device through various interfaces and lines, and executes the various functions of the electronic device and processes data by running or invoking computer programs stored in the memory 602 and invoking data stored in the memory 602, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 601 in the electronic device 600 loads instructions corresponding to the processes of one or more computer programs into the memory 602 according to the following steps, and runs the computer programs stored in the memory 602, thereby implementing various functions:
Acquire the application information currently viewed by the user, and derive, from the application information, the matching degree between the user and the application information;
Acquire the user's emotion information, and derive, from the emotion information, the user's interest degree in the application information;
Compute a first interest matching degree from the interest degree and the matching degree;
When the first interest matching degree is greater than a preset interest matching degree, push content associated with the application information;
When the first interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
The memory 602 may be used to store computer programs and data. The computer programs stored in the memory 602 contain instructions executable by the processor. The computer programs may form various functional modules. The processor 601 executes various functional applications and data processing by invoking the computer programs stored in the memory 602.
In some embodiments, referring to FIG. 8, FIG. 8 is a second schematic structural diagram of the electronic device 600 provided by an embodiment of this application.
The electronic device 600 further includes: a display screen 603, a control circuit 604, an input unit 605, a sensor 606, and a power supply 607. The processor 601 is electrically connected to the display screen 603, the control circuit 604, the input unit 605, the sensor 606, and the power supply 607, respectively.
The display screen 603 may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the electronic device; these graphical user interfaces may be composed of images, text, icons, video, and any combination thereof.
The control circuit 604 is electrically connected to the display screen 603 and is configured to control the display screen 603 to display information.
The input unit 605 may be used to receive entered digits, character information, or user characteristic information (for example, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 605 may include a fingerprint recognition module.
The sensor 606 is used to collect information about the electronic device itself, about the user, or about the external environment. For example, the sensor 606 may include multiple sensors such as a distance sensor, magnetic field sensor, light sensor, acceleration sensor, fingerprint sensor, Hall sensor, position sensor, gyroscope, inertial sensor, attitude sensor, barometer, and heart rate sensor.
The power supply 607 is used to supply power to the various components of the electronic device 600. In some embodiments, the power supply 607 may be logically connected to the processor 601 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
Although not shown in FIG. 8, the electronic device 600 may further include a camera, a Bluetooth module, and the like, which are not described here.
As can be seen from the above, an embodiment of this application provides an electronic device that performs the following steps:
Acquire the application information currently viewed by the user, and derive, from the application information, the matching degree between the user and the application information;
Acquire the user's emotion information, and derive, from the emotion information, the user's interest degree in the application information;
Compute a first interest matching degree from the interest degree and the matching degree;
When the first interest matching degree is greater than a preset interest matching degree, push content associated with the application information;
When the first interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
By computing the interest matching degree of the application information from the user's interest degree and the application's matching degree, content that both interests and suits the user can be recommended according to the user's interests.
An embodiment of this application further provides a storage medium storing a computer program for processing information; when the computer program runs on a computer, the computer executes the information processing method described in any of the foregoing embodiments.
For example, in some embodiments, when the computer program runs on a computer, the computer executes the following steps:
Acquire the application information currently viewed by the user, and derive, from the application information, the matching degree between the user and the application information;
Acquire the user's emotion information, and derive, from the emotion information, the user's interest degree in the application information;
Compute a first interest matching degree from the interest degree and the matching degree;
When the first interest matching degree is greater than a preset interest matching degree, push content associated with the application information;
When the first interest matching degree is not greater than the preset interest matching degree, reduce the pushing of content associated with the application information, or restrict acquisition of content associated with the application information.
It should be noted that a person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments may be completed by instructing relevant hardware through a computer program, which may be stored in a computer-readable storage medium. The storage medium may include, but is not limited to: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, and the like.
The information processing method, apparatus, storage medium, and electronic device provided by the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the descriptions of the above embodiments are only intended to help understand the method of this application and its core ideas. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the ideas of this application. In summary, the content of this specification should not be construed as limiting this application.

Claims (20)

  1. An information processing method, comprising:
    acquiring application information currently viewed by a user, and deriving, from the application information, a matching degree between the user and the application information;
    acquiring emotion information of the user, and deriving, from the emotion information, the user's interest degree in the application information;
    computing a first interest matching degree from the interest degree and the matching degree;
    when the first interest matching degree is greater than a preset interest matching degree, pushing content associated with the application information; and
    when the first interest matching degree is not greater than the preset interest matching degree, reducing the pushing of content associated with the application information, or restricting acquisition of content associated with the application information.
  2. The information processing method according to claim 1, wherein acquiring the emotion information of the user and deriving, from the emotion information, the user's interest degree in the application information comprises:
    acquiring the user's emotion information while the user is using a terminal application, the emotion information comprising facial image information and/or audio information of the user while using the terminal application;
    performing feature extraction on the facial image information and/or the audio information respectively, to obtain facial feature information and/or audio feature information;
    deriving the interest degree from the facial feature information and/or the audio feature information.
  3. The information processing method according to claim 1, wherein acquiring the emotion information of the user and deriving, from the emotion information, the user's interest degree in the application information comprises:
    acquiring the user's emotion information while the user is using a terminal application, the emotion information comprising facial image information, audio information, screen touch information, and edge pressure information;
    performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information;
    deriving the interest degree from the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
  4. The information processing method according to claim 3, wherein performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information comprises:
    obtaining face position information from the facial image information through a preset image algorithm, and performing emotion classification on the face position information through a hard clustering algorithm to obtain facial feature information;
    obtaining a spectrogram from the audio information using a first audio algorithm, and analyzing the spectrogram through a second audio algorithm to obtain audio feature information;
    modeling the screen touch information and the edge pressure information using a random forest classifier or a Bayesian classifier, to obtain screen touch feature information and edge pressure feature information.
  5. The information processing method according to claim 4, wherein obtaining the face position information from the facial image information through the preset image algorithm comprises:
    obtaining the face position information from the facial image information through a support vector machine algorithm and a bounding-box regression algorithm.
  6. The information processing method according to claim 4, wherein obtaining the face position information from the facial image information through the preset image algorithm comprises:
    obtaining the face position information from the facial image information through a convolutional neural network algorithm model.
  7. The information processing method according to claim 4, wherein the first audio algorithm is one of a fast Fourier transform algorithm and a Mel-frequency cepstral coefficient algorithm, and the second audio algorithm is one of a recurrent neural network model algorithm and a time-series analysis algorithm.
  8. The information processing method according to claim 3, wherein deriving the interest degree from the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information comprises:
    using the facial feature information as a primary interest degree;
    using at least one of the audio feature information, the screen touch feature information, and the edge pressure feature information as a secondary interest degree;
    adjusting the primary interest degree using the secondary interest degree to obtain an adjusted interest degree.
  9. The information processing method according to claim 1, wherein acquiring the application information currently viewed by the user and deriving, from the application information, the matching degree between the user and the application information comprises:
    acquiring the application information currently viewed by the user, the application information comprising application type tag information;
    when the application type tag is within a preset application type tag range, analyzing screen image information to obtain a first matching degree;
    when the application type tag is not within the preset application type tag range, setting the interest matching degree to a second interest matching degree.
  10. The information processing method according to claim 9, wherein, when the application type tag is within the preset application type tag range, analyzing the screen image information to obtain the first matching degree comprises:
    when the application type tag is within the preset application type tag range, analyzing the screen image using a convolutional neural network to obtain the first matching degree;
    when the first matching degree is less than a preset matching degree, setting the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
    when the first matching degree is not less than the preset matching degree, using the first matching degree in the computation of the interest matching degree;
    and wherein computing the interest matching degree from the interest degree and the matching degree comprises:
    multiplying the interest degree by the first matching degree to obtain the first interest matching degree.
  11. The information processing method according to claim 1, wherein acquiring the application information currently viewed by the user and deriving, from the application information, the matching degree between the user and the application information comprises:
    acquiring the application information currently viewed by the user, the application information comprising application type tag information;
    when the application type tag is within a preset application type tag range, analyzing screen image information and application audio information to obtain a second matching degree and a third matching degree;
    when the application type tag is not within the preset application type tag range, setting the interest matching degree to a second interest matching degree.
  12. The information processing method according to claim 11, wherein analyzing the screen image information and the application audio information to obtain the second matching degree and the third matching degree comprises:
    analyzing the screen image information and the application audio information respectively through a convolutional neural network, to obtain the corresponding second matching degree and third matching degree;
    when one of the second matching degree and the third matching degree is less than a preset matching degree, setting the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
    when neither the second matching degree nor the third matching degree is less than the preset matching degree, using the second matching degree and the third matching degree in the computation of the interest matching degree;
    and wherein computing the interest matching degree from the interest degree and the matching degree comprises:
    multiplying the interest degree by the second matching degree and the third matching degree to obtain the first interest matching degree.
  13. A computer-readable storage medium having a computer program stored thereon, wherein, when the program is executed by a processor, the steps of the information processing method according to any one of claims 1 to 12 are implemented.
  14. An electronic device, comprising a processor and a memory, the memory storing a computer program for processing information, the processor being connected to the memory, wherein, by invoking the computer program stored in the memory, the processor executes:
    acquiring application information currently viewed by a user, and deriving, from the application information, a matching degree between the user and the application information;
    acquiring emotion information of the user, and deriving, from the emotion information, the user's interest degree in the application information;
    computing a first interest matching degree from the interest degree and the matching degree;
    when the first interest matching degree is greater than a preset interest matching degree, pushing content associated with the application information; and
    when the first interest matching degree is not greater than the preset interest matching degree, reducing the pushing of content associated with the application information, or restricting acquisition of content associated with the application information.
  15. The electronic device according to claim 14, wherein, in acquiring the emotion information of the user and deriving, from the emotion information, the user's interest degree in the application information, the processor further executes:
    acquiring the user's emotion information while the user is using a terminal application, the emotion information comprising facial image information and/or audio information of the user while using the terminal application;
    performing feature extraction on the facial image information and/or the audio information respectively, to obtain facial feature information and/or audio feature information;
    deriving the interest degree from the facial feature information and/or the audio feature information.
  16. The electronic device according to claim 14, wherein, in acquiring the emotion information of the user and deriving, from the emotion information, the user's interest degree in the application information, the processor further executes:
    acquiring the user's emotion information while the user is using a terminal application, the emotion information comprising facial image information, audio information, screen touch information, and edge pressure information;
    performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information;
    deriving the interest degree from the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information.
  17. The electronic device according to claim 16, wherein, in performing feature extraction on the facial image information, the audio information, the screen touch information, and the edge pressure information respectively, to obtain facial feature information, audio feature information, screen touch feature information, and edge pressure feature information, the processor further executes:
    obtaining face position information from the facial image information through a preset image algorithm, and performing emotion classification on the face position information through a hard clustering algorithm to obtain facial feature information;
    obtaining a spectrogram from the audio information using a first audio algorithm, and analyzing the spectrogram through a second audio algorithm to obtain audio feature information;
    modeling the screen touch information and the edge pressure information using a random forest classifier or a Bayesian classifier, to obtain screen touch feature information and edge pressure feature information.
  18. The electronic device according to claim 16, wherein, in deriving the interest degree from the facial feature information, the audio feature information, the screen touch feature information, and the edge pressure feature information, the processor further executes:
    using the facial feature information as a primary interest degree;
    using at least one of the audio feature information, the screen touch feature information, and the edge pressure feature information as a secondary interest degree;
    adjusting the primary interest degree using the secondary interest degree to obtain an adjusted interest degree.
  19. The electronic device according to claim 14, wherein, in acquiring the application information currently viewed by the user and deriving, from the application information, the matching degree between the user and the application information, the processor further executes:
    acquiring the application information currently viewed by the user, the application information comprising application type tag information;
    when the application type tag is within a preset application type tag range, analyzing screen image information to obtain a first matching degree;
    when the application type tag is not within the preset application type tag range, setting the interest matching degree to a second interest matching degree.
  20. The electronic device according to claim 19, wherein, in analyzing the screen image information to obtain the first matching degree when the application type tag is within the preset application type tag range, the processor further executes:
    when the application type tag is within the preset application type tag range, analyzing the screen image using a convolutional neural network to obtain the first matching degree;
    when the first matching degree is less than a preset matching degree, setting the interest matching degree to a third interest matching degree, the third interest matching degree being greater than the second interest matching degree;
    when the first matching degree is not less than the preset matching degree, using the first matching degree in the computation of the interest matching degree;
    and wherein computing the interest matching degree from the interest degree and the matching degree comprises:
    multiplying the interest degree by the first matching degree to obtain the first interest matching degree.
PCT/CN2020/082465 2019-04-09 2020-03-31 Information processing method, storage medium, and electronic device WO2020207297A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910282187.1 2019-04-09
CN201910282187.1A CN111797303A (zh) 2019-04-09 2019-04-09 Information processing method, apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2020207297A1 true WO2020207297A1 (zh) 2020-10-15



Also Published As

Publication number Publication date
CN111797303A (zh) 2020-10-20

