CN115238171A - Information pushing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN115238171A
CN115238171A
Authority
CN
China
Prior art keywords
target
heart rate
emotion
data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210735771.XA
Other languages
Chinese (zh)
Inventor
李飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210735771.XA
Publication of CN115238171A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/95 — Retrieval from the web
    • G06F16/953 — Querying, e.g. by the use of web search engines
    • G06F16/9535 — Search customisation based on user profiles and personalisation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15 — Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an information pushing method, an information pushing device, computer equipment and a storage medium, wherein the method comprises the steps of acquiring target heart rate data of a target user and determining the data acquisition period of the target heart rate data; inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period; acquiring a user personality portrait of the target user, and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information; and pushing the target recommendation information to a client. The invention improves the accuracy of information push.

Description

Information pushing method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of user portrayal, in particular to an information pushing method and device, computer equipment and a storage medium.
Background
With the development of science and technology, more and more industries, such as the insurance industry, the e-commerce industry, and the service industry, have gradually developed online sales models.
In the prior art, a user portrait is constructed by combining basic information (such as age or occupation) and behavior characteristics (such as historically purchased products or historical browsing records) of the user to be recommended, and information is then pushed to the user according to that portrait. However, in some special scenarios (such as the insurance industry), the user portrait constructed in this way is not accurate enough, and the accuracy of information pushing is therefore low.
Disclosure of Invention
The embodiment of the invention provides an information pushing method and device, computer equipment and a storage medium, and aims to solve the problem in the prior art that the constructed user portrait is not accurate enough, so that the accuracy of information pushing is low.
An information push method, comprising:
acquiring target heart rate data of a target user, and determining a data acquisition period of the target heart rate data;
inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period;
acquiring a user personality portrait of the target user, and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information;
and pushing the target recommendation information to a client.
An information pushing apparatus comprising:
the data acquisition module is used for acquiring target heart rate data of a target user and determining a data acquisition cycle of the target heart rate data;
the emotion portrait determining module is used for inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period;
the recommendation information prediction module is used for acquiring the user personality portrait of the target user and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information;
and the information pushing module is used for pushing the target recommendation information to a client.
A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the information pushing method when executing the computer program.
A computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the above-described information push method.
According to the information pushing method and device, the computer equipment and the storage medium, the target heart rate data of the target user is obtained, emotion recognition is carried out through the preset emotion recognition model matched with the data acquisition period corresponding to the target heart rate data, and the target emotion label of the target user is thereby constructed. This adds a further dimension of user information for the subsequent generation of the target recommendation information, improving the accuracy and comprehensiveness of information pushing. By combining the user personality portrait with the target emotion label, the features of the target user in multiple dimensions can be accurately described, thereby improving the accuracy of information pushing.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a diagram of an application environment of an information pushing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an information pushing method according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of an information pushing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is apparent that the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without inventive effort fall within the scope of the present invention.
The information pushing method provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the information pushing method is applied to an information pushing system, which comprises the client and the server shown in fig. 1; the client and the server communicate through a network to solve the problem that the user portrait constructed in the prior art is not accurate enough, so that the accuracy of information pushing is low. The client, also called the user side, refers to a program corresponding to the server that provides local services for the user. The client may be installed on, but is not limited to, various personal computers, laptops, smartphones, tablets, and portable wearable devices. The server may be an independent server, a server cluster composed of multiple servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
In an embodiment, as shown in fig. 2, an information pushing method is provided, which is described by taking the server in fig. 1 as an example, and includes the following steps:
s10: target heart rate data of a target user is obtained, and a data acquisition period of the target heart rate data is determined.
It can be understood that the target heart rate data is the heartbeat data of the target user acquired through a heart rate acquisition module in a device such as a smart band. That is, the target heart rate data includes the heartbeat data of the target user at multiple different moments. The data acquisition period represents the time span over which the target heart rate data was acquired. Illustratively, when the target heart rate data is the heartbeat data for each day of one week of the target user, the data acquisition period is one week. When the target heart rate data is the heartbeat data for each day of one month of the target user, the data acquisition period is one month.
Specifically, the target heart rate data of the target user may be acquired through, for example, a smart band or other heartbeat collecting device, and then the target heart rate data is sent to the server through a client associated with the smart band or other heartbeat collecting device. After the server acquires the target heart rate data, the data acquisition period can be determined according to the acquisition time of the target heart rate data.
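Step S10 can be sketched as follows; this is a minimal illustration in Python assuming the heart rate samples arrive with timestamps — the function name and data shape are hypothetical, not taken from the patent:

```python
from datetime import datetime, timedelta

def acquisition_period(timestamps):
    # Infer the data acquisition period as the span between the first
    # and last heart rate sample (an illustrative assumption; the patent
    # does not specify how the period is computed).
    ts = sorted(timestamps)
    return ts[-1] - ts[0]

# One sample per day for eight days -> a data acquisition period of one week.
daily = [datetime(2022, 6, 1) + timedelta(days=d) for d in range(8)]
assert acquisition_period(daily) == timedelta(days=7)
```

The server-side determination described above would then compare this span against the periods (one day, one week, one month) used elsewhere in the description.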
S20: inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period.
The preset emotion recognition model is used for determining the emotion state of the target user according to the target heart rate data, and then constructing the target emotion label of the target user. The target emotion label characterizes the emotional state of the target user. In this embodiment, the preset emotion recognition model is constructed based on a recurrent neural network. Therefore, the preset emotion recognition model can learn the characteristic continuity between the heartbeat data adjacent in time, and the accuracy of emotion recognition is improved. Further, the preset emotion recognition model of the present embodiment includes a long-term emotion recognition model and a short-term emotion recognition model. The long-term emotion recognition model is used for emotion recognition based on target heart rate data with a long period (e.g., one or two months). The short term emotion recognition model is used for emotion recognition based on target heart rate data for a short period (e.g., one or two days). Thus, either the long-term emotion recognition model or the short-term emotion recognition model may be selected according to the data acquisition period of the target heart rate data.
Specifically, after target heart rate data of a target user are obtained and a data acquisition cycle of the target heart rate data is determined, a preset emotion recognition model corresponding to the data acquisition cycle can be constructed. And inputting the target heart rate data into a preset emotion recognition model, and further performing data processing on the target heart rate data through the preset emotion recognition model, namely determining a target emotion state corresponding to the target user according to the continuity between each sub-data in the target heart rate data through the preset emotion recognition model. And constructing a target emotion label of the target user according to the target emotion state and the target heart rate data.
S30: and acquiring the user personality portrait of the target user, and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information.
Understandably, the user personality representation is the personalized information that characterizes the target user. Illustratively, the user personality portrait may be constructed according to basic information (such as gender, age or occupation) and behavior information (such as historical purchasing information or historical browsing information) of the target user. The preset recommendation model can be constructed by using a neural network. The preset recommendation model is used for recommending the target user according to the user individual image and the target emotion label to obtain target recommendation information. The target recommendation information may be diet recommendation information, exercise recommendation information, or insurance recommendation information, for example.
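As a sketch of what such a portrait might look like in practice — the patent does not specify a schema, so all field names here are illustrative assumptions:

```python
# Hypothetical representation: a user personality portrait as a flat
# feature dictionary built from basic information and behavior information.
def build_user_personality_portrait(basic_info, behavior_info):
    portrait = {f"basic_{k}": v for k, v in basic_info.items()}
    portrait.update({f"behavior_{k}": v for k, v in behavior_info.items()})
    return portrait

portrait = build_user_personality_portrait(
    {"gender": "F", "age": 67, "occupation": "retired"},
    {"purchased": ["health_insurance"], "browsed": ["annuity_products"]},
)
assert portrait["basic_age"] == 67
```

A production system would more likely encode these fields as a numeric feature vector for the recommendation model, but the dictionary form shows the two information sources being combined.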
Specifically, after the target emotion label corresponding to the target user is determined according to the target heart rate data through the preset emotion recognition model, the user personality portrait of the target user is obtained, and the user personality portrait and the target emotion label are input into the preset recommendation model. The preset recommendation model combines the user personality portrait and the target emotion label to obtain the full set of features of the target user, and then derives the target recommendation information from the features corresponding to the user personality portrait and the features corresponding to the target emotion label. For example, if the user personality portrait describes the target user as elderly, and the target emotion label describes the target user as being in an excited state with a fast heartbeat, the target recommendation information may include a light diet list, relaxing music, or insurance products suitable for the elderly.
Further, a recommendation information mapping table may be stored in the preset recommendation model, where the recommendation information mapping table includes a plurality of sample recommendation groups. A sample recommendation group comprises sample personality characteristics, sample emotion characteristics, and sample recommendation information, and each sample recommendation group is associated with a recommendation value. After the target personality characteristics corresponding to the user personality portrait and the target emotion characteristics corresponding to the target emotion label are determined, the target personality characteristics and the target emotion characteristics can be matched against all the sample recommendation groups, querying for sample recommendation groups that simultaneously contain sample personality characteristics matching the target personality characteristics and sample emotion characteristics matching the target emotion characteristics. If only one such sample recommendation group exists, the sample recommendation information it contains can be used as the target recommendation information. If multiple sample recommendation groups are found, the sample recommendation information contained in the sample recommendation group with the highest recommendation value can be selected as the target recommendation information.
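The table lookup described above can be sketched as follows, assuming a simple list-of-dicts representation of the recommendation information mapping table; the field names, example entries, and recommendation values are invented for illustration:

```python
# Hypothetical recommendation information mapping table: each entry is a
# sample recommendation group with an associated recommendation value.
RECOMMENDATION_TABLE = [
    {"personality": "elderly", "emotion": "excited",
     "info": "light diet list", "score": 0.9},
    {"personality": "elderly", "emotion": "excited",
     "info": "relaxing music", "score": 0.7},
    {"personality": "young", "emotion": "calm",
     "info": "sports products", "score": 0.8},
]

def recommend(personality, emotion):
    # Query the groups that match both the target personality
    # characteristics and the target emotion characteristics.
    matches = [g for g in RECOMMENDATION_TABLE
               if g["personality"] == personality and g["emotion"] == emotion]
    if not matches:
        return None
    # When several groups match, pick the one with the highest
    # recommendation value, as the description specifies.
    return max(matches, key=lambda g: g["score"])["info"]

assert recommend("elderly", "excited") == "light diet list"
```

A real deployment would match on feature similarity rather than exact equality, but the highest-recommendation-value tie-break is the behavior the text describes.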
S40: and pushing the target recommendation information to a client.
Specifically, after the user personality portrait and the target emotion label are input into the preset recommendation model to obtain the target recommendation information, the target recommendation information is fed back to a client to complete the recommendation to the target user. The client may be a smartphone or laptop of the target user, or a smartphone or laptop of a third party (e.g., a relative of the target user or a third-party platform).
In this embodiment, the target emotion label of the target user is constructed by acquiring the target heart rate data of the target user and performing emotion recognition through the preset emotion recognition model matched with the data acquisition period corresponding to the target heart rate data. This adds a further dimension of user information for the subsequent generation of the target recommendation information, improving the accuracy and comprehensiveness of information pushing. By combining the user personality portrait with the target emotion label, the features of the target user in multiple dimensions can be accurately described, thereby improving the accuracy of information pushing.
In an embodiment, before step S20, that is, before the target heart rate data is input to the preset emotion recognition model, the method further includes:
(1) And determining the acquisition frequency type of the target heart rate data according to the data acquisition period, and sending the acquisition frequency type to a central server.
As will be appreciated, the acquisition frequency type characterizes the frequency at which the target heart rate data was acquired. The acquisition frequency types defined in this embodiment include a long-term acquisition type and a short-term acquisition type. For example, if the target heart rate data is the heartbeat data for each day of one month of the target user, the data acquisition period is one month and the acquisition frequency type is the long-term acquisition type. If the target heart rate data is the heartbeat data for each hour of one day of the target user, the data acquisition period is one day and the acquisition frequency type is the short-term acquisition type.
Specifically, after the data acquisition period of the target heart rate data is determined, since the target heart rate data includes the heartbeat data of the target user at multiple different moments, the frequency at which the heartbeat data was acquired can be determined from the data acquisition period and the total amount of heartbeat data contained in the target heart rate data. The acquisition frequency type of the target heart rate data is then determined from that frequency.
(2) And receiving target model parameters corresponding to the acquisition frequency type and determined by the central server.
It can be understood that the information pushing method in this embodiment may be applied in a federated learning scenario, in which the server of this embodiment is one of the local servers. The central server is a server that distributes model parameters to different local servers and receives updated model parameters fed back by the local servers. In the central server, a long-term emotion recognition model (with long-term model parameters) and a short-term emotion recognition model (with short-term model parameters) are trained in advance on multiple sets of sample heart rate data. After the acquisition frequency type of the target heart rate data is determined, the corresponding preset emotion recognition model can be identified as either the long-term or the short-term emotion recognition model according to that type. Thus, when the acquisition frequency type indicates the long-term emotion recognition model, the target model parameters are the long-term model parameters; when it indicates the short-term emotion recognition model, the target model parameters are the short-term model parameters.
(3) And constructing the preset emotion recognition model based on the target model parameters.
Specifically, after the central server determines the target model parameters corresponding to the acquisition frequency type, it feeds the target model parameters back to the local server. After receiving the target model parameters, the local server constructs the preset emotion recognition model based on them. That is, when the target model parameters are the short-term model parameters, the preset emotion recognition model is the short-term emotion recognition model; when the target model parameters are the long-term model parameters, the preset emotion recognition model is the long-term emotion recognition model.
In this embodiment, long-term emotion recognition models and short-term emotion recognition models of different dimensions are trained in advance according to sample heart rate data through a central server. And then after the acquisition frequency type of the target heart rate data is determined, the central server can feed back target model parameters corresponding to the acquisition frequency type, and a preset emotion recognition model is constructed according to the target model parameters. Therefore, emotion recognition can be carried out through the preset emotion recognition model matched with the target heart rate data, and the accuracy of emotion recognition is improved. The emotion recognition model is trained through the central server, so that the training pressure of the local server can be reduced.
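Steps (1)–(3) of this embodiment might look like the following sketch, with the central server mocked as an in-process lookup and a one-week threshold assumed for distinguishing the two acquisition frequency types; neither the threshold nor any of the names comes from the patent:

```python
from datetime import timedelta

# Hypothetical threshold: acquisition periods of a week or longer are
# treated as the long-term collection type, shorter ones as short-term.
def acquisition_frequency_type(data_acquisition_period):
    if data_acquisition_period >= timedelta(days=7):
        return "long_term"
    return "short_term"

# Mocked central server. In a real federated deployment this lookup would
# be an RPC or HTTP call returning the trained parameters for the chosen
# model; the parameter payloads here are placeholders.
CENTRAL_SERVER_PARAMS = {
    "long_term": {"weights": "long_term_model_weights"},
    "short_term": {"weights": "short_term_model_weights"},
}

def build_preset_emotion_model(frequency_type):
    params = CENTRAL_SERVER_PARAMS[frequency_type]   # step (2): receive params
    return {"type": frequency_type, "params": params}  # step (3), stubbed

model = build_preset_emotion_model(
    acquisition_frequency_type(timedelta(days=30)))
assert model["type"] == "long_term"
```

The stubbed `build_preset_emotion_model` stands in for loading the received parameters into the recurrent network described in step S20.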
In an embodiment, before the inputting the target heart rate data into the preset emotion recognition model, the method further includes:
(1) Acquiring a sample data set; the sample data set comprises at least one set of sample heart rate data; a set of the sample heart rate data corresponds to a sample emotion label and a sample frequency type.
It is to be understood that the sample data set comprises at least one set of sample heart rate data. The sample heart rate data is heart rate data of different users acquired through a smart band or other heart rate acquisition equipment. The sample emotion label is a preset emotional state. The sample frequency type characterizes the acquisition frequency of the sample heart rate data and may be the long-term acquisition type or the short-term acquisition type. For example, the emotional state of a user is collected in advance and a sample emotion label is generated from it; the user's historical heartbeat data is collected as the sample heart rate data; and the sample frequency type is then determined from the collection period and collection frequency of that sample heart rate data. Sample emotion labels, sample heart rate data, and sample frequency types may be associated in this manner.
(2) And inputting the sample heart rate data into an initial identification model containing initial parameters to obtain a predicted emotion label corresponding to the sample heart rate data.
It is understood that the initial recognition model is a model constructed based on a recurrent neural network. The initial parameters are model parameters when the initial recognition model is built. Specifically, after the sample data set is acquired, the sample heart rate data is input into the initial recognition model. And identifying the characteristic relation between the heartbeat data at continuous moments in the sample heart rate data through the initial identification model, so as to determine the emotion state corresponding to the sample heart rate data, namely the predicted emotion label.
(3) And determining an emotion recognition loss value corresponding to the initial recognition model according to the sample emotion label and the predicted emotion label.
Specifically, after the predicted emotion label corresponding to the sample heart rate data is determined by the initial recognition model, the emotion recognition loss value between the sample emotion label and the predicted emotion label may be determined by a cross entropy loss function or the like. The emotion recognition loss value represents the difference between the sample emotion label and the predicted emotion label, and then the initial recognition model can be updated through the emotion recognition loss value, so that the emotion recognition accuracy of the initial recognition model is improved.
(4) And carrying out iterative updating on the initial parameters of the initial identification model according to the emotion identification loss values corresponding to the sample heart rate data with the same sample frequency type to obtain a preset emotion identification model corresponding to the sample frequency type.
As indicated above, the preset emotion recognition models defined in the present invention include a long-term emotion recognition model and a short-term emotion recognition model, where the long-term emotion recognition model corresponds to the long-term acquisition type and the short-term emotion recognition model corresponds to the short-term acquisition type. Each set of sample heart rate data corresponds to a sample frequency type, which in this embodiment is either the long-term acquisition type or the short-term acquisition type.
Further, in the process of training the initial recognition model, the initial parameters of the initial recognition model can be iteratively updated through the emotion recognition loss values corresponding to the sample heart rate data with the same sample frequency type, so that the preset emotion recognition model corresponding to the sample frequency type is obtained. For example, the initial parameters of the initial identification model are iteratively updated through the emotion recognition loss values corresponding to the sample heart rate data with the long-term collection type, and the obtained preset emotion recognition model is the long-term emotion recognition model. And iteratively updating the initial parameters of the initial recognition model through the emotion recognition loss value corresponding to the sample heart rate data with the short-term collection type, and then obtaining a preset emotion recognition model which is the short-term emotion recognition model. That is, two initial recognition models may be set in this embodiment. One of the initial recognition models is trained through sample heart rate data with a long-term collection type, and then a long-term emotion recognition model is obtained. And the other initial recognition model is trained through sample heart rate data with a short-term collection type, so that a short-term emotion recognition model is obtained.
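A minimal sketch of this grouping-and-loss step follows, using a hand-rolled cross-entropy over hypothetical emotion-class probabilities; a real trainer would backpropagate these losses through each group's recurrent network rather than merely average them:

```python
import math
from collections import defaultdict

# Cross-entropy between a one-hot sample emotion label (given by its class
# index) and the model's predicted probability distribution.
def cross_entropy(true_index, predicted_probs, eps=1e-12):
    return -math.log(max(predicted_probs[true_index], eps))

def losses_per_frequency_type(sample_set):
    # Group emotion recognition loss values by sample frequency type, so
    # each group can update its own initial recognition model.
    groups = defaultdict(list)
    for s in sample_set:
        groups[s["frequency_type"]].append(
            cross_entropy(s["label"], s["predicted"]))
    return {ftype: sum(vals) / len(vals) for ftype, vals in groups.items()}

losses = losses_per_frequency_type([
    {"frequency_type": "long_term",  "label": 0, "predicted": [0.9, 0.1]},
    {"frequency_type": "short_term", "label": 1, "predicted": [0.3, 0.7]},
])
assert losses["long_term"] < losses["short_term"]
```

The two dictionary keys correspond to the two initial recognition models the embodiment trains: one on long-term-type samples, one on short-term-type samples.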
In one embodiment, the inputting the sample heart rate data into an initial identification model including initial parameters to obtain a predicted emotion label corresponding to the sample heart rate data includes:
(1) Carrying out data division on the sample heart rate data to obtain heart rate subdata; one of the heart rate sub-data is associated with one heart rate acquisition frame.
It can be understood that the sample heart rate data is generated by collecting the heartbeat data of a user at different moments; that is, the sample heart rate data includes heartbeat data from multiple different moments, i.e., heart rate sub-data for different moments. The heart rate acquisition frame is the time frame in which an item of heart rate sub-data was acquired. Illustratively, assume the sample heart rate data is the heartbeat data collected for each hour of one day of a user. Each item of heart rate sub-data is then the heartbeat data collected at a one-hour interval, i.e., the sample heart rate data contains 24 items of heart rate sub-data.
(2) Sequentially inserting the heart rate sub-data into a preset selection sequence in the order of the heart rate acquisition frames, and recording the first-ordered heart rate sub-data in the preset selection sequence as the first sub-data.
Specifically, after the heart rate sub-data is obtained by dividing the sample heart rate data, the heart rate sub-data can be sequentially inserted into the preset selection sequence in the order of the heart rate acquisition frames. Illustratively, assume the sample heart rate data is the heartbeat data collected over one day of a user, with one item of heart rate sub-data collected every hour. The heart rate acquisition frame corresponding to the first item of heart rate sub-data is then 00:00, the heart rate acquisition frame corresponding to the last item is 24:00, and the acquisition interval between items is one hour. Therefore, the first-ordered heart rate sub-data in the preset selection sequence is the heart rate sub-data corresponding to the 00:00 acquisition frame, and the last-ordered heart rate sub-data is the heart rate sub-data corresponding to the 24:00 acquisition frame.
(3) And inputting the first sub data into the initial identification model to obtain a first hidden layer state corresponding to the first sub data.
It is to be understood that, as stated in the above description, the initial recognition model is constructed based on a recurrent neural network. Therefore, the first sub-data is input into the initial recognition model to obtain the first hidden layer state corresponding to the first sub-data.
(4) Recording second ordered heart rate subdata in the preset selection sequence as second subdata, inputting the second subdata into the initial identification model, and determining a second hidden layer state according to the second subdata and the first hidden layer state.
Illustratively, assume that the sample heart rate data is the user's heartbeat data acquired over one day, with one piece of heart rate sub-data acquired each hour. The heart rate sub-data ordered first in the preset selection sequence is then the sub-data corresponding to the zero-point heart rate acquisition frame, and the heart rate sub-data ordered second is the sub-data corresponding to the one o'clock (1:00 a.m.) heart rate acquisition frame.
Furthermore, because the initial recognition model is constructed based on a recurrent neural network, of two adjacent neural network units the later unit is influenced by the earlier unit: the earlier unit passes its hidden layer state to the adjacent later unit, so that the later unit can learn the features captured by the earlier unit, which improves model training efficiency and accuracy.
(5) And continuously and sequentially determining the data hidden layer state corresponding to the heart rate subdata except the first subdata and the second subdata in the preset selection sequence, and recording the data hidden layer state corresponding to the heart rate subdata sequenced at the last in the preset selection sequence as the end-point hidden layer state.
It will be appreciated that the number of neural network units in the initial recognition model is the same as the number of heart rate sub-data. Of two adjacent neural network units, the earlier unit passes its hidden layer state to the later unit, and the later unit then computes based on the hidden layer state of the earlier unit and the heart rate sub-data input to it. For example, the hidden layer state corresponding to the third sub-data (the heart rate sub-data ordered third in the preset selection sequence) is determined based on the third sub-data and the second hidden layer state.
Further, the data hidden layer state corresponding to the last-ordered heart rate sub-data in the preset selection sequence (the end-point sub-data) is determined by the end-point sub-data together with the data hidden layer state corresponding to the previous sub-data (the heart rate sub-data ordered immediately before the end-point sub-data in the preset selection sequence). In this way, the sequential relations among the features of all heart rate sub-data in the preset selection sequence are associated, which improves emotion recognition accuracy.
(6) And determining a predicted emotion label corresponding to the sample heart rate data according to the state of the endpoint hidden layer.
Specifically, after the data hidden layer state corresponding to the last-ordered heart rate sub-data in the preset selection sequence is recorded as the end-point hidden layer state, prediction can be performed on the sample heart rate data according to the end-point hidden layer state to obtain the predicted emotion label.
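Steps (3) through (6) amount to a standard recurrent forward pass in which the final (end-point) hidden layer state drives the prediction. A minimal sketch follows, assuming a toy single-unit recurrent cell with fixed weights and a two-label output rule; all names, weights, and the threshold are hypothetical, and a trained model would use learned parameters and an output layer instead.

```python
import math

def rnn_step(x, h_prev, w_x=0.5, w_h=0.9, b=0.0):
    # One recurrent unit: the later unit receives the previous unit's
    # hidden layer state h_prev and the current heart rate sub-datum x.
    return math.tanh(w_x * x + w_h * h_prev + b)

def predict_emotion(selection_sequence, labels=("calm", "excited")):
    h = 0.0  # hidden state before the first sub-datum
    for x in selection_sequence:  # steps (3)-(5): propagate hidden states
        h = rnn_step(x, h)
    endpoint_hidden_state = h     # state after the last-ordered sub-datum
    # Step (6): map the end-point hidden layer state to a predicted
    # emotion label (toy threshold in place of a learned output layer).
    return labels[0] if endpoint_hidden_state < 0.5 else labels[1]

# Hypothetical normalized heart rate sub-data across the day:
label_low = predict_emotion([0.1, 0.1, 0.1])
label_high = predict_emotion([1.0, 1.0, 1.0])
```

Because each step consumes the previous hidden state, the end-point state aggregates features from every sub-datum in the preset selection sequence, as described above.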
In an embodiment, the obtaining the user personality portrait of the target user includes:
(1) And acquiring a user identifier corresponding to the target user, and acquiring user basic information corresponding to the user identifier from a preset third-party platform.
It is understood that the user identifier is a unique identifier of the target user, which may be generated from the target user's identification number or bank card number. The preset third-party platform may be a social platform or a payment platform used by the target user. The user basic information may be the target user's age, gender, occupation, medical information, and the like.
(2) And inputting the user basic information into a preset portrait generating model, and performing information processing on the user basic information through the preset portrait generating model to obtain a user portrait value.
It is understood that the preset portrait generation model may be a model constructed based on a CNN (Convolutional Neural Network), LightGBM (Light Gradient Boosting Machine), XGBoost, or another such machine learning model.
Specifically, after the user basic information corresponding to the user identifier is obtained from the preset third-party platform, the user basic information is input into the preset portrait generation model. The preset portrait generation model classifies the target user according to the user basic information so as to determine the probabilities that the target user belongs to different categories of people, yielding the user portrait value. The user portrait value represents the probabilities that the target user belongs to different categories of people.
(3) Historical user representations are obtained and historical representation thresholds are determined from the historical user representations.
As can be appreciated, the historical user portraits are portraits of different categories of people, which may be drawn in advance for different categories of users. The historical portrait threshold is the average of the portrait values of people in the same category.
Specifically, after the historical user portraits are obtained, crowd clustering can be performed on them, grouping historical user portraits belonging to the same category of people into one group. The portrait values of all historical user portraits in the same group are then averaged to obtain the historical portrait threshold corresponding to each group.
(4) And determining the user personality portrait according to the user portrait value and the historical portrait threshold value.
Specifically, after the historical portrait threshold is determined from the historical user portraits, the user portrait value may be compared with the historical portrait threshold. If the user portrait value exceeds a historical portrait threshold, the user personality portrait can be drawn according to the historical user portrait corresponding to that threshold, and the target user is determined to belong to the crowd category of that historical user portrait.
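The portrait-threshold logic of steps (2) through (4) can be sketched as below, assuming the user portrait value is a per-category probability and the historical portrait threshold is a per-category mean; category names and values are illustrative only.

```python
def historical_thresholds(historical_portraits):
    # Group historical user portraits by crowd category and take the
    # mean portrait value of each group as that group's threshold.
    groups = {}
    for category, value in historical_portraits:
        groups.setdefault(category, []).append(value)
    return {c: sum(v) / len(v) for c, v in groups.items()}

def determine_personality_portrait(user_portrait_values, thresholds):
    # Compare the user's portrait value for each category against the
    # historical threshold; the user is assigned the categories exceeded.
    return [c for c, p in user_portrait_values.items()
            if p > thresholds.get(c, 1.0)]

# Hypothetical historical portraits and a target user's portrait values:
thresholds = historical_thresholds(
    [("fitness", 0.6), ("fitness", 0.8), ("office", 0.4)])
personality = determine_personality_portrait(
    {"fitness": 0.9, "office": 0.3}, thresholds)
```

Here the target user exceeds only the "fitness" threshold, so the user personality portrait is drawn from that crowd category's historical portrait.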
In an embodiment, after the pushing the target recommendation information to the target user, the method further includes:
and receiving recommendation evaluation information fed back by the target user aiming at the target recommendation information.
As can be appreciated, after the target recommendation information is pushed to the target user, the target user performs evaluation feedback on the target recommendation information to obtain the recommendation evaluation information. The recommendation evaluation information may include the target user's evaluation of the target recommendation information, covering whether the target user agrees with the target recommendation information as well as an emotional evaluation proposed by the target user for the target recommendation information.
And determining a feedback emotion label of the target user according to the recommendation evaluation information, and determining a first label distance between the feedback emotion label and the target emotion label.
Specifically, after the recommendation evaluation information fed back by the target user for the target recommendation information is received, the feedback emotion label of the target user is determined according to the recommendation evaluation information. If the recommendation evaluation information agrees with the target recommendation information, it can be determined that the feedback emotion label is the same as the target emotion label. If the recommendation evaluation information does not agree with the target recommendation information, the target user's emotion can be inferred from the emotional evaluation in the recommendation evaluation information, and the feedback emotion label generated accordingly. The first label distance between the feedback emotion label and the target emotion label is then determined by a Euclidean distance method or the like; the first label distance characterizes the degree of feature difference between the feedback emotion label and the target emotion label.
Acquiring a central emotion label; and the central emotion label is obtained by performing emotion recognition on the target heart rate data through a central emotion recognition model in a central server. The central emotion recognition model corresponds to the data acquisition period.
It is understood that the execution subject of this embodiment may be a local server in a federated learning scenario. The central server is a server that distributes model parameters to different local servers or receives updated model parameters fed back by the local servers. Although the preset emotion recognition model is constructed based on the target model parameters fed back by the central server, it may differ from the central emotion recognition model (the long-term emotion recognition model or the short-term emotion recognition model) in the central server, for example when the preset emotion recognition model also includes parameters other than the target model parameters.
Further, the central server may select a central emotion recognition model (a long-term emotion recognition model or a short-term emotion recognition model) corresponding to the data acquisition cycle of the target heart rate data to perform emotion recognition on the target heart rate data, so as to obtain a central emotion tag. The central server feeds back the central emotion label to the server.
And determining a second tag distance between the central emotion tag and the target emotion tag, and determining a feedback loss value according to the first tag distance and the second tag distance.
Specifically, after the central emotion label is acquired, the second label distance between the central emotion label and the target emotion label can be determined by a Euclidean distance method or the like. The feedback loss value is then determined through a cross-entropy loss function according to the first label distance and the second label distance.
And adjusting the model parameters of the preset emotion recognition model according to the feedback loss value to obtain an adjusted emotion recognition model.
Specifically, after the feedback loss value is determined according to the first tag distance and the second tag distance, the model parameters of the preset emotion recognition model can be adjusted according to the feedback loss value, and the adjusted emotion recognition model is obtained. Therefore, the feedback emotion label corresponding to the recommendation evaluation information fed back by the user and the center emotion label fed back by the center server can be used for adjusting the model parameters of the preset emotion recognition model. Therefore, the emotion recognition capability of the adjusted emotion recognition model is close to that of the central emotion recognition model of the central server, the emotion recognition capability of the adjusted emotion recognition model is more in line with the requirements of target users, and the recommendation accuracy and the user satisfaction degree are improved.
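A minimal sketch of the two label distances and their combination into a feedback loss value follows, assuming emotion labels are represented as feature vectors. The patent states the loss is computed via a cross-entropy loss function over the two distances without specifying its exact form, so a simple weighted sum stands in here; all names and the weighting are hypothetical.

```python
import math

def euclidean(a, b):
    # Distance between two emotion labels represented as feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def feedback_loss(target_label, feedback_label, central_label, alpha=0.5):
    # First label distance: feedback label vs. target label (user feedback).
    d1 = euclidean(feedback_label, target_label)
    # Second label distance: central label vs. target label (central server).
    d2 = euclidean(central_label, target_label)
    # Illustrative combination of the two distances into one loss value;
    # stands in for the unspecified cross-entropy formulation.
    return alpha * d1 + (1 - alpha) * d2

# Hypothetical label vectors: the user's feedback disagrees with the
# target label while the central server's label matches it exactly.
loss = feedback_loss((0.0, 0.0), (3.0, 4.0), (0.0, 0.0))
```

A larger loss pushes the preset emotion recognition model's parameters further during adjustment, pulling its predictions toward both the user's feedback and the central model.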
In an embodiment, after the adjusting the model parameter of the preset emotion recognition model according to the feedback loss value to obtain an adjusted emotion recognition model, the method further includes:
and obtaining the adjustment model parameters of the adjustment emotion recognition model.
It can be understood that, in the above description, it is indicated that the model parameter of the preset emotion recognition model is adjusted according to the feedback loss value to obtain an adjusted emotion recognition model, that is, the adjusted model parameter is a parameter obtained by adjusting the model parameter of the preset emotion recognition model according to the feedback loss value.
And sending the adjustment model parameters to the central server so that the central server performs parameter adjustment on the central emotion recognition model according to the adjustment model parameters to obtain a new central emotion recognition model.
Specifically, in the above description, it is indicated that the information pushing method of the present invention may be applied in a federal learning scenario, and then after the local server obtains the adjusted emotion recognition model, the adjusted model parameters of the adjusted emotion recognition model may be sent to the central server. And the central server can perform parameter adjustment on the central emotion recognition model according to the adjustment model parameters fed back by different servers to obtain a new central emotion recognition model. Therefore, in the next information pushing process, a new model parameter of the central emotion recognition model can be requested from the central server, and emotion recognition is carried out on the model constructed according to the new model parameter of the central emotion recognition model, so that the emotion recognition accuracy is improved.
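The central server's aggregation of the adjusted model parameters fed back by different local servers can be sketched as a FedAvg-style element-wise mean; this aggregation rule is an assumption, as the patent does not specify how the central server combines the parameters.

```python
def federated_average(local_parameter_sets):
    # Central server aggregates adjusted model parameters fed back by
    # different local servers into the new central emotion recognition
    # model parameters (simple element-wise mean, FedAvg-style).
    n = len(local_parameter_sets)
    length = len(local_parameter_sets[0])
    return [sum(params[i] for params in local_parameter_sets) / n
            for i in range(length)]

# Hypothetical adjusted parameters from two local servers:
new_central_params = federated_average([[1.0, 2.0], [3.0, 4.0]])
```

In the next information pushing round, each local server would request these new central parameters and rebuild its preset emotion recognition model from them, as described above.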
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, an information pushing apparatus is provided, where the information pushing apparatus corresponds to the information pushing methods in the foregoing embodiments one to one. As shown in fig. 3, the information pushing apparatus includes a data acquisition module 10, an emotion figure determination module 20, a recommendation information prediction module 30, and an information pushing module 40. The functional modules are explained in detail as follows:
the data acquisition module 10 is used for acquiring target heart rate data of a target user and determining a data acquisition cycle of the target heart rate data;
an emotion portrait determination module 20, configured to input the target heart rate data into a preset emotion recognition model, so as to determine, according to the target heart rate data, a target emotion tag corresponding to the target user through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period;
the recommendation information prediction module 30 is configured to obtain a user personality portrait of the target user, and input the user personality portrait and the target emotion tag into a preset recommendation model to obtain target recommendation information;
and the information pushing module 40 is used for pushing the target recommendation information to the client.
Preferably, the information pushing device further comprises:
the frequency type determining module is used for determining the acquisition frequency type of the target heart rate data according to the data acquisition period and sending the acquisition frequency type to a central server;
the parameter acquisition module is used for receiving the target model parameters which are determined by the central server and correspond to the acquisition frequency types;
and the model construction module is used for constructing the preset emotion recognition model based on the target model parameters.
Preferably, the information pushing device further comprises:
the sample data acquisition module is used for acquiring a sample data set; the sample data set comprises at least one set of sample heart rate data; a group of the sample heart rate data corresponds to a sample emotion label and a sample frequency type;
the emotion label prediction module is used for inputting the sample heart rate data into an initial identification model containing initial parameters to obtain a predicted emotion label corresponding to the sample heart rate data;
a loss value determining module, configured to determine, according to the sample emotion label and the predicted emotion label, an emotion recognition loss value corresponding to the initial recognition model;
and the model training module is used for carrying out iterative updating on the initial parameters of the initial recognition model according to the emotion recognition loss values corresponding to the sample heart rate data with the same sample frequency type to obtain a preset emotion recognition model corresponding to the sample frequency type.
Preferably, the emotion label prediction module includes:
the data dividing unit is used for carrying out data division on the sample heart rate data to obtain heart rate sub-data; associating one heart rate sub-data with one heart rate acquisition frame;
the first data selecting unit is used for sequentially inserting the heart rate sub-data into a preset selection sequence according to the order of the heart rate acquisition frames, and recording the heart rate sub-data ordered first in the preset selection sequence as the first sub-data;
a first hidden layer determining unit, configured to input the first sub-data into the initial identification model, so as to obtain a first hidden layer state corresponding to the first sub-data;
a second hidden layer determining unit, configured to record the second-ordered heart rate sub-data in the preset selection sequence as second sub-data, and input the second sub-data into the initial identification model, so as to determine a second hidden layer state according to the second sub-data and the first hidden layer state;
the end-point hidden layer determining unit is used for continuously and sequentially determining the data hidden layer states corresponding to the heart rate sub-data except the first sub-data and the second sub-data in the preset selection sequence according to the second hidden layer state, and recording the data hidden layer state corresponding to the heart rate sub-data sequenced at the last in the preset selection sequence as the end-point hidden layer state;
and the emotion label prediction unit is used for determining a predicted emotion label corresponding to the sample heart rate data according to the state of the endpoint hidden layer.
Preferably, the recommendation information prediction module 30 includes:
the identification acquisition unit is used for acquiring a user identification corresponding to the target user and acquiring user basic information corresponding to the user identification from a preset third party platform;
the information processing unit is used for inputting the user basic information into a preset portrait generation model so as to perform information processing on the user basic information through the preset portrait generation model to obtain a user portrait value;
a threshold determination unit for obtaining a historical user representation and determining a historical representation threshold from the historical user representation;
a portrait determination unit to determine the user personality portrait based on the user portrait value and the historical portrait threshold.
Preferably, the information pushing device further comprises:
the information receiving module is used for receiving recommendation evaluation information fed back by the target user aiming at the target recommendation information;
the first tag distance determining module is used for determining a feedback emotion tag of the target user according to the recommendation evaluation information and determining a first tag distance between the feedback emotion tag and the target emotion tag;
the tag acquisition module is used for acquiring a central emotion tag; the central emotion tag is obtained by performing emotion recognition on the target heart rate data through a central emotion recognition model in a central server; the central emotion recognition model corresponds to the data acquisition period;
a second tag distance determination module, configured to determine a second tag distance between the center emotion tag and the target emotion tag, and determine a feedback loss value according to the first tag distance and the second tag distance;
and the parameter adjusting module is used for adjusting the model parameters of the preset emotion recognition model according to the feedback loss value to obtain an adjusted emotion recognition model.
Preferably, the information pushing device further comprises:
the parameter acquisition module is used for acquiring adjustment model parameters of the adjustment emotion recognition model;
and the parameter sending module is used for sending the adjustment model parameters to the central server so as to enable the central server to carry out parameter adjustment on the central emotion recognition model according to the adjustment model parameters to obtain a new central emotion recognition model.
For specific limitations of the information pushing apparatus, reference may be made to the above limitations of the information pushing method, which is not described herein again. All or part of the modules in the information pushing device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data used in the information push method in the above embodiment. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an information push method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the information push method in the above embodiments is implemented.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the information pushing method in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing related hardware, and the computer program may be stored in a non-volatile computer-readable storage medium; when executed, it may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and simplicity of description, the foregoing functional units and modules are merely illustrated in terms of division, and in practical applications, the foregoing functional allocation may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above described functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An information pushing method, comprising:
acquiring target heart rate data of a target user, and determining a data acquisition period of the target heart rate data;
inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period;
acquiring a user personality portrait of the target user, and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information;
and pushing the target recommendation information to a client.
2. The information pushing method according to claim 1, wherein before inputting the target heart rate data into a preset emotion recognition model, the information pushing method further comprises:
determining the acquisition frequency type of the target heart rate data according to the data acquisition period, and sending the acquisition frequency type to a central server;
receiving target model parameters corresponding to the acquisition frequency type and determined by the central server;
and constructing the preset emotion recognition model based on the target model parameters.
3. The information pushing method according to claim 1, wherein before the inputting the target heart rate data into a preset emotion recognition model, further comprising:
acquiring a sample data set; the sample data set comprises at least one set of sample heart rate data; a set of the sample heart rate data corresponds to a sample emotion label and a sample frequency type;
inputting the sample heart rate data into an initial identification model containing initial parameters to obtain a predicted emotion label corresponding to the sample heart rate data;
determining an emotion recognition loss value corresponding to the initial recognition model according to the sample emotion label and the predicted emotion label;
and carrying out iterative updating on the initial parameters of the initial identification model according to the emotion identification loss values corresponding to the sample heart rate data with the same sample frequency type to obtain a preset emotion identification model corresponding to the sample frequency type.
4. The information push method of claim 3, wherein the inputting the sample heart rate data into an initial recognition model containing initial parameters to obtain a predicted emotion label corresponding to the sample heart rate data comprises:
carrying out data division on the sample heart rate data to obtain heart rate subdata; associating one heart rate sub data with one heart rate acquisition frame;
sequentially inserting the heart rate sub-data into a preset selection sequence according to the order of the heart rate acquisition frames, and recording the heart rate sub-data ordered first in the preset selection sequence as first sub-data;
inputting the first subdata into the initial identification model to obtain a first hidden layer state corresponding to the first subdata;
recording second ordered heart rate subdata in the preset selection sequence as second subdata, inputting the second subdata into the initial identification model, and determining a second hidden layer state according to the second subdata and the first hidden layer state;
continuously and sequentially determining the data hidden layer state corresponding to the heart rate subdata except the first subdata and the second subdata in the preset selection sequence according to the second hidden layer state, and recording the data hidden layer state corresponding to the heart rate subdata sequenced at the last in the preset selection sequence as the terminal hidden layer state;
and determining a predicted emotion label corresponding to the sample heart rate data according to the endpoint hidden layer state.
5. The information push method of claim 1, wherein the obtaining a user personality representation of the target user comprises:
acquiring a user identifier corresponding to the target user, and acquiring user basic information corresponding to the user identifier from a preset third-party platform;
inputting the user basic information into a preset portrait generating model, and performing information processing on the user basic information through the preset portrait generating model to obtain a user portrait value;
acquiring a historical user portrait, and determining a historical portrait threshold according to the historical user portrait;
and determining the user personality portrait according to the user portrait value and the historical portrait threshold value.
6. The information pushing method of claim 1, wherein after pushing the target recommendation information to the target user, further comprising:
receiving recommendation evaluation information fed back by the target user aiming at the target recommendation information;
determining a feedback emotion label of the target user according to the recommendation evaluation information, and determining a first label distance between the feedback emotion label and the target emotion label;
acquiring a central emotion label; the central emotion label is obtained by performing emotion recognition on the target heart rate data through a central emotion recognition model in a central server; the central emotion recognition model corresponds to the data acquisition period;
determining a second tag distance between the central emotion tag and the target emotion tag, and determining a feedback loss value according to the first tag distance and the second tag distance;
and adjusting the model parameters of the preset emotion recognition model according to the feedback loss value to obtain an adjusted emotion recognition model.
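The feedback loop in claim 6 combines two label distances, between the feedback label and the target label, and between the central label and the target label, into a loss that drives the parameter adjustment. The sketch below assumes a scalar label embedding, an absolute-difference distance, a weighted-sum loss, and a toy shrinkage update; none of these specifics come from the patent.

```python
LABEL_EMBEDDING = {"calm": 0.0, "happy": 1.0, "stressed": 2.0, "sad": 3.0}

def label_distance(a, b):
    # Assumed distance: absolute difference of scalar label embeddings.
    return abs(LABEL_EMBEDDING[a] - LABEL_EMBEDDING[b])

def feedback_loss(feedback_label, target_label, central_label, alpha=0.5):
    d1 = label_distance(feedback_label, target_label)  # first label distance
    d2 = label_distance(central_label, target_label)   # second label distance
    return alpha * d1 + (1 - alpha) * d2               # assumed weighted sum

def adjust_parameters(params, loss, lr=0.01):
    # Toy update rule: shrink each parameter in proportion to the loss.
    return [p - lr * loss * p for p in params]

loss = feedback_loss("happy", "stressed", "stressed")
new_params = adjust_parameters([0.5, -0.2], loss)
```

Note the role of the second distance: when the central model agrees with the local prediction, the loss comes only from the user's feedback, so user disagreement alone is enough to trigger an adjustment.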
7. The information pushing method according to claim 6, wherein after the adjusting the model parameters of the preset emotion recognition model according to the feedback loss value to obtain the adjusted emotion recognition model, the method further comprises:
obtaining adjusted model parameters of the adjusted emotion recognition model;
and sending the adjusted model parameters to the central server, so that the central server performs parameter adjustment on the central emotion recognition model according to the adjusted model parameters to obtain a new central emotion recognition model.
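Claim 7 describes clients uploading their adjusted parameters so the central server can refresh the central emotion recognition model. One common way such a sync is realised is a federated-averaging-style step; the element-wise mean below is an illustrative assumption, not the patent's stated aggregation rule.

```python
def central_update(central_params, client_params_list):
    # Aggregate the adjusted model parameters uploaded by the clients by
    # averaging each parameter position across all clients.
    n = len(client_params_list)
    return [
        sum(client[i] for client in client_params_list) / n
        for i in range(len(central_params))
    ]

central = [0.0, 0.0]                      # current central model parameters
clients = [[0.2, 0.4], [0.4, 0.6]]        # adjusted parameters from two clients
new_central = central_update(central, clients)
```

Aggregating parameters rather than raw heart rate data keeps the sensitive physiological measurements on the clients, which is the usual motivation for this kind of scheme.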
8. An information pushing apparatus, comprising:
the data acquisition module is used for acquiring target heart rate data of a target user and determining the data acquisition period of the target heart rate data;
the emotion portrait determining module is used for inputting the target heart rate data into a preset emotion recognition model so as to determine a target emotion label corresponding to the target user according to the target heart rate data through the preset emotion recognition model; the preset emotion recognition model corresponds to the data acquisition period;
the recommendation information prediction module is used for acquiring the user personality portrait of the target user and inputting the user personality portrait and the target emotion label into a preset recommendation model to obtain target recommendation information;
and the information pushing module is used for pushing the target recommendation information to a client.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the information pushing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements the information pushing method according to any one of claims 1 to 7.
CN202210735771.XA 2022-06-27 2022-06-27 Information pushing method and device, computer equipment and storage medium Pending CN115238171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210735771.XA CN115238171A (en) 2022-06-27 2022-06-27 Information pushing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115238171A 2022-10-25

Family

ID=83671768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210735771.XA Pending CN115238171A (en) 2022-06-27 2022-06-27 Information pushing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115238171A (en)

Similar Documents

Publication Publication Date Title
CN109783730A (en) Products Show method, apparatus, computer equipment and storage medium
CN108876133A (en) Risk assessment processing method, device, server and medium based on business information
CN109360048A (en) Order generation method, system, computer equipment and storage medium
CN109582876B (en) Tourist industry user portrait construction method and device and computer equipment
WO2021203854A1 (en) User classification method and apparatus, computer device and storage medium
CN112035611B (en) Target user recommendation method, device, computer equipment and storage medium
CN110781379A (en) Information recommendation method and device, computer equipment and storage medium
CN109376237A (en) Prediction technique, device, computer equipment and the storage medium of client's stability
CN110751533A (en) Product portrait generation method and device, computer equipment and storage medium
WO2020244152A1 (en) Data pushing method and apparatus, computer device, and storage medium
CN110489622B (en) Sharing method and device of object information, computer equipment and storage medium
CN108182633B (en) Loan data processing method, loan data processing device, loan data processing program, and computer device and storage medium
CN109801101A (en) Label determines method, apparatus, computer equipment and storage medium
CN110163655A (en) Distribution method of attending a banquet, device, equipment and storage medium based on gradient boosted tree
CN112905876A (en) Information pushing method and device based on deep learning and computer equipment
CN112288279A (en) Business risk assessment method and device based on natural language processing and linear regression
CN112417315A (en) User portrait generation method, device, equipment and medium based on website registration
CN110457361B (en) Feature data acquisition method, device, computer equipment and storage medium
CN113641835B (en) Multimedia resource recommendation method and device, electronic equipment and medium
CN113704511B (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN110597951A (en) Text parsing method and device, computer equipment and storage medium
CN110442614A (en) Searching method and device, electronic equipment, the storage medium of metadata
CN115238171A (en) Information pushing method and device, computer equipment and storage medium
CN115860835A (en) Advertisement recommendation method, device and equipment based on artificial intelligence and storage medium
CN115587844A (en) Method, device, equipment and storage medium for controlling user conversion data return

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination