CN111695629A - User characteristic obtaining method and device, computer equipment and storage medium

User characteristic obtaining method and device, computer equipment and storage medium

Info

Publication number
CN111695629A
Authority
CN
China
Prior art keywords
user
characteristic
feature
extraction model
feature extraction
Prior art date
Legal status
Pending
Application number
CN202010530924.8A
Other languages
Chinese (zh)
Inventor
符芳诚
余乐乐
陶阳宇
崔斌
Current Assignee
Peking University
Tencent Technology Shenzhen Co Ltd
Original Assignee
Peking University
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Peking University and Tencent Technology Shenzhen Co Ltd
Priority to CN202010530924.8A
Publication of CN111695629A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Abstract

The embodiment of the application discloses a user characteristic obtaining method and device, computer equipment and a storage medium, belonging to the technical field of computers. The method comprises the following steps: calling a first feature extraction model to perform feature extraction on stored first user information of a target user identifier to obtain a first user feature; receiving a second user feature sent by a first device; and obtaining a first combined user feature of the target user identifier according to the first user feature and the second user feature. The first device provides the user features it extracts to the second device at the home terminal without providing the original user information it stores, so leakage of user information is avoided. The second device combines the user features extracted by the second device and the first device to obtain a combined feature that includes features from the user information stored on both devices, which enriches the information content of the user features and improves the accuracy of the combined user feature.

Description

User characteristic obtaining method and device, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a user characteristic obtaining method and device, computer equipment and a storage medium.
Background
With the development of computer technology, user information has become increasingly complex and diversified. To describe a user accurately, user characteristics can generally be obtained from the user information, and the user can then be described by those characteristics.
The related art provides a user feature obtaining method which obtains the user information of a target user and invokes a feature extraction model to process that information to obtain the user features of the target user. However, because the amount of user information used in this method is small, the accuracy of the obtained user characteristics is poor.
Disclosure of Invention
The embodiment of the application provides a user characteristic obtaining method and device, computer equipment and a storage medium, which can improve the accuracy of the obtained combined user features. The technical scheme is as follows:
in one aspect, a method for obtaining user characteristics is provided, where the method includes:
calling a first feature extraction model, and performing feature extraction on stored first user information of a target user identifier to obtain a first user feature;
receiving a second user feature sent by a first device, wherein the second user feature is obtained by the first device calling a second feature extraction model and processing stored second user information of the target user identifier, and the first feature extraction model and the second feature extraction model are different models for extracting user features;
and acquiring a first combined user characteristic of the target user identifier according to the first user characteristic and the second user characteristic.
In one possible implementation, the method further includes:
encrypting the first weight according to the first public key to obtain a second weight;
sending the second weight to the first equipment, wherein the first equipment is used for acquiring a second adjusting parameter according to the second weight and the second sample user information;
receiving the second adjustment parameter sent by the first device;
decrypting the second adjustment parameter according to a first private key corresponding to the first public key to obtain a third adjustment parameter;
and sending the third adjustment parameter to the first equipment, wherein the first equipment is used for adjusting the fifth feature extraction model according to the third adjustment parameter.
In another possible implementation manner, the first device is configured to obtain a fourth adjustment parameter according to the second weight and the second sample user information, and perform fusion processing on the fourth adjustment parameter and a fifth noise feature to obtain the second adjustment parameter;
and the first equipment is used for carrying out fusion processing on the third adjustment parameter and a sixth noise feature, and adjusting the fifth feature extraction model according to the fused adjustment parameter, wherein the sixth noise feature is opposite to the fifth noise feature.
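For illustration, the following is a minimal numpy sketch of the opposite-noise fusion described above; the shapes are invented and the homomorphic encryption and decryption steps are stood in by identity operations, so none of the names or numeric choices come from the application itself:

```python
import numpy as np

# Sketch only: the first device masks its true adjustment parameter with a
# noise feature before sending it out, then cancels the mask with the
# opposite noise feature after the round trip.
rng = np.random.default_rng(0)

second_weight = rng.normal(size=4)        # encrypted first weight (stand-in)
x2_sample = rng.normal(size=(4, 3))       # second sample user information

# First device: fourth adjustment parameter, fused with the fifth noise feature.
fourth_adjustment = x2_sample.T @ second_weight
fifth_noise = rng.normal(size=3)
second_adjustment = fourth_adjustment + fifth_noise   # sent to the second device

# Second device: decryption is an identity stand-in here; the result is the
# third adjustment parameter, which is sent back to the first device.
third_adjustment = second_adjustment

# First device: the sixth noise feature is opposite to the fifth one, so the
# fusion recovers the true adjustment parameter, which the second device never saw.
sixth_noise = -fifth_noise
fused_adjustment = third_adjustment + sixth_noise
assert np.allclose(fused_adjustment, fourth_adjustment)
```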
In another possible implementation, the method further includes:
obtaining a loss value of the first feature extraction model according to the predicted user label and the sample user label;
and stopping training the first feature extraction model in response to the loss value not being greater than a preset threshold value.
In another possible implementation, the method further includes:
and sending a training stopping notification to the first equipment in response to the loss value not being greater than a preset threshold, wherein the first equipment is used for stopping training a fifth feature extraction model according to the training stopping notification.
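As a rough sketch of this stopping rule, the snippet below shows one way the check could look; the function and parameter names are assumptions for illustration and are not taken from the application:

```python
# Assumed names: loss_value, preset_threshold, notify_first_device.
def should_stop_training(loss_value: float, preset_threshold: float, notify_first_device) -> bool:
    """Stop once the loss value is not greater than the preset threshold,
    and notify the first device so it stops training the fifth model too."""
    if loss_value <= preset_threshold:
        notify_first_device("stop_training")
        return True
    return False
```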
In another aspect, an apparatus for obtaining user characteristics is provided, the apparatus including:
the feature extraction module is used for calling a first feature extraction model and performing feature extraction on stored first user information of a target user identifier to obtain a first user feature;
the feature receiving module is used for receiving a second user feature sent by a first device, wherein the second user feature is obtained by the first device calling a second feature extraction model and processing stored second user information of the target user identifier, and the first feature extraction model and the second feature extraction model are different models for extracting user features;
and the combined feature acquisition module is used for acquiring the first combined user feature of the target user identifier according to the first user feature and the second user feature.
In a possible implementation manner, the second feature extraction model is a model that is encrypted according to the first public key, and the combined feature obtaining module includes:
the decryption processing unit is used for decrypting the second user characteristic according to a first private key corresponding to the first public key to obtain a decrypted user characteristic;
and the first combination processing unit is used for carrying out combination processing on the first user characteristic and the decrypted user characteristic to obtain the first combination user characteristic.
In another possible implementation manner, the apparatus further includes:
the encryption processing module is used for encrypting a third feature extraction model according to the first public key to obtain a second feature extraction model;
and the model sending module is used for sending the second feature extraction model to the first equipment.
In another possible implementation manner, the apparatus further includes:
the information processing module is used for calling a fourth feature extraction model and processing the first user information to obtain a third user feature;
the feature sending module is used for sending the third user feature to the first device, the first device is used for obtaining a second combined user feature according to a fourth user feature and the third user feature, and the fourth user feature is obtained by the first device calling a fifth feature extraction model to perform feature extraction on the second user information;
the feature receiving module is configured to receive the second combined user feature sent by the first device.
In another possible implementation manner, before the fourth feature extraction model is invoked to perform feature extraction on the first user information to obtain the third user feature, the apparatus further includes:
and the model receiving module is used for receiving the fourth feature extraction model sent by the first equipment.
In another possible implementation manner, the first device is configured to encrypt a sixth feature extraction model according to a second public key to obtain the fourth feature extraction model;
the first device is used for decrypting the third user characteristic according to a second private key corresponding to the second public key to obtain a decrypted user characteristic; and combining the fourth user characteristic and the decrypted user characteristic to obtain the second combined user characteristic.
In another possible implementation manner, the second user characteristic is obtained by the first device by fusing a fifth user characteristic and the first noise characteristic, and the fifth user characteristic is obtained by the first device invoking the second characteristic extraction model to perform characteristic extraction on the second user information;
the second combined user feature is obtained by the first device by fusing a third combined user feature and a second noise feature, the third combined user feature is obtained by the first device by combining the fourth user feature and the third user feature, and the first noise feature is opposite to the second noise feature;
the device further comprises:
and the combination processing module is used for carrying out combination processing on the first combination user characteristic and the second combination user characteristic to obtain a fourth combination user characteristic.
In another possible implementation manner, the information processing module includes:
the feature extraction unit is used for calling the fourth feature extraction model and extracting features of the first user information to obtain sixth user features;
the first fusion processing unit is used for performing fusion processing on the sixth user characteristic and a third noise characteristic to obtain a third user characteristic;
the combined feature obtaining module includes:
the second combination processing unit is used for carrying out combination processing on the first user characteristic and the second user characteristic to obtain a fifth combination user characteristic;
and a second fusion processing unit, configured to perform fusion processing on the fifth combination user characteristic and a fourth noise characteristic to obtain the first combination user characteristic, where the third noise characteristic is opposite to the fourth noise characteristic.
In another possible implementation manner, the apparatus further includes:
the sample acquisition module is used for acquiring first sample user information;
the feature extraction module is further configured to invoke the first feature extraction model, perform feature extraction on the first sample user information, and obtain a first sample user feature;
the feature receiving module is further configured to receive a second sample user feature, where the second sample user feature is obtained by calling, by the first device, the second feature extraction model and processing second sample user information, where the first sample user information and the second sample user information belong to the same sample user identifier;
the combined feature obtaining module is further configured to obtain a first sample combined user feature according to the first sample user feature and the second sample user feature;
and the model training module is used for training the first feature extraction model according to the first sample combination user features and the first sample user information.
In another possible implementation manner, the apparatus further includes:
the information processing module is further used for calling a fourth feature extraction model, processing the first sample user information and obtaining a third sample user feature;
the feature sending module is further configured to send the third sample user feature to the first device, where the first device is configured to obtain a second sample combined user feature according to a fourth sample user feature and the third sample user feature, and the fourth sample user feature is obtained by calling, by the first device, a fifth feature extraction model to perform feature extraction on the second sample user information;
the characteristic receiving module is further used for receiving the second sample combination user characteristics sent by the first equipment;
the model training module comprises:
and the model training unit is used for training the first feature extraction model according to the first sample combination user feature, the second sample combination user feature and the first sample user information.
In another possible implementation manner, the model training unit is configured to obtain a predicted user label of the sample user identifier according to the first sample combined user feature and the second sample combined user feature; determining a difference between the predicted user label and a sample user label corresponding to the first sample user information as a first weight; acquiring a first adjustment parameter of the first feature extraction model according to the first weight and the first sample user information; and adjusting the first feature extraction model according to the first adjustment parameter.
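The following is a hedged numpy sketch of this training step, assuming a linear (matrix) feature extraction model and a sigmoid prediction head; the shapes, learning rate, and model form are illustrative choices rather than details fixed by the application:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 5, 1
W1 = rng.normal(size=(d_out, d_in))      # first feature extraction model (matrix)
x1 = rng.normal(size=d_in)               # first sample user information
combined = W1 @ x1                       # stand-in for the combined sample user feature

# Predicted user label from the combined feature (sigmoid head, assumed).
predicted_label = 1.0 / (1.0 + np.exp(-combined))
sample_label = np.array([1.0])           # sample user label

# The difference between prediction and label is the first weight; the first
# adjustment parameter is obtained from that weight and the sample information.
first_weight = predicted_label - sample_label
first_adjustment = np.outer(first_weight, x1)

learning_rate = 0.1                      # assumed hyperparameter
W1 -= learning_rate * first_adjustment   # adjust the first feature extraction model
```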
In another possible implementation manner, the apparatus further includes:
the weight encryption module is used for encrypting the first weight according to the first public key to obtain a second weight;
the weight sending module is used for sending the second weight to the first equipment, and the first equipment is used for obtaining a second adjusting parameter according to the second weight and the second sample user information;
a parameter receiving module, configured to receive the second adjustment parameter sent by the first device;
the parameter sending module is used for decrypting the second adjustment parameter according to a first private key corresponding to the first public key to obtain a third adjustment parameter;
and the model adjusting module is used for sending the third adjusting parameter to the first equipment, and the first equipment is used for adjusting the fifth feature extraction model according to the third adjusting parameter.
In another possible implementation manner, the first device is configured to obtain a fourth adjustment parameter according to the second weight and the second sample user information, and perform fusion processing on the fourth adjustment parameter and a fifth noise feature to obtain the second adjustment parameter;
and the first equipment is used for carrying out fusion processing on the third adjustment parameter and a sixth noise feature, and adjusting the fifth feature extraction model according to the fused adjustment parameter, wherein the sixth noise feature is opposite to the fifth noise feature.
In another possible implementation manner, the apparatus further includes:
a loss value obtaining module, configured to obtain a loss value of the first feature extraction model according to the predicted user tag and the sample user tag;
the model training unit is further configured to stop training the first feature extraction model in response to the loss value not being greater than a preset threshold value.
In another possible implementation, the apparatus further includes:
and the notification sending module is used for sending a training stopping notification to the first equipment in response to the loss value not being greater than a preset threshold value, and the first equipment is used for stopping training the fifth feature extraction model according to the training stopping notification.
In another aspect, a method for obtaining user characteristics is provided, where the method includes:
the second device calls a first feature extraction model to perform feature extraction on stored first user information of a target user identifier to obtain a first user feature;
the first device calls a second feature extraction model to process stored second user information of the target user identifier to obtain a second user feature, and sends the second user feature to the second device, wherein the first feature extraction model and the second feature extraction model are different models for extracting user features;
the second device receives the second user feature, and obtains a first combined user feature of the target user identifier according to the first user feature and the second user feature.
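To make the exchange concrete, here is a minimal sketch of the two-device flow, simulating the communication with ordinary variables and using the W1×X1 + R1×X2 notation that appears later in the detailed description; all dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.normal(size=4)        # first user information, held by the second device
X2 = rng.normal(size=6)        # second user information, held by the first device
W1 = rng.normal(size=(3, 4))   # first feature extraction model (second device)
R1 = rng.normal(size=(3, 6))   # second feature extraction model (on the first device)

first_user_feature = W1 @ X1   # extracted locally by the second device
second_user_feature = R1 @ X2  # extracted by the first device, then sent over

# The second device combines both features; the raw X2 never leaves the first device.
first_combined_user_feature = first_user_feature + second_user_feature
```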
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the user feature obtaining method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the user feature obtaining method according to the above aspect.
In yet another aspect, a computer program product is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
according to the method, the device, the computer equipment and the storage medium provided by the embodiment of the application, the model for extracting the user characteristics is split and is respectively stored in the second equipment and the first equipment at the home terminal, the second equipment and the first equipment respectively call the stored characteristic extraction models, the user information of the target user identification stored respectively is subjected to characteristic extraction, the first equipment provides the user characteristics extracted by the first equipment for the second equipment, the original user information stored by the first equipment does not need to be provided, and the leakage of the user information is avoided. And the second equipment combines the user characteristics extracted by the second equipment and the first equipment to obtain combined characteristics, and the combined characteristics comprise the characteristics in the user information stored by the second equipment and the first equipment, so that the information quantity of the user characteristics is enriched, and the accuracy of combining the user characteristics is improved. Compared with the scheme that different user information is stored in a central device and the central device performs feature extraction, the embodiment of the application stores the user information in different devices respectively and performs feature extraction respectively, so that a decentralized feature extraction mode is realized, information leakage caused by the fact that the central device stores the user information is avoided, and the safety of the user information is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of a feature extraction system provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a feature extraction model distribution provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a feature extraction model distribution provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a feature extraction model distribution provided by an embodiment of the present application;
fig. 5 is a flowchart of a user characteristic obtaining method according to an embodiment of the present application;
fig. 6 is a flowchart of a user characteristic obtaining method according to an embodiment of the present application;
FIG. 7 is a flowchart of a feature extraction model training method provided in an embodiment of the present application;
fig. 8 is a flowchart of a user characteristic obtaining method according to an embodiment of the present application;
FIG. 9 is a flowchart of a feature extraction model training method provided in an embodiment of the present application;
fig. 10 is a flowchart of a user characteristic obtaining method according to an embodiment of the present application;
FIG. 11 is a flowchart of a feature extraction model training method provided in an embodiment of the present application;
fig. 12 is a flowchart of a user characteristic obtaining method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a user feature obtaining apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a user feature obtaining apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
As used herein, the terms "first," "second," third, "fourth," fifth, "sixth," and the like may be used herein to describe various concepts, but these concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first user characteristic may be referred to as a user characteristic, and similarly, a second user characteristic may be referred to as a first user characteristic, without departing from the scope of the present application.
As used herein, the terms "at least one," "a plurality," "each," and "any," at least one of which includes one, two, or more than two, and a plurality of which includes two or more than two, each of which refers to each of the corresponding plurality, and any of which refers to any of the plurality. For example, the plurality of elements includes 3 elements, each of which refers to each of the 3 elements, and any one of the 3 elements refers to any one of the 3 elements, which may be a first one, a second one, or a third one.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
According to the scheme provided by the embodiment of the application, the feature extraction model can be trained based on the machine learning technology of artificial intelligence, and the user feature acquisition method is realized by utilizing the trained feature extraction model.
In the artificial intelligence era, obtaining machine learning models, especially deep learning models, requires a large amount of training data as a prerequisite. In many business scenarios, however, the training data for a model is often scattered across different business teams, departments, and even different companies. Because of user privacy, these data cannot be used directly, forming so-called "data islands". In the past two years, federated learning technology (Federated Learning) has developed rapidly, providing a new solution for cross-team data cooperation and for breaking data islands, and has entered the landing stage of advancing from theoretical research to batch application.
One of the core differences between federated learning and a general machine learning task is that the training participants change from one party to two or even more parties. Federated learning completes the model training task with multiple parties participating jointly, without data leaving its local store and while protecting data privacy, thereby breaking data islands. One core problem is therefore how to coordinate two or more parties to complete a model training task together. This coordination method is called the federated algorithm protocol. When two or more parties participate in a training task together, each party operates according to a preset algorithm protocol, ensuring the correct operation of the algorithm.
The user feature obtaining method provided in the embodiment of the present application may be used in a computer device, where the computer device may be a terminal or a server, the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), and a big data and artificial intelligence platform. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
Fig. 1 is a schematic structural diagram of a feature extraction system provided in an embodiment of the present application, and as shown in fig. 1, the system includes a first device 101 and a second device 102, where the first device 101 may be a terminal or a server, and the second device 102 may be a terminal or a server.
The second device 102 calls the first feature extraction model to perform feature extraction on the stored first user information of the target user identifier to obtain a first user feature. The first device 101 calls the second feature extraction model to process the stored second user information of the target user identifier to obtain a second user feature, and sends the second user feature to the second device 102. The second device 102 receives the second user feature sent by the first device 101, and obtains the first combined user feature of the target user identifier according to the first user feature and the second user feature.
The method provided by the embodiment of the application can be used for various scenes.
For example, in an item recommendation scenario:
After the terminal determines the target user identifier, it adopts the user feature obtaining method provided in the embodiment of the application to obtain the first combined user feature of the target user identifier. The user label of the target user identifier can subsequently be determined, an article matching the target user identifier can be determined through that user label, and the matching article can be recommended to the target user, so that articles in line with the target user's preferences are recommended.
For another example, in a friend recommendation scenario:
After the terminal determines the target user identifier, it adopts the user feature obtaining method provided in the embodiment of the application to obtain the first combined user feature of the target user identifier. The user label of the target user identifier can subsequently be determined, and other user identifiers matching that user label can be recommended to the target user, so that friends with preferences similar to the target user's are recommended.
As another example, in a risk rating assessment scenario:
After the management terminal determines the target user identifier, it adopts the user feature obtaining method provided in the embodiment of the application to obtain the first combined user feature of the target user identifier through the user information of the target user identifier stored in multi-party devices. The target user identifier is evaluated according to the first combined user feature to obtain a risk level, and according to that risk level the target user's risk of overdue payment can be determined, or a fund usage limit can be set for the target user, and the like.
Before describing the method provided by the embodiment of the present application in detail, the following explanation is first made for a plurality of feature extraction models related to the embodiment of the present application:
as shown in fig. 2, the second device includes a first feature extraction model, and the first device includes a second feature extraction model.
The first feature extraction model and the second feature extraction model are both models for extracting features from user information, and their functions are similar. The difference is that the two models are stored on different devices, and the user information stored on those devices may also differ, so the user features the two models extract from the user information on their respective devices may also differ. For example, the first feature extraction model is used for feature extraction on user information stored in the second device, and the second feature extraction model is used for feature extraction on user information stored in the first device. Through the first feature extraction model and the second feature extraction model, user features can be extracted from the user information of the same user identifier stored on different devices, and the features extracted by the two models can be combined to obtain a combined user feature.
Further, as shown in fig. 3, the second device may further include a fourth feature extraction model, and the first device may further include a fifth feature extraction model.
The fourth feature extraction model and the fifth feature extraction model are both models for extracting features of the user information, and both the fourth feature extraction model and the fifth feature extraction model have similar functions to the first feature extraction model and the second feature extraction model.
The difference is that the fourth feature extraction model and the fifth feature extraction model are stored on different devices, and the user information stored on those devices may also differ, so the user features the two models extract from the user information on their respective devices may also differ.
The fourth feature extraction model is used for extracting features of the user information stored in the second device, the fifth feature extraction model is used for extracting features of the user information stored in the first device, the user features of the user information stored in different devices of the same user identifier can be extracted through the fourth feature extraction model and the fifth feature extraction model, and the user features extracted by the two models can be combined to obtain combined user features.
The first feature extraction model is provided by the second device itself, while the fourth feature extraction model is provided by the first device to the second device. Both models can perform feature extraction on the user information stored on the second device, and since the two models may be different, they may extract different user features from the same user information.
Similarly, the fifth feature extraction model is provided by the first device itself, and the second feature extraction model is provided by the second device to the first device, and both models can perform feature extraction on the user information stored on the first device, and since the two models may be different, the two models may extract different user features according to the same user information.
Further, as shown in fig. 4, the second device further includes a third feature extraction model, and the first device further includes a sixth feature extraction model.
The third feature extraction model and the sixth feature extraction model are both models for extracting features of the user information, and both the third feature extraction model and the sixth feature extraction model have similar functions to the first feature extraction model and the second feature extraction model.
The third feature extraction model is stored in the second device and is an original model of the second feature extraction model, namely the second device encrypts the third feature extraction model to obtain the second feature extraction model, and provides the second feature extraction model for the first device.
The sixth feature extraction model is stored in the first device and is an original model of the fourth feature extraction model, that is, the first device encrypts the sixth feature extraction model to obtain the fourth feature extraction model, and provides the fourth feature extraction model to the second device.
Fig. 5 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, applied to a second device. As shown in fig. 5, the method includes:
501. The second device calls the first feature extraction model to perform feature extraction on the stored first user information of the target user identifier to obtain the first user feature.
The first feature extraction model is a model used for extracting user features, and it is stored in the second device.
The target user identifier is a unique identifier representing the target user, and may be an identity card number, a mobile phone number, a user account, a user nickname, or the like. The first user information is the user information of the target user corresponding to the target user identifier, and may include user information of multiple dimensions; for example, the first user information includes the user's age, occupation, wage, height, and the like. The first user feature belongs to the user features of the target user identifier and may be represented by a vector or a matrix.
In the embodiment of the application, the target user identifier and the first user information are stored in the second device correspondingly, and the first user information stored in the second device can be determined through the target user identifier. Therefore, the second device may invoke the first feature extraction model stored locally, perform feature extraction on the first user information stored locally, and obtain the first user feature, where the first user feature may describe the target user and may reflect the preference of the target user.
502. The second device receives the second user characteristic sent by the first device.
In this embodiment of the present application, a first device and a second device belong to different devices, the first device and the second device include user information with the same user identifier, a communication connection is established between the first device and the second device, and interaction can be performed between the first device and the second device through the established communication connection. Therefore, in this embodiment of the application, through the established communication connection, the first device may send the second user feature to the second device, and the second device receives the second user feature.
The second user feature is obtained by the first device calling the second feature extraction model and processing the stored second user information of the target user identifier. The first feature extraction model and the second feature extraction model are different models for extracting user features.
The second user information is the user information of the target user corresponding to the target user identifier, and may include user information of multiple dimensions; for example, the second user information includes the user's consumption records, wage, age, and the like. The second user feature belongs to the user features of the target user identifier and may be represented by a vector or a matrix.
In the embodiment of the application, the second user information and the target user identifier are correspondingly stored in the first device, and the first device can determine the second user information stored in the first device through the target user identifier. Therefore, the first device may invoke the locally stored second feature extraction model, perform feature extraction on the locally stored second user information, and obtain a second user feature, where the second user feature may describe the target user and may reflect the preference of the target user.
The second user information and the first user information both belong to the user information of the target user, and the first user information and the second user information may include user information of the same dimensions, or user information of at least one different dimension.
For example, the first device is a store terminal and the second device is an online shopping server. The store terminal stores the user's consumption records, and the user information generated from those records may include: a bank account, consumption amount, consumption time, names of purchased articles, and the like. The online shopping server stores the user's online shopping records, and the user information it generates may include: a bank account, consumption amount, consumption time, names of purchased articles, a user account, a user address, and the like. The user information in the store terminal and the user information in the online shopping server thus include user information of the same dimensions, and also include user information of at least one different dimension.
503. The second device obtains the first combined user feature of the target user identifier according to the first user feature and the second user feature.
Because the first user characteristic and the second user characteristic both belong to the characteristic of the target user identifier, the second device can acquire the first combined user characteristic of the target user identifier through the first user characteristic and the second user characteristic, and the first combined user characteristic comprises the first user characteristic and the second user characteristic.
According to the method provided by the embodiment of the application, the model for extracting user features is split and stored separately in the second device at the home terminal and in the first device. The second device and the first device each call their stored feature extraction model and perform feature extraction on the user information of the target user identifier stored on each device. The first device provides the user features it extracts to the second device without providing the original user information it stores, which avoids leakage of user information. The second device combines the user features extracted by the second device and the first device to obtain a combined feature; because the combined feature includes features from the user information stored on both devices, the information content of the user features is enriched and the accuracy of the combined user feature is improved. Compared with a scheme in which different user information is stored in one central device that performs feature extraction, the embodiment of the application stores user information on different devices and performs feature extraction separately, realizing a decentralized feature extraction mode, avoiding the information leakage caused by a central device storing all the user information, and improving the security of the user information.
On the basis of the embodiment shown in fig. 5, the second device may further receive the second combined user characteristic sent by the first device, and obtain the user tag of the target user identifier according to the first combined user characteristic and the second combined user characteristic, which is described in detail in the following embodiments.
Fig. 6 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, applied to a first device and a second device. As shown in fig. 6, the method includes:
601. The first device sends the fourth feature extraction model to the second device.
In the embodiment of the present application, the second device stores a first feature extraction model and a second feature extraction model, the first device stores a fifth feature extraction model and a fourth feature extraction model, and the first feature extraction model, the second feature extraction model, the fourth feature extraction model, and the fifth feature extraction model are different from each other.
Both the first device and the second device store user information of the target user identifier. To improve the accuracy of the user tag that the second device obtains for the target user identifier, the combined user feature of the target user identifier is obtained from the first user information in the second device and the second user information in the first device, and the user tag is then obtained from that combined feature. In this process, the third user feature that the second device sends to the first device must match the first device, and the second user feature that the first device sends to the second device must match the second device. Therefore, before the third user feature and the second user feature are obtained, the first device sends the fourth feature extraction model to the second device, and the second device sends the second feature extraction model to the first device, so that each device can call the feature extraction model sent by the other device, perform feature extraction on the user information in its own device, and obtain user features that match the other device.
The fourth feature extraction model is a model for obtaining features of the user, and the fourth feature extraction model is generated by the first device.
In this embodiment of the present application, a first device and a second device belong to different devices, the first device and the second device include user information with the same user identifier, a communication connection is established between the first device and the second device, and interaction can be performed between the first device and the second device through the established communication connection. The first device may send the fourth feature extraction model to the second device over the established communication connection between the first device and the second device.
Since the first device and the second device both include the user information of the target user identifier, in order to perform feature extraction through the user information of the target user identifier stored in the first device and the second device when obtaining the user tag of the target user identifier, the first device sends the fourth feature extraction model to the second device, so that the subsequent second device can call the fourth feature extraction model to obtain the user feature of the target user identifier.
602. The second device receives the fourth feature extraction model sent by the first device.
The second device receives a fourth feature extraction model sent by the first device, stores the fourth feature extraction model in the second device, and subsequently calls the fourth feature extraction model to perform feature extraction on the first user information in the second device to acquire user features matched with the first device.
603. The second device sends the second feature extraction model to the first device.
The second feature extraction model is a model for obtaining the features of the user, and the second feature extraction model is generated by the second device. And the second equipment sends the second feature extraction model to the first equipment through the communication connection established between the first equipment and the second equipment.
604. The first device receives the second feature extraction model sent by the second device.
The first device receives the second feature extraction model sent by the second device, stores it in the first device, and subsequently calls the second feature extraction model to perform feature extraction on the second user information in the first device to obtain user features that match the second device.
Because the first device sends the fourth feature extraction model to the second device and the second device sends the second feature extraction model to the first device, each device can subsequently call the feature extraction model sent by the other device, perform feature extraction on the user information in its own device, and obtain user features that match the other device. After obtaining such features, each device sends them to the other device, which can then process the received user features; in this way, the user information in multiple devices is used jointly.
605. The second device calls the first feature extraction model to perform feature extraction on the stored first user information of the target user identifier to obtain the first user feature.
The first feature extraction model is a model used for extracting user features from the first user information, and the first feature extraction model is stored in the second device.
In one possible implementation, the first feature extraction model is a first feature extraction model matrix, and the first user information is a first user information matrix; then step 605 may include: and taking the product of the first feature extraction model matrix and the first user information as the first user feature.
For example, if the first feature extraction model matrix is W1 and the first user information is X1, the first user feature may be W1×X1.
606. The first device calls the second feature extraction model, processes the stored second user information of the target user identifier to obtain the second user feature, and sends the second user feature to the second device.
The second feature extraction model is a model used for extracting user features from the second user information. Since the second feature extraction model, the fifth feature extraction model, the first feature extraction model, and the fourth feature extraction model are all different, the second user feature obtained by calling the second feature extraction model differs from the first user feature, the third user feature, and the fourth user feature.
In one possible implementation manner, the second feature extraction model is a second feature extraction model matrix, and the second user information is a second user information matrix; then this step 606 may include: and taking the product of the second feature extraction model matrix and the second user information as the second user feature.
For example, if the second feature extraction model matrix is R1 and the second user information is X2, the second user feature may be R1×X2.
Because the second feature extraction model is sent by the second device, the second user feature obtained by the first device calling the second feature extraction model matches the second device and can be used by it. The first device therefore sends the second user feature to the second device through the communication connection between the two devices, so that the second device can subsequently process it.
607. The second device receives the second user feature sent by the first device, and obtains the first combined user feature of the target user identifier according to the first user feature and the second user feature.
The second device receives the second user feature sent by the first device. Because the second user feature is obtained by the first device calling the second feature extraction model to process the second user information, it matches the second device, and the second device can process the first user feature and the second user feature to obtain the first combined user feature.
In one possible implementation, this step 607 may include: the second device combines the first user feature and the second user feature to obtain the first combined user feature. For example, if the first user feature is W1×X1 and the second user feature is R1×X2, the first combined user feature is W1×X1 + R1×X2.
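As a tiny worked example with made-up numbers (not taken from the application):

```python
import numpy as np

W1 = np.array([[1.0, 0.0],
               [0.0, 2.0]])            # first feature extraction model matrix
X1 = np.array([3.0, 1.0])              # first user information
R1 = np.array([[0.5, 0.5],
               [1.0, -1.0]])           # second feature extraction model matrix
X2 = np.array([2.0, 4.0])              # second user information

first_feature = W1 @ X1                # [3., 2.]
second_feature = R1 @ X2               # [3., -2.]
combined = first_feature + second_feature   # first combined user feature: [6., 0.]
```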
Because the second device sends the second feature extraction model to the first device and the first device calls it to perform feature extraction on the second user information to obtain the second user feature, the first device does not need to send the second user information to the second device. This avoids user information leakage and improves security, and the obtained second user feature matches the second device, so the second device can directly process the received feature. In addition, the first combined user feature obtained by the second device includes the first user feature corresponding to the first user information and the second user feature corresponding to the second user information, so the user features are more complete, the accuracy of the first combined user feature is improved, and the subsequently obtained user label is more accurate.
608. And the second equipment calls the fourth feature extraction model, processes the first user information to obtain a third user feature, and sends the third user feature to the first equipment.
The fourth feature extraction model is different from the first feature extraction model, and the third user feature is a user feature of the target user identifier. Because the fourth feature extraction model is different from the first feature extraction model, the third user feature obtained by calling the fourth feature extraction model to process the first user information is different from the first user feature.
In one possible implementation manner, the fourth feature extraction model is a fourth feature extraction model matrix, and the first user information is a first user information matrix; then step 608 may include: and taking the product of the fourth feature extraction model matrix and the first user information as the third user feature.
For example, the fourth feature extraction model matrix is R2, the first user information matrix is X1, and the third user characteristic may be R2×X1.
Because the fourth feature extraction model is sent by the first device, the third user feature obtained by the second device calling the fourth feature extraction model is matched with the first device and can be used by the first device. After acquiring the third user feature, the second device sends it to the first device through the communication connection between the first device and the second device, so that the first device can process the third user feature subsequently.
609. The first device receives the third user characteristic transmitted by the second device.
And after receiving the third user characteristic, the first device stores the third user characteristic so as to obtain a second combined user characteristic through the third user characteristic later.
610. And calling a fifth feature extraction model by the first equipment to perform feature extraction on the second user information to obtain a fourth user feature.
The fifth feature extraction model is used for extracting user features from the second user information, and the fifth feature extraction model is stored in the first device. Since the fifth feature extraction model, the first feature extraction model, and the fourth feature extraction model are all different, the fourth user feature obtained by calling the fifth feature extraction model is different from the first user feature and the third user feature.
In this embodiment of the application, the first user information and the second user information may include user information of the same dimensions, or may differ in at least one dimension, that is, the first user information may be different from the second user information. The user information in the first device and the second device is generated through interaction with users, and because the first device is different from the second device, the first user information stored in the second device for the target user identifier may be different from the second user information stored in the first device.
For example, the first device is a storage device of a store in a target area, the second device is a server of a bank in the target area, and the bank provides asset deposit services for users. The second device generates the first user information from the user's deposit business at the bank, and the first device generates the second user information from the user's consumption records of articles purchased in the store. The first user information and the second user information may include the same information or different information.
In one possible implementation manner, the fifth feature extraction model is a fifth feature extraction model matrix, and the second user information is a second user information matrix; then step 610 may include: and taking the product of the fifth feature extraction model matrix and the second user information as the fourth user feature.
For example, the fifth feature extraction model matrix is W2, the second user information matrix is X2, and the fourth user characteristic may be W2×X2.
611. And the first equipment acquires the second combined user characteristic of the target user identification according to the fourth user characteristic and the third user characteristic, and sends the second combined user characteristic to the second equipment.
Since the fourth user characteristic is obtained through the second user information and the third user characteristic is obtained through the first user information, the first device blends user characteristics of different user information into the obtained second combined user characteristic through the fourth user characteristic and the third user characteristic, and accuracy of the obtained second combined user characteristic is improved. And through the communication connection between the first device and the second device, the first device sends the second combined user characteristic to the second device, so that the second device can subsequently acquire the user label through the second combined user characteristic.
In one possible implementation, this step 611 may include: the first device combines the fourth user feature and the third user feature to obtain the second combined user feature. For example, the fourth user characteristic is W2×X2, the third user characteristic is R2×X1, and the second combined user characteristic is W2×X2+R2×X1.
The first device sends the fourth feature extraction model to the second device, and the second device calls the fourth feature extraction model to perform feature extraction on the first user information to obtain the third user feature. The second device does not need to send the first user information to the first device, which avoids user information leakage and improves security, and the obtained third user feature is matched with the first device, so the first device can directly process the received third user feature. Moreover, the second combined user feature acquired by the first device includes the third user feature corresponding to the first user information and the fourth user feature corresponding to the second user information, so that the user features are more sufficient, the accuracy of the second combined user feature is improved, and the subsequently acquired user label is more accurate.
612. And the second equipment receives the second combined user characteristic sent by the first equipment, and acquires the user label of the target user identification according to the first combined user characteristic and the second combined user characteristic.
The user tag is used to indicate a preference of the user, for example, the user tag may be a high-consumption user, a low-consumption user, a tourist user, or the like.
Because the first feature extraction model, the second feature extraction model, the fourth feature extraction model and the fifth feature extraction model are different from each other, the user features after feature extraction of the user information through the first feature extraction model, the second feature extraction model, the fourth feature extraction model and the fifth feature extraction model are also different from each other, and the first combined user feature and the second combined user feature respectively comprise user features corresponding to different user information through interaction between the first device and the second device, so that the user features are more sufficient.
In one possible implementation, this step 612 may include: and the second equipment combines the first combined user characteristic and the second combined user characteristic, and acquires the user label of the target user identification according to the combined user characteristic.
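A minimal sketch of one way this final combination could yield a user label, assuming element-wise addition and the logistic mapping given later in the training embodiment; the threshold and label names are illustrative, not prescribed by the patent:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

first_combined = np.random.rand(3, 1)   # first combined user feature (second device)
second_combined = np.random.rand(3, 1)  # second combined user feature (received)

score = sigmoid(first_combined + second_combined).mean()
user_label = "high-consumption user" if score > 0.5 else "low-consumption user"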
It should be noted that, in this embodiment of the application, the description is given by taking the example that the second device obtains the user label of the target user identifier according to the first combined user feature and the second combined user feature. In another embodiment, step 601 and steps 608-612 need not be executed; the second device may obtain the first combined user feature, and the second device may subsequently store the first combined user feature.
It should be noted that, in this embodiment of the application, the description is given by taking the example that the second device acquires the user label of the target user identifier. The first device may obtain the user label of the target user identifier in either of the following two ways:
the first mode is as follows: after the second device obtains the user tag of the target user identifier, the second device sends the user tag of the target user identifier to the first device, and the first device receives the user tag of the target user identifier sent by the second device.
The second mode is as follows: after the second device obtains the first combined user feature, the second device sends the first combined user feature to the first device, the first device receives the first combined user feature sent by the second device, and the user tag of the target user identifier is obtained according to the first combined user feature and the second combined user feature.
It should be noted that, in this embodiment of the application, the second device acquires the user label of the target user identifier. In another embodiment, the step in 611 of the first device sending the second combined user feature to the second device and step 612 need not be executed; instead, the second device may send the first combined user feature to the first device, and the first device acquires the user label of the target user identifier according to the first combined user feature and the second combined user feature.
It should be noted that the description takes obtaining the user label of the target user identifier as an example. After the second device obtains the first combined user feature and the second combined user feature, it may also obtain other information of the target user identifier according to them, such as the category to which the target user identifier belongs or the risk level of the target user identifier, which is not limited in this application.
According to the method provided by the embodiment of the application, the model for extracting the user characteristics is split and is respectively stored in the second device and the first device at the home terminal, the stored characteristic extraction models are respectively called by the second device and the first device, the user information of the target user identification stored by the second device and the first device is subjected to characteristic extraction, the first device provides the user characteristics extracted by the first device for the second device, the original user information stored by the first device does not need to be provided, and leakage of the user information is avoided. And the second equipment combines the user characteristics extracted by the second equipment and the first equipment to obtain combined characteristics, and the combined characteristics comprise the characteristics in the user information stored by the second equipment and the first equipment, so that the information quantity of the user characteristics is enriched, and the accuracy of combining the user characteristics is improved. Compared with the scheme that different user information is stored in a central device and the central device performs feature extraction, the embodiment of the application stores the user information in different devices respectively and performs feature extraction respectively, so that a decentralized feature extraction mode is realized, information leakage caused by the fact that the central device stores the user information is avoided, and the safety of the user information is improved.
And the fifth feature extraction model and the fourth feature extraction model are used for respectively extracting features of the user information in the first device and the second device to obtain a second combined user feature, and the combined feature comprises the features of the user information stored in the first device and the second device, so that the information content of the user feature of the target user identification is enriched, the accuracy of the combined user feature is improved, the user label can be obtained according to the obtained first combined user feature and the second combined user feature, and the accuracy of the user label is improved.
On the basis of the embodiment shown in fig. 6, before calling the first feature extraction model and the fifth feature extraction model to perform feature extraction, the first feature extraction model and the fifth feature extraction model need to be trained, and the specific process is described in the following embodiments.
Fig. 7 is a flowchart of a feature extraction model training method provided in an embodiment of the present application, and is applied to a first device and a second device, as shown in fig. 7, the method includes:
701. the first device sends the fourth feature extraction model to the second device.
In the embodiment of the present application, the first device stores a fifth feature extraction model and a fourth feature extraction model, and the second device stores a first feature extraction model and a second feature extraction model. And training a fifth feature extraction model in the first equipment and a first feature extraction model in the second equipment in a combined manner through the first sample user information in the second equipment and the second sample user information in the first equipment, wherein the fourth feature extraction model and the second feature extraction model do not need to be trained.
The fourth feature extraction model may be generated by the first device, or may be transmitted by another device. The fourth feature extraction model can be obtained by random initialization of the first equipment and can be directly used without training; alternatively, the fourth feature extraction model is a model that has been trained. The second feature extraction model may be generated by the second device or transmitted by another device. The second feature extraction model can be obtained by random initialization of second equipment and can be directly used without training; alternatively, the second feature extraction model is a model that has been trained.
702. And the second equipment receives the fourth feature extraction model sent by the first equipment.
703. The second device sends the second feature extraction model to the first device.
704. And the first equipment receives the second feature extraction model sent by the second equipment.
Steps 701-704 of the present embodiment are similar to steps 601-604 of the above embodiment, and are not described herein again.
The first device sends the fourth feature extraction model to the second device, and the second device sends the second feature extraction model to the first device, so that each device can subsequently call the feature extraction model sent by the opposite device to perform feature extraction on the sample user information stored locally, obtaining sample user features matched with the opposite device. The first feature extraction model and the fifth feature extraction model are trained through the obtained sample user features, and after training is completed, combined user features can be obtained through joint use of the first feature extraction model, the second feature extraction model, the fifth feature extraction model, and the fourth feature extraction model.
It should be noted that, in the process of training the first feature extraction model and the fifth feature extraction model, the first device has already sent the fourth feature extraction model to the second device, and the second device has already sent the second feature extraction model to the first device. After the training is completed, when the user label is obtained according to the embodiment shown in fig. 6 through the trained first feature extraction model and the trained fifth feature extraction model, steps 601-604 do not need to be repeated; the first device may subsequently call the stored second feature extraction model directly, and the second device may call the stored fourth feature extraction model directly.
705. The second device obtains the first sample user information.
The first sample user information is the user information of the sample user corresponding to the sample user identifier, and the first sample user information is stored in the second device. The first sample user information may include user information in multiple dimensions.
In one possible implementation, the obtaining the first sample user information may include: the method comprises the steps of obtaining a plurality of first user identifications stored in first equipment and a plurality of second user identifications stored in second equipment, selecting the same user identification from the plurality of first user identifications and the plurality of second user identifications to be used as a sample user identification, and using user information of the sample user identification stored in the second equipment as first sample user information.
In one possible implementation, the obtaining of the first sample user information may include: the first device sends a plurality of stored first user identifiers to the second device; the second device receives the first user identifiers, obtains a plurality of stored second user identifiers, selects the same user identifiers from the first user identifiers and the second user identifiers as the sample user identifiers, takes the stored user information of the sample user identifiers as the first sample user information, and sends the sample user identifiers to the first device; and the first device obtains the second sample user information according to the sample user identifiers.
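A minimal sketch of the identifier alignment described above, assuming identifiers are plain strings; a deployed system might instead use private set intersection so that raw identifiers are not exchanged in the clear:

# User identifiers stored on each device (illustrative values).
first_user_ids = {"u001", "u002", "u003"}    # stored in the first device
second_user_ids = {"u002", "u003", "u004"}   # stored in the second device

# Identifiers present on both sides become the sample user identifiers.
sample_user_ids = first_user_ids & second_user_ids   # {"u002", "u003"}

# The second device takes its stored information for these identifiers as the
# first sample user information; the first device does likewise for the
# second sample user information.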
In one possible implementation, the second device obtains a plurality of first sample user information.
706. And the second equipment calls the first feature extraction model to extract the features of the first sample user information to obtain the first sample user features.
707. And the first equipment calls the second feature extraction model, processes the second sample user information to obtain second sample user features, and sends the second sample user features to the second equipment.
708. And the second equipment receives the second sample user characteristics sent by the first equipment, and acquires the first sample combined user characteristics according to the first sample user characteristics and the second sample user characteristics.
709. And the second equipment calls the fourth feature extraction model, processes the first sample user information to obtain a third sample user feature, and sends the third sample user feature to the first equipment.
710. The first device receives a third sample user characteristic transmitted by the second device.
711. And calling a fifth feature extraction model by the first equipment, and performing feature extraction on the second sample user information to obtain fourth sample user features.
And the second sample user information and the first sample user information belong to the same sample user identifier, and the second sample user information is stored in the first equipment.
712. And the first equipment acquires the second sample combined user characteristic according to the fourth sample user characteristic and the third sample user characteristic, and sends the second sample combined user characteristic to the second equipment.
Steps 706-712 in this embodiment are similar to steps 605-611 in the above embodiment, and are not described herein again.
713. And the second equipment receives the second sample combined user characteristics sent by the first equipment, and trains the first characteristic extraction model according to the first sample combined user characteristics, the second sample combined user characteristics and the first sample user information.
The first sample combined user feature and the second sample combined user feature are obtained through the first sample user information and the second sample user information. The error of the first feature extraction model can therefore be obtained through the first sample combined user feature, the second sample combined user feature, and the first sample user information, and the first feature extraction model is adjusted according to this error so as to improve its accuracy.
In the process of training the first feature extraction model, steps 706-713 are repeatedly executed over the plurality of first sample user information in the second device and the plurality of second sample user information in the first device, and the first feature extraction model is iteratively trained, so that the trained first feature extraction model is obtained.
In one possible implementation, in response to the number of iterations being equal to a preset number, training of the first feature extraction model is stopped. The preset number of times may be any set number of times, such as 20 times, 50 times, and the like.
It should be noted that, in the embodiment of the present application, only the process of training the first feature extraction model is described, and the process of training the fifth feature extraction model in the first device may include: and after the second equipment obtains the first sample combination user characteristic, the second equipment sends the first sample combination user characteristic to the first equipment, the first equipment receives the first sample combination user characteristic sent by the second equipment, and the fifth characteristic extraction model is trained according to the first sample combination user characteristic, the second sample combination user characteristic and the second sample user information.
In one possible implementation, step 713 may include the following steps 7131-7134:
7131. and the second equipment acquires the predicted user label of the sample user identifier according to the first sample combined user characteristic and the second sample combined user characteristic.
The predicted user label is the user label of the sample user corresponding to the sample user identifier and is predicted through the feature extraction model and the sample user information.
Since the first sample combined user feature and the second sample combined user feature both include user features included in different user information and are obtained after feature extraction is performed by different feature extraction models, the predicted user label of the sample user identifier can be obtained by processing the first sample combined user feature and the second sample combined user feature.
In one possible implementation, this step 7131 may include: obtaining the predicted user label Q of the sample user identifier according to the first sample combined user feature P1 and the second sample combined user feature P2, where the first sample combined user feature P1, the second sample combined user feature P2, and the predicted user label Q satisfy the following relationship:
Q = sigmoid(P1 + P2)
sigmoid(x) = 1 / (1 + e^(-x))
where e represents the base of the natural logarithm function and sigmoid() represents the logistic regression function.
7132. The second device determines a difference between the predicted user label and a sample user label corresponding to the first sample user information as a first weight.
The sample user label is the real user label of the sample user corresponding to the first sample user information and is used for representing the preference of the sample user; for example, the sample user label may indicate that the sample user is a high-consumption user, a low-consumption user, a user who loves traveling, and the like. The sample user label may be obtained by manual labeling or may be sent by another device.
The first weight is used to represent a difference between a predicted user label of the sample user identification and the sample user label. The predicted user label is obtained through the feature extraction model and the sample user information, and the sample user label is a real user label corresponding to the sample user identifier, so that a difference exists between the predicted user label and the sample user label, and the difference between the predicted user label and the sample user label is determined as a first weight, so that the first feature extraction model can be adjusted according to the first weight in the following process, and the accuracy of the first feature extraction model is improved.
In one possible implementation, this step 7132 may comprise: determining a difference between the predicted user label Q and the sample user label Y as a first weight m, wherein the predicted user label Q, the sample user label Y and the first weight m satisfy the following relationship:
m=Q-Y
7133. and the second equipment acquires a first adjusting parameter of the first feature extraction model according to the first weight and the first sample user information.
Wherein the first adjustment parameter is a parameter for adjusting the first feature extraction model. In the process of obtaining the predicted user label of the sample user identifier, the first feature extraction model performs feature extraction on the first sample user information, so that a first adjustment parameter of the first feature extraction model can be obtained through the first weight and the first sample user information, and the first feature extraction model can be adjusted through the first adjustment parameter subsequently.
In a possible implementation manner, the first weight is a first weight matrix, the first sample user information is a first sample user information matrix, and step 7133 may include: and taking the product of the first weight matrix and the first sample user information matrix as a first adjusting parameter of the first feature extraction model.
In one possible implementation manner, the first adjustment parameter g1 of the first feature extraction model is obtained according to the first weight m and the first sample user information X1, where the first weight m, the first sample user information X1, and the first adjustment parameter g1 satisfy the following relationship:
g1=m×X1
7134. and the second equipment adjusts the first feature extraction model according to the first adjustment parameter.
And adjusting the first feature extraction model through the first adjustment parameter so as to reduce the difference between the predicted user label obtained by the first feature extraction model and the sample user label, thereby ensuring that the trained first feature extraction model is accurate.
In one possible implementation, the first feature extraction model is a first feature extraction model matrix W1, and this step 7134 may include: adjusting the first feature extraction model W1 according to the first adjustment parameter g1 so as to satisfy the following relationship:
W1=W1-g1
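Putting steps 7131-7134 together, a minimal single-sample sketch, under the simplifying assumption that the model is a 1-D vector and the combined features are scalars (the embodiment itself works with matrices); all values are illustrative:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n1 = 5
X1 = np.random.rand(n1)   # first sample user information (held by the second device)
W1 = np.random.rand(n1)   # first feature extraction model
P1 = W1 @ X1              # stand-in for the first sample combined user feature
P2 = 0.3                  # stand-in for the received second sample combined feature
Y = 1.0                   # sample user label

Q = sigmoid(P1 + P2)      # predicted user label (step 7131)
m = Q - Y                 # first weight (step 7132)
g1 = m * X1               # first adjustment parameter, g1 = m × X1 (step 7133)
W1 = W1 - g1              # adjust the model, W1 = W1 - g1 (step 7134)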
in one possible implementation, after step 7134, the method further comprises: and obtaining a loss value of the first feature extraction model according to the predicted user label and the sample user label, and stopping training the first feature extraction model in response to the fact that the loss value is not greater than a preset threshold value.
The preset threshold is any preset value, such as 0.3 or 0.4. The loss value of the first feature extraction model is used for representing the similarity difference between the predicted user label and the sample user label, and the smaller the loss value is, the more accurate the first feature extraction model is. In response to the loss value of the first feature extraction model not being greater than the preset threshold value, indicating that the trained first feature extraction model has satisfied the requirement at that time, the iterative training of the first feature extraction model may be stopped.
In one possible implementation, after step 7132, training the fifth feature extraction model in the first device may include the following steps 7135-7137:
7135. the second device transmits the first weight to the first device.
This step is similar to step 601 and will not be described herein again.
7136. And the first equipment receives the first weight sent by the second equipment, and acquires a second adjusting parameter of the fifth feature extraction model according to the first weight and the second sample user information.
This step is similar to step 7133 described above and will not be described further herein.
7137. And the first equipment adjusts the fifth feature extraction model according to the second adjustment parameter.
This step is similar to step 7134 described above and will not be described further herein.
The second device obtains the first weight through the first sample combined user feature, the second sample combined user feature, and the sample user label, and sends the first weight to the first device, so that the first device can train the fifth feature extraction model according to the first weight. This realizes joint training by the first device and the second device and ensures synchronous training of the first feature extraction model in the second device and the fifth feature extraction model in the first device. Moreover, the second device does not need to send the sample user label to the first device, which avoids leakage of the sample user label and improves the security of the sample user label.
In one possible implementation, after the step 7137, the method further comprises:
7138. In response to the loss value of the first feature extraction model being not greater than the preset threshold, the second device sends a stop training notification to the first device.
Wherein the stop training notification is to instruct the first device to stop training the fifth feature extraction model.
7139. And the first equipment receives the training stopping notification sent by the second equipment, and stops training the fifth feature extraction model according to the training stopping notification.
In this embodiment of the application, the first feature extraction model in the second device and the fifth feature extraction model in the first device are jointly trained by the two devices. Therefore, when the loss value of the first feature extraction model is not greater than the preset threshold, the first feature extraction model meets the requirement and its training needs to be stopped, and the fifth feature extraction model likewise meets the requirement and its training needs to be stopped. Therefore, the second device sends the stop training notification to the first device to cause the first device to stop training the fifth feature extraction model.
It should be noted that, in this embodiment of the application, the second device trains the first feature extraction model according to the first sample combined user feature, the second sample combined user feature, and the first sample user information. In another embodiment, step 701 and steps 709-713 need not be executed, and the second device trains the first feature extraction model according to the first sample combined user feature and the first sample user information.
It should be noted that, in this embodiment of the application, the description is given by taking the example that the second device acquires the sample user label. In another embodiment, the step in 712 of the first device sending the second sample combined user feature to the second device and step 713 need not be executed; instead, the second device may send the first sample combined user feature to the first device, the first device receives the first sample combined user feature sent by the second device, and the first device trains the fifth feature extraction model according to the first sample combined user feature, the second sample combined user feature, and the second sample user information.
According to the method provided by this embodiment of the application, the first feature extraction model is jointly trained through the first sample user information in the second device and the second sample user information in the first device, so that the sample user information is enriched and the accuracy of the trained first feature extraction model is improved. In the training process, neither the first sample user information nor the second sample user information needs to be transmitted, which avoids leakage of the first sample user information or the second sample user information and improves the security of the user information.
On the basis of the embodiment shown in fig. 6, when the first device sends the feature extraction model to the second device, the feature extraction model after encryption may be sent, and when the second device sends the feature extraction model to the first device, the feature extraction model after encryption may be sent, and the specific process is described in the following embodiments.
Fig. 8 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, and is applied to a first device and a second device, as shown in fig. 8, the method includes:
801. and the first equipment encrypts the sixth feature extraction model according to the second public key to obtain a fourth feature extraction model, and sends the fourth feature extraction model to the second equipment.
The first device holds a second public key and a second private key corresponding to the second public key. The second public key is a key used for encrypting data, the second private key is a key used for decrypting data encrypted with the second public key, and the second public key and the second private key are used as a key pair. The second public key and the second private key may be sent to the first device by another device, or may be randomly generated by the first device. The sixth feature extraction model is a model for extracting features of the user and is stored in the first device. When the sixth feature extraction model is encrypted with the second public key, the Paillier cryptosystem (a homomorphic encryption algorithm) or other homomorphic encryption algorithms may be used.
The first device encrypts the sixth feature extraction model through the second public key to obtain a fourth feature extraction model, and then transmits the encrypted fourth feature extraction model, so that leakage of the sixth feature extraction model is avoided, and safety of the sixth feature extraction model is improved.
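As one concrete possibility, the open-source python-paillier package (imported as phe) implements the Paillier cryptosystem named here; the element-wise encryption strategy, model values, and variable names below are illustrative assumptions:

from phe import paillier

# Second public/private key pair, randomly generated by the first device.
public_key2, private_key2 = paillier.generate_paillier_keypair()

sixth_model = [0.12, -0.57, 0.33]  # illustrative entries of the sixth model
fourth_model = [public_key2.encrypt(w) for w in sixth_model]  # encrypted model

# Paillier is additively homomorphic: ciphertexts can be added and multiplied
# by plaintext scalars without decryption.
combined = fourth_model[0] * 2 + fourth_model[1]
assert abs(private_key2.decrypt(combined) - (2 * 0.12 - 0.57)) < 1e-9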
802. And the second equipment receives the fourth feature extraction model sent by the first equipment.
This step is similar to step 602 above and will not be described again.
803. And the second equipment encrypts the third feature extraction model according to the first public key to obtain a second feature extraction model, and sends the second feature extraction model to the first equipment.
The second device holds a first public key and a first private key corresponding to the first public key. The first public key is a key used for encrypting data, the first private key is a key used for decrypting data encrypted with the first public key, and the first public key and the first private key are used as a key pair. The first public key and the first private key may be sent to the second device by another device, or may be randomly generated by the second device. The third feature extraction model is a model for extracting features of the user and is stored in the second device.
The second device encrypts the third feature extraction model with the first public key to obtain the second feature extraction model and then transmits the encrypted second feature extraction model, which prevents the third feature extraction model from being leaked and improves the security of the third feature extraction model.
804. And the first equipment receives the second feature extraction model sent by the second equipment.
This step is similar to step 604 above and will not be described further herein.
805. And the second equipment calls the first feature extraction model to extract the features of the first user information of the stored target user identification to obtain the first user features.
806. And the first equipment calls a second feature extraction model, processes the second user information of the stored target user identification to obtain second user features, and sends the second user features to the second equipment.
This step is similar to step 606 described above and will not be described herein.
807. And the second device receives the second user characteristic sent by the first device, and decrypts the second user characteristic according to the first private key corresponding to the first public key to obtain the decrypted user characteristic.
The second user characteristic is obtained by the first device calling the second characteristic extraction model to process the second user information, and the second characteristic extraction model is obtained by encrypting the first public key, so that the second user characteristic is also the encrypted user characteristic and needs to be decrypted by the second device according to the first private key corresponding to the first public key.
808. And the second equipment combines the first user characteristic and the decrypted user characteristic to obtain a first combined user characteristic of the target user identifier.
This step is similar to step 607 and will not be described again.
809. And the second equipment calls the fourth feature extraction model, processes the first user information to obtain a third user feature, and sends the third user feature to the first equipment.
This step is similar to step 608 described above and will not be described herein.
810. And the first equipment receives the third user characteristic sent by the second equipment, and decrypts the third user characteristic according to a second private key corresponding to the second public key to obtain the decrypted user characteristic.
The third user characteristic is obtained by the second device calling the fourth characteristic extraction model to process the first user information, and the fourth characteristic extraction model is obtained by encrypting the second public key, so that the third user characteristic is also the encrypted user characteristic and needs the first device to decrypt according to the second private key corresponding to the second public key.
811. And calling a fifth feature extraction model by the first equipment to perform feature extraction on the second user information to obtain a fourth user feature.
This step is similar to step 610 described above and will not be described further herein.
812. And the first equipment combines the fourth user characteristic and the decrypted user characteristic to obtain a second combined user characteristic of the target user identifier, and sends the second combined user characteristic to the second equipment.
813. And the second equipment receives the second combined user characteristic sent by the first equipment, and acquires the user label of the target user identification according to the first combined user characteristic and the second combined user characteristic.
The steps 812-813 in the embodiment of the present application are similar to the steps 611-612 in the embodiment described above, and are not described herein again.
It should be noted that the description takes obtaining the user label of the target user identifier as an example. After the second device obtains the first combined user feature and the second combined user feature, it may also obtain other information of the target user identifier according to them, such as the category to which the target user identifier belongs or the risk level of the target user identifier, which is not limited in this application.
According to the method provided by the embodiment of the application, the model for extracting the user characteristics is split and is respectively stored in the second device and the first device at the home terminal, the stored characteristic extraction models are respectively called by the second device and the first device, the user information of the target user identification stored by the second device and the first device is subjected to characteristic extraction, the first device provides the user characteristics extracted by the first device for the second device, the original user information stored by the first device does not need to be provided, and leakage of the user information is avoided. And the second equipment combines the user characteristics extracted by the second equipment and the first equipment to obtain combined characteristics, and the combined characteristics comprise the characteristics in the user information stored by the second equipment and the first equipment, so that the information quantity of the user characteristics is enriched, and the accuracy of combining the user characteristics is improved. Compared with the scheme that different user information is stored in a central device and the central device performs feature extraction, the embodiment of the application stores the user information in different devices respectively and performs feature extraction respectively, so that a decentralized feature extraction mode is realized, information leakage caused by the fact that the central device stores the user information is avoided, and the safety of the user information is improved.
And when the feature extraction model is sent between the first device and the second device, the sent feature extraction model is encrypted, so that the leakage of the feature extraction model is avoided, and the security of the feature extraction model is ensured, thereby improving the security of the sent user feature.
On the basis of the embodiment shown in fig. 8, before calling the first feature extraction model and the fifth feature extraction model to perform feature extraction, the first feature extraction model and the fifth feature extraction model need to be trained, and the specific process is described in the following embodiments.
Fig. 9 is a flowchart of a feature extraction model training method provided in an embodiment of the present application, and is applied to a computer device, as shown in fig. 9, the method includes:
901. and the first equipment encrypts the sixth feature extraction model according to the second public key to obtain a fourth feature extraction model, and sends the fourth feature extraction model to the second equipment.
902. And the second equipment receives the fourth feature extraction model sent by the first equipment.
903. And the second equipment encrypts the third feature extraction model according to the first public key to obtain a second feature extraction model, and sends the second feature extraction model to the first equipment.
904. And the first equipment receives the second feature extraction model sent by the second equipment.
Steps 901-904 of the present embodiment are similar to steps 801-804, and are not described herein again.
It should be noted that, in the process of training the first feature extraction model and the fifth feature extraction model, the first device has already sent the fourth feature extraction model to the second device, and the second device has already sent the second feature extraction model to the first device. After the training is completed, when the user label is obtained according to the embodiment shown in fig. 8 through the trained first feature extraction model and the trained fifth feature extraction model, steps 801-804 do not need to be repeated; the first device may subsequently call the stored second feature extraction model directly, and the second device may call the stored fourth feature extraction model directly.
905. The second device obtains the first sample user information.
This step is similar to step 705 above and will not be described again here.
906. And the second equipment calls the first feature extraction model to extract the features of the first sample user information to obtain the first sample user features.
907. The first device calls the second feature extraction model, processes the second sample user information to obtain the second sample user feature, and sends the second sample user feature to the second device.
908. The second device receives the second sample user feature sent by the first device, and decrypts the second sample user feature according to the first private key corresponding to the first public key to obtain the decrypted sample user feature.
909. And the second equipment combines the first sample user characteristic and the decrypted sample user characteristic to obtain a first sample combined user characteristic.
910. And the second equipment calls the fourth feature extraction model, processes the first sample user information to obtain a third sample user feature, and sends the third sample user feature to the first equipment.
911. The first device receives the third sample user feature sent by the second device, and decrypts the third sample user feature according to the second private key corresponding to the second public key to obtain the decrypted sample user feature.
912. And calling a fifth feature extraction model by the first equipment, and performing feature extraction on the second sample user information to obtain fourth sample user features.
913. And the first equipment combines the fourth sample user characteristic and the decrypted sample user characteristic to obtain a second sample combined user characteristic, and sends the second sample combined user characteristic to the second equipment.
The steps 906-913 of the embodiment of the present application are similar to the steps 805-812, and are not described herein again.
914. And the second equipment receives the second sample combined user characteristics sent by the first equipment, and trains the first characteristic extraction model according to the first sample combined user characteristics, the second sample combined user characteristics and the first sample user information.
The first sample combined user feature and the second sample combined user feature are obtained through the first sample user information and the second sample user information. The error of the first feature extraction model can therefore be obtained through the first sample combined user feature, the second sample combined user feature, and the first sample user information, and the first feature extraction model is adjusted according to this error so as to improve its accuracy.
In the training process of the first feature extraction model, steps 906-914 are repeatedly executed over the plurality of first sample user information in the second device and the plurality of second sample user information in the first device, and the first feature extraction model is iteratively trained, so as to obtain the trained first feature extraction model.
In one possible implementation, in response to the number of iterations being equal to a preset number, training of the first feature extraction model is stopped. The preset number of times may be any set number of times, such as 20 times, 50 times, and the like.
It should be noted that, in the embodiment of the present application, only the process of training the first feature extraction model is described, and the process of training the fifth feature extraction model in the first device may include: and after the second equipment obtains the first sample combination user characteristic, the second equipment sends the first sample combination user characteristic to the first equipment, the first equipment receives the first sample combination user characteristic sent by the second equipment, and the fifth characteristic extraction model is trained according to the first sample combination user characteristic, the second sample combination user characteristic and the second sample user information.
In one possible implementation, this step 914 may include the steps of:
9141. and the second equipment acquires the predicted user label of the sample user identifier according to the first sample combined user characteristic and the second sample combined user characteristic.
9142. The second device determines a difference between the predicted user label and a sample user label corresponding to the first sample user information as a first weight.
9143. And the second equipment acquires a first adjusting parameter of the first feature extraction model according to the first weight and the first sample user information.
9144. And the second equipment adjusts the first feature extraction model according to the first adjustment parameter.
Steps 9141-9144 in the embodiment of the present application are similar to steps 7131-7134 described above, and are not described herein again.
In another possible implementation, after step 9142, training the fifth feature extraction model in the first device may include the following steps 9145-9148:
9145. and the second equipment encrypts the first weight according to the first public key to obtain a second weight, and sends the second weight to the first equipment.
This step is similar to step 803 described above and will not be described further herein.
9146. And the first equipment receives the second weight sent by the second equipment, acquires a second adjusting parameter according to the second weight and the second sample user information, and sends the second adjusting parameter to the second equipment.
This step is similar to step 7133 described above and will not be described further herein.
9147. And the second equipment receives the second adjustment parameter sent by the first equipment, decrypts the second adjustment parameter according to the first private key corresponding to the first public key to obtain a third adjustment parameter, and sends the third adjustment parameter to the first equipment.
Since the second adjustment parameter is obtained from the second weight and the second sample user information, and the second weight is obtained by the second device encrypting the first weight according to the first public key, the second adjustment parameter is also an encrypted parameter. The second device therefore needs to decrypt the second adjustment parameter according to the first private key corresponding to the first public key to obtain the decrypted third adjustment parameter.
This step is similar to step 807 described above and will not be described again.
9148. And the first equipment receives a third adjusting parameter sent by the second equipment, and adjusts the fifth feature extraction model according to the third adjusting parameter.
This step is similar to step 7134 described above and will not be described further herein.
The second device obtains the first weight through the first sample combined user feature, the second sample combined user feature, and the sample user label. To ensure that the first device can train the fifth feature extraction model according to the first weight while avoiding leakage of the first weight, the second device sends the second weight obtained by encrypting the first weight to the first device; the first device obtains the encrypted second adjustment parameter according to the second weight; the second device decrypts the second adjustment parameter and sends the decrypted third adjustment parameter to the first device; and the first device trains the fifth feature extraction model according to the third adjustment parameter. This realizes the joint training process between the first device and the second device, avoids leakage of the first weight, and improves the security of the first weight.
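Under the same python-paillier assumption as the earlier sketch, the exchange in steps 9145-9147 can be outlined as follows; because a Paillier ciphertext can be multiplied by plaintext data, the first device never sees the first weight in the clear (values and names are illustrative):

from phe import paillier

# First key pair, generated by the second device.
public_key1, private_key1 = paillier.generate_paillier_keypair()

m = 0.42                        # first weight, held by the second device
X2_sample = [0.5, 1.0, -0.25]   # second sample user information, on the first device

# Step 9145 (second device): encrypt the first weight to get the second weight.
enc_m = public_key1.encrypt(m)

# Step 9146 (first device): encrypted second adjustment parameter.
enc_g2 = [enc_m * x for x in X2_sample]

# Step 9147 (second device): decrypt to obtain the third adjustment parameter.
g3 = [private_key1.decrypt(c) for c in enc_g2]   # equals [m * x for x in X2_sample]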
In one possible implementation, after step 9148, the method further comprises:
9149. In response to the loss value of the first feature extraction model being not greater than the preset threshold, the second device sends a stop training notification to the first device.
9150. And the first equipment receives the training stopping notification sent by the second equipment, and stops training the fifth feature extraction model according to the training stopping notification.
Wherein the stop training notification is to instruct the first device to stop training the fifth feature extraction model.
In this embodiment of the application, the first feature extraction model in the second device and the fifth feature extraction model in the first device are jointly trained by the two devices. Therefore, when the loss value of the first feature extraction model is not greater than the preset threshold, the first feature extraction model meets the requirement and its training needs to be stopped, and the fifth feature extraction model likewise meets the requirement and its training needs to be stopped. Therefore, the second device sends the stop training notification to the first device to cause the first device to stop training the fifth feature extraction model.
According to the method provided by this embodiment of the application, the first feature extraction model is jointly trained through the first sample user information in the second device and the second sample user information in the first device, so that the sample user information is enriched and the accuracy of the trained first feature extraction model is improved. In the training process, neither the first sample user information nor the second sample user information needs to be transmitted, which avoids leakage of the first sample user information or the second sample user information and improves the security of the user information.
In addition, in the training process of the feature extraction model, when the feature extraction model is sent between the first device and the second device, the sent feature extraction model is encrypted, so that the leakage of the feature extraction model is avoided, and the safety of the feature extraction model is ensured. The first weight of the adjusting model is encrypted, so that the leakage of the first weight is avoided, and the safety of the first weight is improved.
On the basis of the embodiment shown in fig. 6, in the process of transmitting the user feature and the combined user feature between the first device and the second device, a noise feature may be added to the user feature and the combined user feature to avoid leakage of the user feature, and the specific process is described in the following embodiments.
Fig. 10 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, and is applied to a first device and a second device, as shown in fig. 10, the method includes:
1001. the first device sends the fourth feature extraction model to the second device.
1002. And the second equipment receives the fourth feature extraction model sent by the first equipment.
1003. The second device sends the second feature extraction model to the first device.
1004. And the first equipment receives the second feature extraction model sent by the second equipment.
1005. And the second equipment calls the first feature extraction model to extract the features of the first user information of the stored target user identification to obtain the first user features.
1006. And calling the second feature extraction model by the first equipment, and performing feature extraction on the second user information to obtain a fifth user feature.
In this embodiment of the application, steps 1001-1006 are similar to steps 601-606 described above and are not repeated here.
1007. And the first equipment performs fusion processing on the fifth user characteristic and the first noise characteristic to obtain a second user characteristic, and sends the second user characteristic to the second equipment.
The first noise feature is randomly generated by the first device, and the first noise may be represented by a vector, a matrix, or other forms.
Because the fifth user feature is obtained by performing feature extraction on the second user information, the first device fuses the fifth user feature with the first noise feature so that the resulting second user feature contains the first noise feature. Even if the second user feature leaks, the second user information cannot be recovered from it, which reduces the risk of leakage of the second user information and improves its security.
1008. And the second equipment receives the second user characteristic sent by the first equipment, and performs combination processing on the first user characteristic and the second user characteristic to obtain a fifth combined user characteristic.
This step is similar to step 607 and will not be described again.
1009. And the second equipment performs fusion processing on the fifth combined user characteristic and the fourth noise characteristic to obtain the first combined user characteristic of the target user identifier.
The fourth noise feature is randomly generated by the second device, and the fourth noise may be represented by a vector, a matrix, or in other forms. The third noise feature is opposite to the fourth noise feature. Fusing the fifth combined user feature with the fourth noise feature makes the resulting first combined user feature contain the fourth noise feature, so that the third noise in the second combined user feature can be cancelled subsequently.
1010. And the second equipment calls a fourth feature extraction model to extract features of the first user information to obtain sixth user features.
This step is similar to step 608 described above and will not be described herein.
1011. And the second equipment performs fusion processing on the sixth user characteristic and the third noise characteristic to obtain a third user characteristic, and sends the third user characteristic to the first equipment.
Wherein the third noise characteristic is randomly generated by the second device, the third noise can be represented by a vector, a matrix or other forms, and the third noise characteristic is opposite to the fourth noise characteristic.
Because the sixth user feature is obtained by performing feature extraction on the first user information, the second device fuses the sixth user feature with the third noise feature so that the resulting third user feature contains the third noise feature. Even if the third user feature leaks, the first user information cannot be recovered from it, which reduces the risk of leakage of the first user information and improves its security.
1012. The first device receives the third user feature sent by the second device.
1013. And calling a fifth feature extraction model by the first equipment to perform feature extraction on the second user information to obtain a fourth user feature.
1014. And the first equipment combines the fourth user characteristic and the third user characteristic to obtain a third combined user characteristic.
Steps 1012-1014 in this embodiment of the application are similar to steps 609-611 described above and are not repeated here.
1015. And the first equipment performs fusion processing on the third combined user characteristic and the second noise characteristic to obtain a second combined user characteristic of the target user identifier, and sends the second combined user characteristic to the second equipment.
Wherein the first noise characteristic is opposite to the second noise characteristic. And fusing the third combined user characteristic and the second noise characteristic to obtain a second combined user characteristic containing the second noise characteristic, so that the first noise in the first combined user characteristic can be removed conveniently in the follow-up process.
1016. And the second equipment receives the second combined user characteristic sent by the first equipment, and performs combined processing on the first combined user characteristic and the second combined user characteristic to obtain a fourth combined user characteristic.
Since the first noise feature was fused into the second user feature, combining the first user feature with the second user feature carries the first noise feature into the fifth combined user feature; fusing the fifth combined user feature with the fourth noise feature then yields a first combined user feature that contains both the first noise feature and the fourth noise feature.

Likewise, since the third noise feature was fused into the third user feature, combining the third user feature with the fourth user feature carries the third noise feature into the third combined user feature; fusing the third combined user feature with the second noise feature then yields a second combined user feature that contains both the third noise feature and the second noise feature.

Because the first noise feature is opposite to the second noise feature and the third noise feature is opposite to the fourth noise feature, combining the first combined user feature with the second combined user feature cancels the first noise feature against the second noise feature and the third noise feature against the fourth noise feature. This realizes denoising of the combined user features, so the resulting fourth combined user feature contains no noise feature, as the sketch below illustrates.
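To make the noise bookkeeping concrete, the following is a minimal numpy sketch of the masking and cancellation in steps 1007-1016, assuming user features are real-valued vectors, "fusion processing" is elementwise addition of a mask, and each "opposite" noise feature is the negation of its counterpart; the variable names are illustrative, and the feature-extraction products (W1*X1 and so on) are replaced by random stand-in vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # feature dimension (illustrative)

# Stand-ins for the feature-extraction products W1*X1, R2*X2, W2*X2, R1*X1.
w1_x1, r2_x2 = rng.normal(size=d), rng.normal(size=d)
w2_x2, r1_x1 = rng.normal(size=d), rng.normal(size=d)

# Each device draws one random mask; its "opposite" noise is the negation.
noise2 = rng.normal(size=d)  # first device's mask
noise1 = rng.normal(size=d)  # second device's mask

# First device -> second device: fifth user feature masked with the first noise.
second_user_feature = r2_x2 - noise2
# Second device: fifth combined feature, then fused with the fourth noise.
first_combined = w1_x1 + second_user_feature + noise1

# Second device -> first device: sixth user feature masked with the third noise.
third_user_feature = r1_x1 - noise1
# First device: third combined feature, then fused with the second noise.
second_combined = w2_x2 + third_user_feature + noise2

# Second device combines both; the masks cancel pairwise.
fourth_combined = first_combined + second_combined
assert np.allclose(fourth_combined, w1_x1 + r2_x2 + w2_x2 + r1_x1)
```

Neither device ever transmits an unmasked feature, yet the fourth combined user feature the second device ends up with is exactly the noise-free sum.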
1017. And the second device acquires the user label of the target user identifier according to the fourth combined user feature.
This step is similar to step 612 described above and will not be described further herein.
According to the method provided by this embodiment of the application, the model for extracting user features is split and stored separately on the second device at the home terminal and on the first device. The second device and the first device each call the feature extraction models they store to perform feature extraction on the user information of the target user identifier that they hold. The first device provides the user features it extracts to the second device without providing the original user information it stores, which avoids leakage of the user information. The second device then combines the user features extracted by the two devices to obtain combined features that contain features from the user information stored on both devices, enriching the information content of the user features and improving the accuracy of the combined user features. Compared with a scheme in which different user information is stored on one central device that performs feature extraction, this embodiment stores the user information on different devices that each perform feature extraction, realizing a decentralized feature extraction mode, avoiding the information leakage that storing user information on a central device would entail, and improving the security of the user information.
In addition, in the interaction process of the first device and the second device, noise is added to the user characteristics and the combined user characteristics transmitted between the first device and the second device, so that the leakage of user information caused by the leakage of the user characteristics and the combined user characteristics is avoided, and the safety of the user information is improved.
On the basis of the embodiment shown in fig. 10, before calling the first feature extraction model and the fifth feature extraction model to perform feature extraction, the first feature extraction model and the fifth feature extraction model need to be trained, and the specific process is described in the following embodiments.
Fig. 11 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, applied to a first device and a second device. As shown in fig. 11, the method includes:
1101. the first device sends the fourth feature extraction model to the second device.
1102. And the second equipment receives the fourth feature extraction model sent by the first equipment.
1103. The second device sends the second feature extraction model to the first device.
1104. And the first equipment receives the second feature extraction model sent by the second equipment.
Steps 1101-1104 in the embodiment of the application are similar to steps 601-604, and are not described herein again.
It should be noted that, in the process of training the first feature extraction model and the fifth feature extraction model, the first device has already sent the fourth feature extraction model to the second device, and the second device has already sent the second feature extraction model to the first device. After training is completed, when the user tag is obtained through the trained first and fifth feature extraction models according to the embodiment shown in fig. 10, steps 1001-1004 do not need to be executed again: the first device may subsequently call the stored second feature extraction model directly, and the second device may call the stored fourth feature extraction model directly.
1105. The second device obtains the first sample user information.
This step is similar to step 705 above and will not be described again here.
1106. And the second equipment calls the first feature extraction model to extract the features of the first sample user information to obtain the first sample user features.
1107. And calling the second feature extraction model by the first device, and performing feature extraction on the second sample user information to obtain a fifth sample user feature.
1108. And the first equipment performs fusion processing on the fifth sample user characteristic and the first noise characteristic to obtain a second sample user characteristic, and sends the second sample user characteristic to the second equipment.
1109. And the second equipment receives the second sample user characteristics sent by the first equipment, and performs combined processing on the first sample user characteristics and the second sample user characteristics to obtain fifth sample combined user characteristics.
1110. And the second equipment performs fusion processing on the fifth sample combination user characteristic and the fourth noise characteristic to obtain a first sample combination user characteristic.
1111. And calling the fourth feature extraction model by the second device, and performing feature extraction on the first sample user information to obtain a sixth sample user feature.
1112. And the second equipment performs fusion processing on the sixth sample user characteristic and the third noise characteristic to obtain a third sample user characteristic, and sends the third sample user characteristic to the first equipment.
1113. The first device receives a third sample user characteristic transmitted by the second device.
1114. And calling the fifth feature extraction model by the first device, and performing feature extraction on the second sample user information to obtain a fourth sample user feature.
1115. And the first equipment performs combination processing on the fourth sample user characteristics and the third sample user characteristics to obtain third sample combined user characteristics.
1116. And the first equipment performs fusion processing on the third sample combined user characteristic and the second noise characteristic to obtain a second sample combined user characteristic, and sends the second sample combined user characteristic to the second equipment.
1117. And the second equipment receives the second sample combined user characteristic sent by the first equipment, and performs combined processing on the first sample combined user characteristic and the second sample combined user characteristic to obtain a fourth sample combined user characteristic.
The steps 1106-1117 in the embodiment of the present application are similar to the steps 1005-1016 described above, and are not described herein again.
1118. And the second equipment trains the first characteristic extraction model according to the fourth sample combination user characteristic and the first sample user information.
This step is similar to step 713 described above and is not repeated here.
According to the method provided by this embodiment of the application, the first feature extraction model is jointly trained using the first sample user information on the second device and the second sample user information on the first device, which enriches the sample user information and improves the accuracy of the trained first feature extraction model. In the training process, neither the first sample user information nor the second sample user information needs to be transmitted, so leakage of the sample user information is avoided and the security of the user information is improved.
In addition, in the training process of the feature extraction model, noise features are added to the user features and the combined user features transmitted between the first device and the second device, so that the user information leakage caused by the leakage of the user features and the combined user features is avoided, and the safety of the user information is improved.
The scheme of obtaining the user tag with encrypted models in the embodiment of fig. 8 may be combined with the scheme of obtaining the user tag with added noise features in fig. 10: in the process of the first device and the second device jointly obtaining the user tag, the feature extraction models that are sent can be encrypted and noise features can be added to the sent user features and combined user features. This is described in detail in the following embodiments.
Fig. 12 is a flowchart of a user feature obtaining method provided in an embodiment of the present application, applied to a first device and a second device. As shown in fig. 12, the method includes:
1201. and the first equipment encrypts the sixth feature extraction model according to the second public key to obtain a fourth feature extraction model, and sends the fourth feature extraction model to the second equipment.
1202. And the second equipment receives the fourth feature extraction model sent by the first equipment.
1203. And the second equipment encrypts the third feature extraction model according to the first public key to obtain a second feature extraction model, and sends the second feature extraction model to the first equipment.
1204. And the first equipment receives the second feature extraction model sent by the second equipment.
1205. And the second equipment calls the first feature extraction model to extract the features of the first user information of the stored target user identification to obtain the first user features.
1206. And calling the second feature extraction model by the first equipment, and performing feature extraction on the second user information to obtain a fifth user feature.
1207. And the first equipment performs fusion processing on the fifth user characteristic and the first noise characteristic to obtain a second user characteristic, and sends the second user characteristic to the second equipment.
1208. And the second device receives the second user characteristic sent by the first device, and decrypts the second user characteristic according to the first private key corresponding to the first public key to obtain the decrypted user characteristic.
1209. And the second equipment combines the first user characteristic and the decrypted user characteristic to obtain a fifth combined user characteristic.
1210. And the second equipment performs fusion processing on the fifth combined user characteristic and the fourth noise characteristic to obtain the first combined user characteristic of the target user identifier.
1211. And the second equipment calls a fourth feature extraction model to extract features of the first user information to obtain sixth user features.
1212. And the second equipment performs fusion processing on the sixth user characteristic and the third noise characteristic to obtain a third user characteristic, and sends the third user characteristic to the first equipment.
1213. And the first device receives the third user feature sent by the second device, and decrypts the third user feature according to a second private key corresponding to the second public key to obtain the decrypted user feature.
1214. And calling a fifth feature extraction model by the first equipment to perform feature extraction on the second user information to obtain a fourth user feature.
1215. And the first equipment combines the fourth user characteristic and the decrypted user characteristic to obtain a third combined user characteristic.
1216. And the first equipment performs fusion processing on the third combined user characteristic and the second noise characteristic to obtain a second combined user characteristic of the target user identifier, and sends the second combined user characteristic to the second equipment.
1217. And the second equipment receives the second combined user characteristic sent by the first equipment, and performs combined processing on the first combined user characteristic and the second combined user characteristic to obtain a fourth combined user characteristic.
1218. And the second device acquires the user label of the target user identifier according to the fourth combined user feature.
For example, the following describes a specific flow of obtaining the user tag according to the scheme in the embodiment of fig. 12. The first device holds a fifth feature extraction model W2 and a sixth feature extraction model R1; the second device holds a first feature extraction model W1 and a third feature extraction model R2. The first device generates a second public key PK2 and a second private key SK2, and the second device generates a first public key PK1 and a first private key SK1. The first device stores second user information X2 of the target user identifier, and the second device stores first user information X1 of the target user identifier. The second device sends the second feature extraction model PK1(R2) to the first device, which stores it; the first device sends the fourth feature extraction model PK2(R1) to the second device, which stores it.

The second device obtains the first user feature W1*X1 from the first feature extraction model W1 and the first user information X1.

The first device obtains the fifth user feature PK1(R2)*X2 from the second feature extraction model PK1(R2) and the second user information X2, fuses the fifth user feature with the first noise feature to obtain the second user feature PK1(R2)*X2 - noise2, and sends the second user feature PK1(R2)*X2 - noise2 to the second device.

The second device receives the second user feature PK1(R2)*X2 - noise2 sent by the first device, decrypts it according to the first private key SK1 to obtain the decrypted user feature R2*X2 - noise2, combines the first user feature W1*X1 with the decrypted user feature R2*X2 - noise2 to obtain the fifth combined user feature W1*X1 + R2*X2 - noise2, and fuses the fifth combined user feature with the fourth noise feature noise1 to obtain the first combined user feature W1*X1 + R2*X2 - noise2 + noise1.

The second device calls the fourth feature extraction model PK2(R1) to perform feature extraction on the first user information X1 to obtain the sixth user feature PK2(R1)*X1, fuses the sixth user feature with the third noise feature to obtain the third user feature PK2(R1)*X1 - noise1, and sends the third user feature PK2(R1)*X1 - noise1 to the first device.

The first device decrypts the third user feature PK2(R1)*X1 - noise1 according to the second private key SK2 to obtain the decrypted user feature R1*X1 - noise1, calls the fifth feature extraction model W2 to perform feature extraction on the second user information X2 to obtain the fourth user feature W2*X2, combines the fourth user feature W2*X2 with the decrypted user feature R1*X1 - noise1 to obtain the third combined user feature W2*X2 + R1*X1 - noise1, fuses the third combined user feature with the second noise feature noise2 to obtain the second combined user feature W2*X2 + R1*X1 - noise1 + noise2, and sends the second combined user feature to the second device.

The second device combines the first combined user feature W1*X1 + R2*X2 - noise2 + noise1 with the second combined user feature W2*X2 + R1*X1 - noise1 + noise2 to obtain the fourth combined user feature W1*X1 + R2*X2 + W2*X2 + R1*X1, and obtains the user tag of the target user identifier according to the fourth combined user feature W1*X1 + R2*X2 + W2*X2 + R1*X1.
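The first leg of this exchange can be traced end to end for scalar features with an additively homomorphic cryptosystem. The patent does not name a concrete scheme, so the sketch below is only one possible realization, using the third-party python-paillier library (phe); the key length, the sample values, and all variable names are assumptions.

```python
from phe import paillier  # pip install phe; one possible homomorphic scheme

# Second device generates the first public/private key pair (PK1, SK1).
pk1, sk1 = paillier.generate_paillier_keypair(n_length=1024)

# Second device encrypts its third feature extraction model R2 under PK1 and
# sends the second feature extraction model PK1(R2) to the first device.
r2 = 0.7
enc_r2 = pk1.encrypt(r2)

# First device: homomorphically computes the fifth user feature PK1(R2)*X2,
# then fuses it with the first noise feature, still under encryption.
x2, noise2 = 3.0, 1.23
enc_second_feature = enc_r2 * x2 + pk1.encrypt(-noise2)  # PK1(R2*X2 - noise2)

# Second device decrypts with SK1: it learns R2*X2 - noise2, but neither the
# first device's input X2 nor the mask noise2 individually.
decrypted_feature = sk1.decrypt(enc_second_feature)
assert abs(decrypted_feature - (r2 * x2 - noise2)) < 1e-6
```

The symmetric leg, PK2(R1)*X1 masked with the third noise feature, proceeds the same way with the two devices' roles reversed.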
On the basis of the embodiment shown in fig. 12, the process of performing joint training on the first feature extraction model and the fifth feature extraction model can be obtained by combining the scheme of joint training in the embodiment of fig. 9 and the scheme of joint training in the embodiment of fig. 11.
It should be noted that, based on the above embodiment, the process of training the first feature extraction model and the fifth feature extraction model can be obtained by combining the steps 901-914 and 1101-1118 of the above embodiment.
In addition, after the second device obtains the first weight, in order to enable the first device to train the fifth feature extraction model, the method may include:
1219. and the second equipment encrypts the first weight according to the first public key to obtain a second weight, and sends the second weight to the first equipment.
1220. And the first equipment receives the second weight sent by the second equipment, and acquires a fourth adjusting parameter according to the second weight and the second sample user information.
1221. And the first equipment performs fusion processing on the fourth adjustment parameter and the fifth noise characteristic to obtain a second adjustment parameter, and sends the second adjustment parameter to the second equipment.
The fifth noise feature is randomly generated by the first device, and the fifth noise may be represented by a vector, a matrix, or other forms.
1222. And the second equipment receives the second adjustment parameter sent by the first equipment, decrypts the second adjustment parameter according to the first private key corresponding to the first public key to obtain a third adjustment parameter, and sends the third adjustment parameter to the first equipment.
The second adjustment parameter is obtained from the second weight, and since the second weight was obtained by the second device encrypting the first weight with the first public key, the second adjustment parameter is likewise an encrypted parameter. The second device therefore decrypts the second adjustment parameter according to the first private key corresponding to the first public key to obtain the decrypted third adjustment parameter, and sends the third adjustment parameter to the first device so that the first device can adjust the fifth feature extraction model according to it.
1223. And the first equipment receives the third adjustment parameter sent by the second equipment, fuses the third adjustment parameter and the sixth noise feature, and adjusts the fifth feature extraction model according to the fused adjustment parameter.
The fifth noise feature is opposite to the sixth noise feature. Because the second adjustment parameter contains the fifth noise feature, the third adjustment parameter also contains the fifth noise feature; fusing the third adjustment parameter with the sixth noise feature therefore yields a fused adjustment parameter that contains no noise feature, realizing denoising of the adjustment parameter.
Because the fourth adjustment parameter is obtained through the second weight and the second sample user information, the fourth adjustment parameter and the fifth noise feature are fused, so that the transmitted second adjustment parameter contains the noise feature, the second device cannot obtain the second sample user information according to the second adjustment parameter, and the security of the second sample user information is improved.
The following describes, with a specific example, the process of adjusting the fifth feature extraction model according to steps 1219-1223. For example, if the first weight is m and the first public key is PK1, the second weight obtained by the second device is PK1(m), and the second device sends the second weight PK1(m) to the first device.
The first device obtains the fourth adjustment parameter PK1(m)*X2 from the second weight PK1(m) and the second sample user information X2, fuses the fourth adjustment parameter PK1(m)*X2 with the fifth noise feature σ to obtain the second adjustment parameter PK1(m)*X2 + σ, and sends the second adjustment parameter to the second device.

The second device decrypts the second adjustment parameter according to the first private key SK1 to obtain the third adjustment parameter m*X2 + σ, and sends the third adjustment parameter m*X2 + σ to the first device.

The first device fuses the third adjustment parameter m*X2 + σ with the sixth noise feature -σ to obtain the fused adjustment parameter g2 = m*X2, and adjusts the fifth feature extraction model W2 according to the fused adjustment parameter g2; the adjusted fifth feature extraction model can be expressed as W2 = W2 - g2.
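As a sanity check on the noise arithmetic in this example, the following sketch replays steps 1219-1223 with plaintext numpy vectors. Encryption and decryption are elided (see the Paillier sketch above), so the masked quantity m*X2 + σ simply passes through the second device unchanged; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 0.35                    # first weight held by the second device
x2 = rng.normal(size=3)     # second sample user information (first device)
sigma = rng.normal(size=3)  # fifth noise feature (first device)

# Steps 1220-1221: first device forms the fourth adjustment parameter m*X2
# and fuses it with sigma into the second adjustment parameter (which is
# encrypted in the real protocol) before sending it to the second device.
second_adjustment = m * x2 + sigma

# Step 1222: the second device decrypts and returns the third adjustment
# parameter m*X2 + sigma; with encryption elided it is the same array.
third_adjustment = second_adjustment

# Step 1223: first device fuses with the sixth noise feature -sigma, leaving
# the denoised adjustment parameter g2 = m*X2, and updates W2 = W2 - g2.
g2 = third_adjustment - sigma
assert np.allclose(g2, m * x2)

w2 = rng.normal(size=3)  # fifth feature extraction model (illustrative)
w2 = w2 - g2             # adjusted model
```

Note that the second device only ever sees m*X2 + σ, so it cannot recover the second sample user information X2, matching the security argument above.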
Fig. 13 is a schematic structural diagram of a user feature obtaining apparatus according to an embodiment of the present application, and as shown in fig. 13, the apparatus includes:
the feature extraction module 1301 is configured to invoke a first feature extraction model, perform feature extraction on first user information of the stored target user identifier, and obtain a first user feature;
a feature receiving module 1302, configured to receive a second user feature sent by the first device, where the second user feature is obtained by calling, by the first device, a second feature extraction model and processing second user information of the stored target user identifier, where the first feature extraction model and the second feature extraction model are different models used for extracting user features;
and the combined feature obtaining module 1303 is configured to obtain the first combined user feature of the target user identifier according to the first user feature and the second user feature.
In one possible implementation manner, as shown in fig. 14, the second feature extraction model is a model that is encrypted according to the first public key, and the combined feature obtaining module 1303 includes:
a decryption processing unit 1331, configured to perform decryption processing on the second user characteristic according to the first private key corresponding to the first public key, to obtain a decrypted user characteristic;
a first combination processing unit 1332, configured to perform combination processing on the first user characteristic and the decrypted user characteristic to obtain a first combination user characteristic.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
the encryption processing module 1304 is configured to encrypt the third feature extraction model according to the first public key to obtain a second feature extraction model;
a model sending module 1305, configured to send the second feature extraction model to the first device.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
the information processing module 1306 is configured to invoke a fourth feature extraction model, and process the first user information to obtain a third user feature;
a feature sending module 1307, configured to send the third user feature to the first device, where the first device is configured to obtain a second combined user feature according to a fourth user feature and the third user feature, and the fourth user feature is obtained by the first device invoking a fifth feature extraction model to perform feature extraction on the second user information;
a feature receiving module 1302, configured to receive the second combined user feature sent by the first device.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
a model receiving module 1308, configured to receive the fourth feature extraction model sent by the first device.
In another possible implementation manner, the first device is configured to encrypt the sixth feature extraction model according to the second public key to obtain a fourth feature extraction model;
the first device is used for decrypting the third user characteristic according to a second private key corresponding to the second public key to obtain the decrypted user characteristic; and combining the fourth user characteristic and the decrypted user characteristic to obtain a second combined user characteristic.
In another possible implementation manner, the second user characteristic is obtained by fusing a fifth user characteristic and the first noise characteristic by the first device, and the fifth user characteristic is obtained by calling the second characteristic extraction model by the first device and performing characteristic extraction on the second user information;
the second combined user characteristic is obtained by fusing a third combined user characteristic and a second noise characteristic by the first equipment, the third combined user characteristic is obtained by combining a fourth user characteristic and a third user characteristic by the first equipment, and the first noise characteristic is opposite to the second noise characteristic;
the device still includes:
a combination processing module 1309, configured to perform combination processing on the first combination user characteristic and the second combination user characteristic to obtain a fourth combination user characteristic.
In another possible implementation, as shown in fig. 14, the information processing module 1306 includes:
the feature extraction unit 1361 is configured to invoke the fourth feature extraction model, perform feature extraction on the first user information, and obtain a sixth user feature;
a first fusion processing unit 1362, configured to perform fusion processing on the sixth user characteristic and the third noise characteristic to obtain a third user characteristic;
the combined feature obtaining module 1303 includes:
a second combination processing unit 1333, configured to perform combination processing on the first user characteristic and the second user characteristic to obtain a fifth combination user characteristic;
a second fusion processing unit 1334, configured to perform fusion processing on the fifth combination user characteristic and the fourth noise characteristic to obtain the first combination user characteristic, where the third noise characteristic is opposite to the fourth noise characteristic.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
a sample obtaining module 1310 for obtaining first sample user information;
the feature extraction module 1301 is further configured to invoke a first feature extraction model, perform feature extraction on the first sample user information, and obtain a first sample user feature;
the feature receiving module 1302 is further configured to receive a second sample user feature, where the second sample user feature is obtained by calling, by the first device, a second feature extraction model and processing second sample user information, where the first sample user information and the second sample user information belong to the same sample user identifier;
the combined feature obtaining module 1303 is further configured to obtain a first sample combined user feature according to the first sample user feature and the second sample user feature;
the model training module 1311 is configured to train the first feature extraction model according to the first sample combination user feature and the first sample user information.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
the information processing module 1306 is further configured to invoke a fourth feature extraction model, and process the first sample user information to obtain a third sample user feature;
the feature sending module 1307 is further configured to send the third sample user feature to the first device, where the first device is configured to obtain a second sample combined user feature according to a fourth sample user feature and the third sample user feature, and the fourth sample user feature is obtained by the first device invoking a fifth feature extraction model to perform feature extraction on second sample user information;
the feature receiving module 1302 is further configured to receive a second sample combination user feature sent by the first device;
model training module 1311, comprising:
a model training unit 13111, configured to train the first feature extraction model according to the first sample combination user feature, the second sample combination user feature, and the first sample user information.
In another possible implementation manner, the model training unit 13111 is configured to obtain a predicted user label of the sample user identifier according to the first sample combination user feature and the second sample combination user feature; determining the difference between the predicted user label and the sample user label corresponding to the first sample user information as a first weight; acquiring a first adjustment parameter of the first feature extraction model according to the first weight and the first sample user information; and adjusting the first feature extraction model according to the first adjustment parameter.
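Read literally, the training unit's steps amount to a single gradient-style update on the second device. Below is a minimal sketch under assumed details that the description leaves open: a sigmoid link for the predicted user label, a plain vector as the first feature extraction model, and a learning rate the patent does not specify; all names are illustrative.

```python
import numpy as np

def adjust_first_model(w1, x1, combined_feature, sample_label, lr=0.1):
    """One hypothetical update of the first feature extraction model W1:
    predict a label from the sample combined user features, take the
    difference from the sample user label as the first weight m, and form
    the first adjustment parameter from m and the first sample user
    information X1."""
    predicted_label = 1.0 / (1.0 + np.exp(-np.sum(combined_feature)))  # assumed link
    m = predicted_label - sample_label  # first weight
    g1 = m * x1                         # first adjustment parameter
    return w1 - lr * g1                 # adjusted first feature extraction model

# Illustrative call with random stand-ins for the model, input, and features.
rng = np.random.default_rng(2)
w1_new = adjust_first_model(rng.normal(size=3), rng.normal(size=3),
                            rng.normal(size=6), sample_label=1.0)
```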
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
a weight encryption module 1312, configured to encrypt the first weight according to the first public key to obtain a second weight;
a weight sending module 1313, configured to send the second weight to the first device, where the first device is configured to obtain a second adjustment parameter according to the second weight and the second sample user information;
a parameter receiving module 1314, configured to receive a second adjustment parameter sent by the first device;
a parameter sending module 1315, configured to decrypt the second adjustment parameter according to the first private key corresponding to the first public key, to obtain a third adjustment parameter;
and the model adjusting module 1316, configured to send the third adjusting parameter to the first device, where the first device is configured to adjust the fifth feature extraction model according to the third adjusting parameter.
In another possible implementation manner, the first device is configured to obtain a fourth adjustment parameter according to the second weight and the second sample user information, and perform fusion processing on the fourth adjustment parameter and the fifth noise characteristic to obtain a second adjustment parameter;
the first device is used for carrying out fusion processing on the third adjustment parameter and the sixth noise feature, and adjusting the fifth feature extraction model according to the fused adjustment parameter, wherein the sixth noise feature is opposite to the fifth noise feature.
In another possible implementation manner, as shown in fig. 14, the apparatus further includes:
a loss value obtaining module 1317, configured to obtain a loss value of the first feature extraction model according to the predicted user tag and the sample user tag;
the model training unit 13111 is further configured to stop training the first feature extraction model in response to the loss value not being greater than the preset threshold.
In another possible implementation, as shown in fig. 14, the apparatus further includes:
a notification sending module 1318, configured to send a training stopping notification to the first device in response to the loss value being not greater than the preset threshold, where the first device is configured to stop training the fifth feature extraction model according to the training stopping notification.
Fig. 15 shows a schematic structural diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 is configured to perform the steps performed by the terminal in the user characteristic obtaining method.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1502 is used to store at least one instruction to be executed by processor 1501 to implement the user feature acquisition methods provided by the method embodiments of this application.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1504, a display 1505, a camera assembly 1506, an audio circuit 1507, a positioning assembly 1508, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, provided on the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal 1500, and the rear camera is disposed on the rear surface of the terminal 1500. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate the current geographic position of the terminal 1500 to implement navigation or LBS (Location Based Service). The positioning component 1508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1509 is used to power the various components in terminal 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of an application or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a tilt operation of the user), image stabilization at the time of photographing, application control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side frame of terminal 1500 and/or underneath display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the display screen 1505, the processor 1501 controls the operability control on the UI interface in accordance with the pressure operation of the user on the display screen 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to collect a user's fingerprint, and the processor 1501 identifies the user based on the fingerprint collected by the fingerprint sensor 1514, or the fingerprint sensor 1514 itself identifies the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of display screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the display screen 1505 is adjusted down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the display 1505 to switch from the bright screen state to the dark screen state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually becomes larger, the processor 1501 controls the display 1505 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1600 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1601 and one or more memories 1602, where the memory 1602 stores at least one instruction that is loaded and executed by the processor 1601 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described here.
The server 1600 may be used to perform the user feature acquisition method described above.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor, so as to implement the user feature obtaining method in the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is loaded and executed by a processor, so as to implement the user feature obtaining method in the foregoing embodiment.
Embodiments of the present application also provide a computer program product including computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the embodiments described above.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and should not be construed as limiting the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A user feature acquisition method is characterized by comprising the following steps:
calling a first feature extraction model, and performing feature extraction on first user information of the stored target user identification to obtain first user features;
receiving a second user characteristic sent by first equipment, wherein the second user characteristic is obtained by calling a second characteristic extraction model by the first equipment and processing second user information of the stored target user identifier, and the first characteristic extraction model and the second characteristic extraction model are different models for extracting the user characteristic;
and acquiring a first combined user characteristic of the target user identifier according to the first user characteristic and the second user characteristic.
2. The method according to claim 1, wherein the second feature extraction model is a model obtained by performing encryption processing according to a first public key, and the obtaining the first combined user feature of the target user identifier according to the first user feature and the second user feature comprises:
according to a first private key corresponding to the first public key, carrying out decryption processing on the second user characteristic to obtain a decrypted user characteristic;
and combining the first user characteristic and the decrypted user characteristic to obtain the first combined user characteristic.
3. The method of claim 2, wherein prior to receiving the second user profile transmitted by the first device, the method further comprises:
encrypting a third feature extraction model according to the first public key to obtain a second feature extraction model;
sending the second feature extraction model to the first device.
4. The method of claim 1, further comprising:
calling a fourth feature extraction model, and processing the first user information to obtain a third user feature;
sending the third user characteristic to the first equipment, wherein the first equipment is used for acquiring a second combined user characteristic according to a fourth user characteristic and the third user characteristic, and the fourth user characteristic is obtained by the first equipment calling a fifth characteristic extraction model to perform characteristic extraction on the second user information;
receiving the second combined user characteristic transmitted by the first device.
5. The method according to claim 4, wherein before invoking the fourth feature extraction model to process the first user information to obtain the third user feature, the method further comprises:
receiving the fourth feature extraction model sent by the first device.
6. The method according to claim 5, wherein the first device is configured to encrypt a sixth feature extraction model according to a second public key to obtain the fourth feature extraction model; and
the first device is configured to decrypt the third user feature according to a second private key corresponding to the second public key to obtain a decrypted user feature, and to combine the fourth user feature and the decrypted user feature to obtain the second combined user feature.
7. The method according to claim 4, wherein the second user feature is obtained by the first device by fusing a fifth user feature and a first noise feature, the fifth user feature being obtained by the first device invoking the second feature extraction model to perform feature extraction on the second user information;
the second combined user feature is obtained by the first device by fusing a third combined user feature and a second noise feature, the third combined user feature being obtained by the first device by combining the fourth user feature and the third user feature, and the first noise feature being the opposite of the second noise feature; and
after receiving the second combined user feature sent by the first device, the method further comprises:
combining the first combined user feature and the second combined user feature to obtain a fourth combined user feature.
8. The method according to claim 4, wherein invoking the fourth feature extraction model to process the first user information to obtain the third user feature comprises:
invoking the fourth feature extraction model to perform feature extraction on the first user information, to obtain a sixth user feature; and
fusing the sixth user feature and a third noise feature to obtain the third user feature;
and wherein obtaining the first combined user feature of the target user identifier according to the first user feature and the second user feature comprises:
combining the first user feature and the second user feature to obtain a fifth combined user feature; and
fusing the fifth combined user feature and a fourth noise feature to obtain the first combined user feature, the third noise feature being the opposite of the fourth noise feature.
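Claims 7 and 8 do not fix the fusion and combination operations. The sketch below assumes both are element-wise addition, under which the "opposite" noise features cancel exactly in the fourth combined user feature; all names and the linear stand-in models are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
# Stand-in linear extractors (any models with d-dimensional output would do).
W1, W4 = rng.normal(size=(d, 16)), rng.normal(size=(d, 16))  # second device's models
W2, W5 = rng.normal(size=(d, 12)), rng.normal(size=(d, 12))  # first device's models
x1, x2 = rng.normal(size=16), rng.normal(size=12)            # first / second user information

r1 = rng.normal(size=d)  # first noise feature; the second noise feature is -r1 (claim 7)
r3 = rng.normal(size=d)  # third noise feature; the fourth noise feature is -r3 (claim 8)

# Second device
first_feat = W1 @ x1
third_feat = W4 @ x1 + r3                 # sixth user feature fused with the third noise

# First device
second_feat = W2 @ x2 + r1                # fifth user feature fused with the first noise
third_combined = (W5 @ x2) + third_feat   # fourth user feature combined with the third
second_combined = third_combined + (-r1)  # fused with the second noise feature

# Second device
fifth_combined = first_feat + second_feat
first_combined = fifth_combined + (-r3)   # fused with the fourth noise feature
fourth_combined = first_combined + second_combined

# All masks cancel: the result equals the unmasked sum of the four features,
# yet each masked message in transit reveals neither party's raw feature.
clean = W1 @ x1 + W2 @ x2 + W4 @ x1 + W5 @ x2
print(np.allclose(fourth_combined, clean))  # True
```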
9. The method according to claim 1, further comprising:
obtaining first sample user information;
invoking the first feature extraction model to perform feature extraction on the first sample user information, to obtain a first sample user feature;
receiving a second sample user feature, the second sample user feature being obtained by the first device invoking the second feature extraction model to process second sample user information, the first sample user information and the second sample user information belonging to a same sample user identifier;
obtaining a first sample combined user feature according to the first sample user feature and the second sample user feature; and
training the first feature extraction model according to the first sample combined user feature and the first sample user information.
10. The method according to claim 9, further comprising:
invoking a fourth feature extraction model to process the first sample user information, to obtain a third sample user feature;
sending the third sample user feature to the first device, the first device being configured to obtain a second sample combined user feature according to a fourth sample user feature and the third sample user feature, the fourth sample user feature being obtained by the first device invoking a fifth feature extraction model to perform feature extraction on the second sample user information; and
receiving the second sample combined user feature sent by the first device;
wherein training the first feature extraction model according to the first sample combined user feature and the first sample user information comprises:
training the first feature extraction model according to the first sample combined user feature, the second sample combined user feature, and the first sample user information.
11. The method according to claim 10, wherein training the first feature extraction model according to the first sample combined user feature, the second sample combined user feature, and the first sample user information comprises:
obtaining a predicted user label of the sample user identifier according to the first sample combined user feature and the second sample combined user feature;
determining a difference between the predicted user label and a sample user label corresponding to the first sample user information as a first weight;
obtaining a first adjustment parameter of the first feature extraction model according to the first weight and the first sample user information; and
adjusting the first feature extraction model according to the first adjustment parameter.
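Claims 9 to 11 leave the loss and architecture open. Assuming a linear extractor, additive combination, and a logistic prediction head (all hypothetical choices), the "first weight" corresponds to the usual log-loss error term and the "first adjustment parameter" to the resulting gradient:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_in = 8, 16
W1 = rng.normal(size=(d, n_in)) * 0.1  # first feature extraction model (trainable)
u = rng.normal(size=d) * 0.1           # hypothetical prediction head over combined features

x1 = rng.normal(size=n_in)             # first sample user information
remote = rng.normal(size=d)            # second sample combined user feature (from first device)
y = 1.0                                # sample user label

lr = 0.1
for _ in range(100):
    local = W1 @ x1                          # first sample user feature
    combined = local + remote                # first sample combined user feature (additive combine assumed)
    p = 1.0 / (1.0 + np.exp(-u @ combined))  # predicted user label
    weight = p - y                           # "first weight": prediction/label difference (claim 11)
    # "First adjustment parameter" from the first weight and the sample info:
    # for log loss the chain rule gives dLoss/dW1 = weight * outer(u, x1).
    grad_W1 = weight * np.outer(u, x1)
    W1 -= lr * grad_W1                       # adjust the first feature extraction model
print(round(float(p), 3))  # approaches 1.0 as training converges
```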
12. A user feature acquisition method, characterized by comprising:
invoking, by a second device, a first feature extraction model to perform feature extraction on stored first user information of a target user identifier, to obtain a first user feature;
invoking, by a first device, a second feature extraction model to process stored second user information of the target user identifier to obtain a second user feature, and sending the second user feature to the second device, the first feature extraction model and the second feature extraction model being different models for extracting user features; and
receiving, by the second device, the second user feature, and obtaining a first combined user feature of the target user identifier according to the first user feature and the second user feature.
13. A user feature acquisition apparatus, characterized by comprising:
a feature extraction module, configured to invoke a first feature extraction model to perform feature extraction on stored first user information of a target user identifier, to obtain a first user feature;
a feature receiving module, configured to receive a second user feature sent by a first device, the second user feature being obtained by the first device invoking a second feature extraction model to process stored second user information of the target user identifier, the first feature extraction model and the second feature extraction model being different models for extracting user features; and
a combined feature obtaining module, configured to obtain a first combined user feature of the target user identifier according to the first user feature and the second user feature.
14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the user feature acquisition method according to any one of claims 1 to 11.
15. A computer-readable storage medium, storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the user feature acquisition method according to any one of claims 1 to 11.
CN202010530924.8A 2020-06-11 2020-06-11 User characteristic obtaining method and device, computer equipment and storage medium Pending CN111695629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010530924.8A CN111695629A (en) 2020-06-11 2020-06-11 User characteristic obtaining method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111695629A 2020-09-22

Family

ID=72480395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010530924.8A Pending CN111695629A (en) 2020-06-11 2020-06-11 User characteristic obtaining method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111695629A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3975089A1 (en) * 2020-09-25 2022-03-30 Beijing Baidu Netcom Science And Technology Co. Ltd. Multi-model training method and device based on feature extraction, an electronic device, and a medium
CN112217706A (en) * 2020-12-02 2021-01-12 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
WO2022116725A1 (en) * 2020-12-02 2022-06-09 腾讯科技(深圳)有限公司 Data processing method, apparatus, device, and storage medium

Similar Documents

Publication Publication Date Title
CN112257876B (en) Federal learning method, apparatus, computer device and medium
CN110209952B (en) Information recommendation method, device, equipment and storage medium
CN111652678A (en) Article information display method, device, terminal, server and readable storage medium
CN109284445B (en) Network resource recommendation method and device, server and storage medium
CN111080443B (en) Block chain-based service processing method, device, equipment and storage medium
CN108270794B (en) Content distribution method, device and readable medium
CN111541907A (en) Article display method, apparatus, device and storage medium
CN111159153B (en) Service data verification method, device, computer equipment and storage medium
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN112581358B (en) Training method of image processing model, image processing method and device
CN111275122A (en) Label labeling method, device, equipment and readable storage medium
CN113395542A (en) Video generation method and device based on artificial intelligence, computer equipment and medium
CN113055724B (en) Live broadcast data processing method, device, server, terminal, medium and product
CN112989767A (en) Medical term labeling method, medical term mapping device and medical term mapping equipment
CN113762971A (en) Data encryption method and device, computer equipment and storage medium
CN112288553A (en) Article recommendation method, device, terminal and storage medium
CN110365501B (en) Method and device for group joining processing based on graphic code
CN111695629A (en) User characteristic obtaining method and device, computer equipment and storage medium
CN110929159A (en) Resource delivery method, device, equipment and medium
CN113515987B (en) Palmprint recognition method, palmprint recognition device, computer equipment and storage medium
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN113763932A (en) Voice processing method and device, computer equipment and storage medium
CN114764480A (en) Group type identification method and device, computer equipment and medium
CN111652432A (en) Method and device for determining user attribute information, electronic equipment and storage medium
CN111768507A (en) Image fusion method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination