CN111797302A - Model processing method and device, storage medium and electronic equipment - Google Patents
Model processing method and device, storage medium and electronic equipment
- Publication number
- CN111797302A (Application No. CN201910282133.5A)
- Authority
- CN
- China
- Prior art keywords
- model
- model parameters
- common
- parameters
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
Abstract
An embodiment of the application provides a model processing method and device, a storage medium and an electronic device. The model processing method includes the following steps: acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining model parameters of the trained prediction model; sending the model parameters to a server; receiving a common model parameter returned by the server, wherein the common model parameter is obtained by calculating the model parameters together with model parameters corresponding to other users; and obtaining a second prediction model according to the common model parameters. The prediction accuracy and generalization capability of the prediction model are remarkably improved, and at the same time the data privacy of the user is well protected.
Description
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a model processing method and apparatus, a storage medium, and an electronic device.
Background
With the development of artificial intelligence, electronic devices such as smartphones have become more and more intelligent. The electronic device can provide various intelligent functions for the user according to the collected data.
In the related art, a prediction model in the electronic device performs prediction according to the acquired information and then provides corresponding services for the user, such as recommending a corresponding application. However, the information collected by a single device is limited, so the accuracy of the algorithm learned by the prediction model in the related art is insufficient.
Disclosure of Invention
The embodiment of the application provides a model processing method and device, a storage medium and electronic equipment, which can improve the prediction accuracy of a prediction model.
The embodiment of the application provides a model processing method, which comprises the following steps:
acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining a target model parameter of the trained first prediction model;
sending the target model parameters to a server;
receiving a common model parameter returned by the server, wherein the common model parameter is obtained by calculating the target model parameter and model parameters corresponding to other users;
and obtaining a second prediction model according to the common model parameters.
The embodiment of the present application further provides a model processing method, which includes:
obtaining a plurality of groups of model parameters corresponding to a plurality of users according to a group of model parameters corresponding to the prediction model of each user;
adjusting the multiple groups of model parameters to multiple groups of standard model parameters of the same standard;
calculating the multiple groups of standard model parameters to obtain a group of common model parameters;
and sending the common model parameters to the prediction model corresponding to each user so as to take the common model parameters as second model parameters of the prediction model corresponding to each user.
An embodiment of the present application further provides a model processing apparatus, which includes:
the model parameter first acquisition module is used for acquiring first information, inputting the first information into a prediction model as a training sample and training to obtain a target model parameter of the trained first prediction model;
the first sending module is used for sending the target model parameters to a server;
the receiving module is used for receiving the common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and the model parameters corresponding to other users;
and the processing module is used for obtaining a second prediction model according to the common model parameters.
An embodiment of the present application further provides a model processing apparatus, which includes:
the second model parameter acquisition module is used for acquiring a plurality of groups of model parameters corresponding to a plurality of users according to a group of model parameters of the prediction model corresponding to each user;
the adjusting module is used for adjusting the multiple groups of model parameters into multiple groups of standard model parameters of the same standard;
the shared model parameter acquisition module is used for calculating the multiple groups of standard model parameters to obtain a group of shared model parameters;
and the second sending module is used for sending the common model parameters to the prediction model corresponding to each user, and taking the common model parameters as second model parameters of the prediction model corresponding to each user.
Embodiments of the present application also provide a storage medium, on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the steps of the above-mentioned model processing method.
An embodiment of the present application further provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the steps of the model processing method by calling the computer program stored in the memory.
According to the model processing method and device, the storage medium and the electronic device, first information is obtained and input into a prediction model as a training sample for training, and target model parameters of the trained first prediction model are obtained; the target model parameters are then sent to a server; a common model parameter returned by the server is then received, wherein the common model parameter is obtained by calculating the target model parameters together with model parameters corresponding to other users; and finally, a second prediction model is obtained according to the common model parameters. By adopting the federated learning idea, model parameters of other users can be used collaboratively without uploading user data, which helps the local terminal make better predictions, remarkably improves the prediction accuracy and generalization capability of the model, and at the same time well protects the data privacy of the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic view of an application scenario of a model processing method according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a first method for processing a model according to an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a second method for processing a model according to an embodiment of the present disclosure.
Fig. 4 is a third flowchart illustrating a model processing method according to an embodiment of the present application.
Fig. 5 is a schematic view of another application scenario of the model processing method according to the embodiment of the present application.
Fig. 6 is a schematic structural diagram of a model processing apparatus according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of a model processing apparatus according to an embodiment of the present application.
Fig. 8 is a first schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 9 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a model processing method according to an embodiment of the present application. The model processing method is applied to the electronic equipment. A panoramic perception framework is arranged in the electronic equipment. The panorama sensing architecture is an integration of hardware and software for implementing a model processing method in an electronic device.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information of the electronic equipment and/or information of the external environment. The information perception layer may comprise a plurality of sensors, for example, a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
Among others, the distance sensor may be used to detect a distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor may be used to detect light information of the environment in which the electronic device is located. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of the user. The Hall sensor is a magnetic field sensor based on the Hall effect and may be used to implement automatic control of the electronic device. The position sensor may be used to detect the geographic location where the electronic device is currently located. The gyroscope may be used to detect the angular velocity of the electronic device in various directions. The inertial sensor may be used to detect motion data of the electronic device. The attitude sensor may be used to sense attitude information of the electronic device. The barometer may be used to detect the barometric pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
Data cleaning refers to cleaning the large amount of data acquired by the information perception layer to remove invalid data and repeated data. Data integration refers to integrating multiple single-dimensional data acquired by the information perception layer into a higher or more abstract dimension, so that the data of multiple single dimensions can be processed comprehensively. Data transformation refers to performing data type conversion or format conversion on the data acquired by the information perception layer so that the transformed data meets the processing requirements. Data reduction means reducing the data volume as much as possible while preserving the original characteristics of the data.
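For illustration only (this sketch is not part of the patent, and the column names are hypothetical), the following Python snippet shows one way the cleaning, integration, transformation and reduction steps described above could be applied to sensor records:

```python
import pandas as pd

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    """Illustrative data-processing-layer steps applied to sensor records."""
    # Data cleaning: remove invalid (missing) and repeated records.
    df = raw.dropna().drop_duplicates()
    # Data integration: combine two single-dimensional readings ("light", "hour")
    # into a higher, more abstract dimension.
    df["is_bright_daytime"] = (df["light"] > 200) & df["hour"].between(7, 19)
    # Data transformation: convert types so later layers can process the data.
    df["hour"] = df["hour"].astype(int)
    # Data reduction: keep only the columns needed downstream.
    return df[["hour", "light", "is_bright_daytime"]]
```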
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features or process the extracted features by a filter method, a wrapper method, an ensemble method, or the like.
The filter method filters the extracted features to remove redundant feature data. The wrapper method is used to screen the extracted features. The ensemble method combines a plurality of feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further comprise a plurality of algorithms, each of which can be used for analyzing and processing data, and the plurality of algorithms may form an algorithm library. For example, the algorithm library may include algorithms such as a Markov algorithm, a latent Dirichlet allocation algorithm, a Bayesian classification algorithm, a support vector machine, a K-means clustering algorithm, a K-nearest neighbor algorithm, a conditional random field, a residual network, a long short-term memory network, a convolutional neural network, and a recurrent neural network.
The embodiment of the application provides a model processing method, which can be applied to electronic equipment. The electronic device may be a smartphone, a tablet, a gaming device, an Augmented Reality (AR) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, a wearable device such as a watch, glasses, a helmet, an electronic bracelet, an electronic necklace, an electronic garment, or the like.
Referring to fig. 2, fig. 2 is a first flowchart illustrating a model processing method according to an embodiment of the present disclosure. The model processing method comprises the following steps:
101, acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining a target model parameter of the trained first prediction model.
The first information may be any information related to the user. For example, it may include environment information of the user, operation information of the electronic device used by the user, and user behavior information. The environment information may include the temperature, humidity, position, brightness, and the like of the environment, and may also include body information of the user, such as blood pressure, pulse, and heart rate. Specifically, the environment information may be obtained by a sensor, for example, by at least one of a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a blood pressure sensor, a pulse sensor, a heart rate sensor, and the like. The environment information may also include current audio information acquired by a microphone, or current image information acquired by a camera module.
The operation information of the electronic device may include startup time, shutdown time, standby time, memory usage at each time point, main chip usage at each time point, current operation program information, background operation program information, operation duration of each program, download amount of each program, and the like.
The user behavior information may include action track information, browsing information, payment information, travel information, and the like of the user.
The first information may also include configuration information of the electronic device, user information stored within the electronic device, and the like. The user information comprises information of man-machine interaction such as identity information, personal hobbies, browsing records and personal collections of the user.
It should be noted that some of the first information may belong to two or all three of the categories of environment information, electronic device operation information, and user behavior information at the same time.
After the first information is obtained, the first information may be input into the prediction model as a training sample for training, so as to obtain a target model parameter of the trained first prediction model.
For example, the first information includes travel information of the user, and may specifically include a living location, a time period spent at home, a time of departure, a travel vehicle, a time period during which a vehicle is used, a working location, a staying time period, and the like. In some embodiments, user travel modes may also be defined; for example, seven travel modes may be defined: bus, subway, driving, cycling, walking, high-speed rail, and airplane. The prediction model can predict the travel mode the user is most likely to use next and set corresponding functions according to that travel mode. If the subway is the most probable next travel mode, the application content used on the subway can be pre-loaded, which may specifically include news content, video content, and a subway payment application.
The electronic device of each user can locally adopt a deep learning framework to establish a prediction model, such as a travel mode identification model.
And 102, sending the target model parameters to a server.
And after the target model parameters of the trained first prediction model are obtained, the target model parameters are sent to a server. The server can be a pre-built remote server or a cloud server. The target model parameters are uploaded to the server instead of directly uploading the first information, so that the leakage of the first information is avoided, and the privacy of the user is better protected and the related legal regulations, such as European Union data privacy protection regulations, are better complied with.
And 103, receiving the common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and the model parameters corresponding to other users.
And receiving the common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and the model parameters corresponding to other users. The server receives the model parameters corresponding to the users, and then calculates according to the model parameters of the users to obtain the common model parameters.
The server can calculate the multiple sets of standard model parameters by adopting various algorithms, for example, an averaging method, a maximum value method, or a median method. The averaging method assumes that the aligned parameter vector of each user is w_u = (w_u1, w_u2, …, w_un), where u denotes the u-th user and n denotes the number of parameters, and then takes the average value over all users in each parameter dimension. The maximum value method is similar to the averaging method except that the maximum in each parameter dimension is selected. The median method is also similar to the averaging method except that the median in each parameter dimension is chosen.
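As a minimal sketch (not part of the claimed method; the function name and sample values are assumptions), the averaging, maximum value and median methods over aligned parameter vectors could be implemented as follows:

```python
import numpy as np

def aggregate(aligned_params: list[list[float]], method: str = "mean") -> np.ndarray:
    """Combine the aligned parameter vectors w_u of all users into one
    common parameter vector, dimension by dimension."""
    w = np.asarray(aligned_params)      # shape: (num_users, n)
    if method == "mean":                # averaging method
        return w.mean(axis=0)
    if method == "max":                 # maximum value method
        return w.max(axis=0)
    if method == "median":              # median method
        return np.median(w, axis=0)
    raise ValueError(f"unknown method: {method}")

# Example: three users, four parameters each.
common = aggregate([[0.1, 0.4, 0.2, 0.9],
                    [0.3, 0.2, 0.2, 0.7],
                    [0.2, 0.3, 0.8, 0.8]], method="mean")
```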
And 104, obtaining a second prediction model according to the common model parameters.
The prediction model obtains a second prediction model according to the common model parameters returned by the server. Model parameters of other users are thus incorporated into the prediction model training process of a single user, and because the model parameters of the other users are obtained from the first information of the electronic devices of those users, the data knowledge and behavior habits of the other users are taken into account, so the recognition accuracy and generalization capability of the prediction model can be remarkably improved.
In the related art, it is difficult to protect the privacy of user data and to perform calculation, learning and training when the data differs among different electronic devices, which greatly limits the accuracy and adaptability of the prediction algorithm. This embodiment is based on the idea of federated transfer learning, can achieve stronger generalization ability and higher precision, and can protect the privacy of user data. Specifically, by adopting the federated learning idea, the first information of other users can be used collaboratively without uploading the first information of the user, which helps the local terminal make better predictions, remarkably improves the accuracy and generalization capability of the prediction model, and well protects the data privacy of the user. Through transfer learning, the model parameters of the prediction models are aligned even when the first information differs, which guarantees stronger robustness: both the case in which the first information of the users is the same and the scenario in which the first information of the users differs can be handled, greatly expanding the application range of the constructed prediction model.
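Purely as an illustration of steps 101 to 104 (the patent does not specify the model family or the transport, so a toy logistic regression and placeholder send/receive callables stand in for them), one client-side round could look like:

```python
import numpy as np

def train_local(first_info: np.ndarray, labels: np.ndarray,
                w: np.ndarray, lr: float = 0.1, epochs: int = 50) -> np.ndarray:
    """Toy logistic-regression training standing in for the prediction model."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-first_info @ w))                # predictions
        w = w - lr * first_info.T @ (p - labels) / len(labels)   # gradient step
    return w

def client_round(first_info, labels, w, send_to_server, receive_common):
    """One round: train locally (101), upload the target parameters (102),
    receive the common parameters (103), and use them for the second model (104)."""
    target_w = train_local(first_info, labels, w)   # step 101
    send_to_server(target_w)                        # step 102 (placeholder transport)
    common_w = receive_common()                     # step 103 (placeholder transport)
    return common_w                                 # step 104: parameters of the second prediction model
```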
Referring to fig. 3, fig. 3 is a second flowchart illustrating a model processing method according to an embodiment of the present disclosure. The model processing method comprises the following steps:
and 201, acquiring first information, inputting the first information into a prediction model as a training sample, and training to obtain a target model parameter of the trained first prediction model.
The first information may be any information related to the user. For example, it may include environment information of the user, operation information of the electronic device used by the user, and user behavior information. The environment information may include the temperature, humidity, position, brightness, and the like of the environment, and may also include body information of the user, such as blood pressure, pulse, and heart rate. Specifically, the environment information may be obtained by a sensor, for example, by at least one of a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a blood pressure sensor, a pulse sensor, a heart rate sensor, and the like. The environment information may also include current audio information acquired by a microphone, or current image information acquired by a camera module.
The operation information of the electronic device may include startup time, shutdown time, standby time, memory usage at each time point, main chip usage at each time point, current operation program information, background operation program information, operation duration of each program, download amount of each program, and the like.
The user behavior information may include action track information, browsing information, payment information, travel information, and the like of the user.
The first information may also include configuration information of the electronic device, user information stored within the electronic device, and the like. The user information comprises information of man-machine interaction such as identity information, personal hobbies, browsing records and personal collections of the user.
It should be noted that some of the first information may belong to two or all three of the categories of environment information, electronic device operation information, and user behavior information at the same time.
After the first information is obtained, the first information may be input into the prediction model as a training sample for training, so as to obtain a target model parameter of the trained first prediction model.
For example, the first information includes travel information of the user, and may specifically include a living location, a time period spent at home, a time of departure, a travel vehicle, a time period during which a vehicle is used, a working location, a staying time period, and the like. In some embodiments, user travel modes may also be defined; for example, seven travel modes may be defined: bus, subway, driving, cycling, walking, high-speed rail, and airplane. The prediction model can predict the travel mode the user is most likely to use next and set corresponding functions according to that travel mode. If the subway is the most probable next travel mode, the application content used on the subway can be pre-loaded, which may specifically include news content, video content, and a subway payment application.
The electronic device of each user can locally adopt a deep learning framework to establish a prediction model, such as a travel mode identification model.
202, sending the target model parameters to a server.
And after the target model parameters of the trained first prediction model are obtained, the target model parameters are sent to a server. The server can be a pre-built remote server or a cloud server. The target model parameters are uploaded to the server instead of directly uploading the first information, so that the leakage of the first information is avoided, and the privacy of the user is better protected and the related legal regulations, such as European Union data privacy protection regulations, are better complied with.
And 203, receiving the common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and the model parameters corresponding to other users.
And receiving the common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and the model parameters corresponding to other users. The server receives the model parameters corresponding to the users, and then calculates according to the model parameters of the users to obtain the common model parameters.
The server can calculate the multiple sets of standard model parameters by adopting various algorithms, for example, an averaging method, a maximum value method, or a median method. The averaging method assumes that the aligned parameter vector of each user is w_u = (w_u1, w_u2, …, w_un), where u denotes the u-th user and n denotes the number of parameters, and then takes the average value over all users in each parameter dimension. The maximum value method is similar to the averaging method except that the maximum in each parameter dimension is selected. The median method is also similar to the averaging method except that the median in each parameter dimension is chosen.
And 204, obtaining the matching degree of the target model parameters and the common model parameters.
The matching degree between the target model parameters and the common model parameters is calculated.
And 205, when the matching degree is smaller than the preset matching degree, retraining the prediction model by using the common model parameters and the first information to obtain a second prediction model.
When the matching degree is smaller than the preset matching degree, the difference between the target model parameters obtained by training the prediction model and the common model parameters returned by the server is large, and directly using the common model parameters as the second model parameters of the prediction model may introduce a large error. Therefore, the prediction model is retrained; for example, the prediction model using the common model parameters is retrained with the first information, or the common model parameters and the target model parameters are combined, and an intermediate value between the common model parameters and the target model parameters is used as the model parameters of the prediction model.
And 206, when the matching degree is greater than the preset matching degree, obtaining a second prediction model according to the common model parameters.
When the matching degree is greater than the preset matching degree, the second prediction model is obtained directly according to the common model parameters returned by the server. Model parameters of other users are thus incorporated into the prediction model training process of a single user, and because the model parameters of the other users are obtained from the first information of the electronic devices of those users, the data knowledge and behavior habits of the other users are taken into account, so the recognition accuracy and generalization capability of the prediction model can be remarkably improved.
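The patent leaves the matching-degree metric unspecified; as one hedged reading, cosine similarity with an assumed threshold could drive the decision in steps 204 to 206:

```python
import numpy as np

def matching_degree(target_w: np.ndarray, common_w: np.ndarray) -> float:
    """One possible matching degree: cosine similarity of the two parameter vectors."""
    return float(target_w @ common_w /
                 (np.linalg.norm(target_w) * np.linalg.norm(common_w)))

def second_model_params(target_w: np.ndarray, common_w: np.ndarray,
                        threshold: float = 0.8) -> np.ndarray:
    """Adopt the common parameters when they match the local ones well enough
    (step 206); otherwise blend the two as one retraining alternative (step 205)."""
    if matching_degree(target_w, common_w) >= threshold:
        return common_w
    return 0.5 * (target_w + common_w)   # intermediate value between the two
```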
In some embodiments, deriving the second predictive model from the common model parameters comprises:
the prediction model using the common model parameters will be used as the second prediction model.
That is, the common model parameters directly replace the target model parameters, and the prediction model using the common model parameters serves as the second prediction model.
In some embodiments, deriving the second predictive model from the common model parameters comprises:
adjusting the common model parameters according to the target model parameters to obtain second model parameters;
the prediction model using the second model parameters will be the second prediction model.
Directly using the common model parameters as the model parameters of the prediction model may introduce deviation. In this case, the common model parameters can be adjusted according to the target model parameters obtained by the previous training of the prediction model, so that the second model parameters are obtained.
In some embodiments, deriving the second predictive model from the common model parameters comprises:
training according to the first information by using a prediction model of the common model parameter to adjust the common model parameter to obtain a second model parameter;
the prediction model using the second model parameters will be the second prediction model.
Directly using the common model parameters as the model parameters of the prediction model may introduce deviation. In this case, the prediction model using the common model parameters can be retrained according to the first information so as to adjust the common model parameters and obtain the second model parameters, where the retraining can be carried out with a smaller sample set or with all samples. Finally, the prediction model using the second model parameters is used as the second prediction model.
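A minimal sketch of this retraining option, assuming the same toy logistic-regression model as in the earlier sketch and a hypothetical sample_fraction parameter for the smaller sample set:

```python
import numpy as np

def fine_tune(common_w: np.ndarray, first_info: np.ndarray, labels: np.ndarray,
              sample_fraction: float = 0.2, lr: float = 0.05, epochs: int = 20) -> np.ndarray:
    """Start from the common model parameters and retrain on a (smaller)
    local sample to obtain the second model parameters."""
    n = max(1, int(len(first_info) * sample_fraction))
    idx = np.random.choice(len(first_info), size=n, replace=False)
    x, y, w = first_info[idx], labels[idx], common_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w = w - lr * x.T @ (p - y) / len(y)
    return w   # second model parameters
```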
In the related art, it is difficult to protect the privacy of user data and to perform calculation, learning and training when the data differs among different electronic devices, which greatly limits the accuracy and adaptability of the prediction algorithm. This embodiment is based on the idea of federated transfer learning, can achieve stronger generalization ability and higher precision, and can protect the privacy of user data. Specifically, by adopting the federated learning idea, the first information of other users can be used collaboratively without uploading the first information of the user, which helps the local terminal make better predictions, remarkably improves the accuracy and generalization capability of the prediction model, and well protects the data privacy of the user. Through transfer learning, the model parameters of the prediction models are aligned even when the first information differs, which guarantees stronger robustness: both the case in which the first information of the users is the same and the scenario in which the first information of the users differs can be handled, greatly expanding the application range of the constructed prediction model.
Referring to fig. 4, fig. 4 is a third schematic flow chart of a model processing method according to an embodiment of the present disclosure. The model processing method comprises the following steps:
301, obtaining multiple sets of model parameters corresponding to multiple users according to a set of model parameters corresponding to the prediction model of each user.
The prediction model corresponding to each user is trained according to the first information of the corresponding user to obtain a trained model and a group of model parameters, and multiple groups of model parameters exist corresponding to multiple users.
302, the plurality of sets of model parameters are adjusted to a plurality of sets of standard model parameters of the same standard.
After multiple sets of model parameters corresponding to multiple users are obtained, the multiple sets of model parameters are adjusted to multiple sets of standard model parameters of the same standard. Specifically, a transfer learning mode is adopted to align the model parameters uploaded by each user, so that the parameters of all users are in the same space. For example, the prediction model of each user is a travel prediction model, and each travel prediction model performs travel prediction based on the first information of the corresponding electronic device.
For example, the first information includes travel information of the user, and may specifically include a living location, a time period spent at home, a time of departure, a travel vehicle, a time period during which a vehicle is used, a working location, a staying time period, and the like. In some embodiments, user travel modes may also be defined; for example, seven travel modes may be defined: bus, subway, driving, cycling, walking, high-speed rail, and airplane. The prediction model can predict the travel mode the user is most likely to use next and set corresponding functions according to that travel mode. If the subway is the most probable next travel mode, the application content used on the subway can be pre-loaded, which may specifically include news content, video content, and a subway payment application.
In this way, the model parameters corresponding to each user are expressed in the same space according to the same standard.
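The patent only states that the uploaded parameters are aligned so that all users' parameters lie in the same space; one simplified reading (with hypothetical feature names) is to map each user's named parameters onto a shared, fixed feature order:

```python
import numpy as np

def align_to_standard(user_params: dict[str, float],
                      standard_features: list[str]) -> np.ndarray:
    """Place one user's parameters into the shared standard space;
    features the user's model does not have default to zero."""
    return np.array([user_params.get(f, 0.0) for f in standard_features])

# Example: two users whose local models used different feature subsets.
standard = ["home_loc", "leave_time", "vehicle", "work_loc"]
u1 = align_to_standard({"home_loc": 0.4, "leave_time": 0.9}, standard)
u2 = align_to_standard({"vehicle": 0.7, "work_loc": 0.1, "home_loc": 0.2}, standard)
```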
303, calculating the plurality of groups of standard model parameters to obtain a group of common model parameters.
Various algorithms may be used to calculate the multiple sets of standard model parameters, for example, an averaging method, a maximum value method, or a median method. The averaging method assumes that the aligned parameter vector of each user is w_u = (w_u1, w_u2, …, w_un), where u denotes the u-th user and n denotes the number of parameters, and then takes the average value over all users in each parameter dimension. The maximum value method is similar to the averaging method except that the maximum in each parameter dimension is selected. The median method is also similar to the averaging method except that the median in each parameter dimension is chosen.
And 304, sending the common model parameters to the prediction model corresponding to each user so as to use the common model parameters as second model parameters of the prediction model corresponding to each user.
And after the common model parameters are obtained, the common model parameters are sent to the prediction model corresponding to each user, and after the common model parameters are obtained by the prediction model, second model parameters are obtained according to the common model parameters.
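Putting steps 301 to 304 together, a hedged server-side sketch (broadcast is a placeholder for the actual return channel, and the dictionary-based alignment mirrors the assumption above) might be:

```python
import numpy as np

def server_round(all_user_params: list[dict[str, float]],
                 standard_features: list[str], broadcast) -> np.ndarray:
    """Align every user's parameters to the same standard (302), average them
    per dimension (303), and send the common parameters back (304)."""
    aligned = np.array([[p.get(f, 0.0) for f in standard_features]   # step 302
                        for p in all_user_params])
    common = aligned.mean(axis=0)                                     # step 303
    broadcast(common)                                                 # step 304
    return common
```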
In some embodiments, before the common model parameter is used as the second model parameter of the prediction model corresponding to each user, the matching degree between the model parameter and the common model parameter may be obtained.
And when the matching degree is smaller than the preset matching degree, retraining the prediction model by using the common model parameters and the first information.
And when the matching degree is greater than the preset matching degree, obtaining a second prediction model according to the common model parameters.
When the matching degree is smaller than the preset matching degree, the difference between the model parameters obtained by training the prediction model and the common model parameters returned by the server is large, and directly using the common model parameters as the second model parameters of the prediction model may reduce accuracy. In this case, the prediction model can be retrained by using the common model parameters and the first information. For example, the prediction model using the common model parameters is retrained, or the common model parameters and the model parameters are combined, and an intermediate value between the common model parameters and the model parameters is used as the model parameters of the prediction model.
When the matching degree is greater than the preset matching degree, the second prediction model is obtained according to the common model parameters returned by the server. Model parameters of other users are thus incorporated into the prediction model training process of a single user, and because the model parameters of the other users are obtained from the first information of the electronic devices of those users, the data knowledge and behavior habits of the other users are taken into account, so the recognition accuracy and generalization capability of the prediction model can be remarkably improved.
In some embodiments, deriving the second predictive model from the common model parameters may include:
the prediction model using the common model parameters will be used as the second prediction model.
That is, the common model parameters directly replace the previous model parameters, and the prediction model using the common model parameters serves as the second prediction model.
In some embodiments, deriving the second predictive model from the common model parameters may include:
adjusting the common model parameters according to the model parameters to obtain second model parameters;
the prediction model using the second model parameters will be the second prediction model.
Directly using the common model parameters as the final parameters of the prediction model may introduce deviation. In this case, the common model parameters can be adjusted according to the model parameters obtained by the previous training of the prediction model, so that the final model parameters are obtained.
In some embodiments, deriving the second predictive model from the common model parameters comprises:
training according to the first information by using a prediction model of the common model parameter to adjust the common model parameter to obtain a second model parameter;
the prediction model using the second model parameters will be the second prediction model.
Directly using the common model parameters as the final parameters of the prediction model may introduce deviation. In this case, the prediction model using the common model parameters can be retrained according to the first information so as to adjust the common model parameters and obtain the second model parameters, where the retraining can be carried out with a smaller sample set or with all samples. Finally, the prediction model using the second model parameters is used as the second prediction model.
In the related art, it is difficult to protect the privacy of user data and to perform calculation, learning and training when the data differs among different electronic devices, which greatly limits the accuracy and adaptability of the prediction algorithm. This embodiment is based on the idea of federated transfer learning, can achieve stronger generalization ability and higher precision, and can protect the privacy of user data. Specifically, by adopting the federated learning idea, the first information of other users can be used collaboratively without uploading the first information of the user, which helps the local terminal make better predictions, remarkably improves the accuracy and generalization capability of the prediction model, and well protects the data privacy of the user. Through transfer learning, the model parameters of the prediction models are aligned even when the first information differs, which guarantees stronger robustness: both the case in which the first information of the users is the same and the scenario in which the first information of the users differs can be handled, greatly expanding the application range of the constructed prediction model.
Referring to fig. 5, fig. 5 is a schematic view of a scenario of a model processing method according to an embodiment of the present disclosure. The method comprises the steps of firstly obtaining first information of electronic equipment of a user, then inputting the obtained first information into a prediction model for training to obtain a trained first prediction model and target model parameters, then uploading the target model parameters to a server, and aligning the uploaded target model parameters with model parameters uploaded by other users by the server based on transfer learning, so that the problem that the model parameters are difficult to directly fuse due to inconsistent data is avoided. And then, calculating the aligned model parameters to obtain common model parameters, and then sending the common model parameters to the prediction model for learning again to obtain a second prediction model. The electronic device may perform prediction using the second prediction model, and perform function control according to the prediction result. For example, if the travel information of the user is predicted and the prediction result is that the user will go out on a high-speed rail, ticket purchasing software, map software, car renting software, taxi taking software, train number query software and the like can be recommended.
In some embodiments, the model processing method may specifically include the following. First, information of the electronic device of a user is obtained through the information perception layer (specifically including electronic device operation information, user behavior information, information obtained by various sensors, electronic device state information, electronic device display content information, electronic device upload and download information, and the like). The information of the electronic device is then processed through the data processing layer (for example, invalid data is deleted). Next, the required first information is extracted from the processed data through the feature extraction layer (for the first information, reference may be made to the description of the above embodiments). The first information is then input into the scene modeling layer, which comprises a pre-stored prediction model; the prediction model of the scene modeling layer is trained according to the first information to obtain the trained first prediction model and target model parameters. The target model parameters are then uploaded to a server through a transmission module (such as a radio frequency module). The server aligns the uploaded target model parameters with the model parameters uploaded by other users based on transfer learning, which avoids the problem that the model parameters are difficult to fuse directly due to inconsistent data. The aligned model parameters are then calculated to obtain common model parameters, which are sent back to the electronic device. After receiving the common model parameters, the electronic device inputs them into the scene modeling layer and replaces the parameters of the prediction model in the scene modeling layer with the common model parameters to obtain a second prediction model. The prediction model can directly replace the original model parameters with the common model parameters, or can perform further training and learning after replacing the original model parameters with the common model parameters to obtain the second prediction model. Finally, the intelligent service layer can make predictions using the second prediction model and perform function control according to the prediction result. For example, if the travel information of the user is predicted and the prediction result is that the user will travel by high-speed rail, ticket purchasing software, map software, car rental software, taxi-hailing software, train number query software, and the like can be recommended.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a model processing apparatus according to an embodiment of the present disclosure. The model processing apparatus 400 includes a model parameter first obtaining module 401, a first transmitting module 402, a receiving module 403, and a processing module 404.
The first model parameter obtaining module 401 is configured to obtain first information, input the first information into the prediction model as a training sample, and train the first information to obtain a target model parameter of the trained first prediction model.
A first sending module 402, configured to send the target model parameters to a server.
The receiving module 403 is configured to receive a common model parameter returned by the server, where the common model parameter is obtained by calculating a target model parameter and a model parameter corresponding to another user.
And a processing module 404, configured to obtain a second prediction model according to the common model parameter.
The model processing apparatus of the present embodiment may be provided in an electronic device used by a user.
In some embodiments, the processing module 404 is further configured to obtain a matching degree between the target model parameter and the common model parameter; when the matching degree is smaller than the preset matching degree, retraining the prediction model by using the common model parameter and the first information; and when the matching degree is greater than the preset matching degree, obtaining a second prediction model according to the common model parameters.
In some embodiments, the processing module 404 is further configured to use the prediction model using the common model parameters as the second prediction model.
In some embodiments, the processing module 404 is further configured to adjust the common model parameter according to the target model parameter to obtain a second model parameter; the prediction model using the second model parameters will be the second prediction model.
In some embodiments, the processing module 404 is further configured to train according to the first information using a prediction model of the common model parameter to adjust the common model parameter to obtain a second model parameter; the prediction model using the second model parameters will be the second prediction model.
Referring to fig. 7, fig. 7 is another schematic structural diagram of a model processing apparatus according to an embodiment of the present disclosure. The model processing apparatus 500 includes a second model parameter obtaining module 501, an adjusting module 502, a common model parameter obtaining module 503, and a second sending module 504.
A second model parameter obtaining module 501, configured to obtain multiple sets of model parameters corresponding to multiple users according to a set of model parameters of the prediction model corresponding to each user;
an adjusting module 502, configured to adjust multiple sets of model parameters to multiple sets of standard model parameters of the same standard;
a common model parameter obtaining module 503, configured to calculate multiple sets of standard model parameters to obtain a set of common model parameters;
a second sending module 504, configured to send the common model parameters to the prediction model corresponding to each user, so as to use the common model parameters as the second model parameters of the prediction model corresponding to each user.
Each module of this embodiment may be used in combination with each module of the above embodiments. For example, the second obtaining module 501 receives the model parameters sent by the first sending module 402. The receiving module 403 receives the common model parameters and the like transmitted by the second transmitting module 504.
The model processing means of the present embodiment may be provided in a server.
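For illustration, the sketch below shows one way a server could implement these modules: each user's parameter set is brought to the same standard (here, a fixed length and numeric type) and the standardized sets are averaged into the common model parameters. The padding, truncation, and simple averaging are assumptions made for this sketch; the embodiment only requires that the sets be made comparable and then combined by calculation.

```python
# Illustrative server-side aggregation; padding/truncation to a standard
# length and simple averaging are assumptions made for this sketch.
import numpy as np

def standardize(param_sets, standard_length):
    standard_sets = []
    for params in param_sets:
        p = np.asarray(params, dtype=np.float64)
        if p.size < standard_length:
            p = np.pad(p, (0, standard_length - p.size))  # pad short sets with zeros
        standard_sets.append(p[:standard_length])          # truncate long sets
    return standard_sets

def aggregate(param_sets, standard_length):
    standard_sets = standardize(param_sets, standard_length)
    return np.mean(standard_sets, axis=0)                  # the common model parameters

# Example: common = aggregate([user_a_params, user_b_params], standard_length=128)
# The server then returns `common` to the prediction model of each user.
```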
Referring to fig. 8, fig. 8 is a schematic view illustrating a first structure of an electronic device 600 according to an embodiment of the disclosure. The electronic device 600 comprises, among other things, a processor 601 and a memory 602. The processor 601 is electrically connected to the memory 602.
The processor 601 is the control center of the electronic device 600. It connects the various parts of the electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or invoking the computer program stored in the memory 602 and invoking the data stored in the memory 602, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 601 in the electronic device 600 loads instructions corresponding to one or more processes of the computer program into the memory 602, and the processor 601 runs the computer program stored in the memory 602 to implement various functions according to the following steps:
acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining a target model parameter of the trained first prediction model;
sending the target model parameters to a server;
receiving a common model parameter returned by the server, wherein the common model parameter is obtained by calculating a target model parameter and model parameters corresponding to other users;
and obtaining a second prediction model according to the common model parameters.
In some embodiments, when deriving the second predictive model from the common model parameters, processor 601 performs the steps of:
obtaining the matching degree of the target model parameters and the common model parameters;
when the matching degree is smaller than the preset matching degree, retraining the prediction model by using the common model parameter and the first information;
and when the matching degree is greater than the preset matching degree, obtaining a second prediction model according to the common model parameters.
In some embodiments, when deriving the second predictive model from the common model parameters, processor 601 performs the steps of:
taking the prediction model that uses the common model parameters as the second prediction model.
In some embodiments, when deriving the second predictive model from the common model parameters, processor 601 performs the steps of:
adjusting the common model parameters according to the target model parameters to obtain second model parameters;
taking the prediction model that uses the second model parameters as the second prediction model.
In some embodiments, when deriving the second predictive model from the common model parameters, processor 601 performs the steps of:
training the prediction model that uses the common model parameters according to the first information, so as to adjust the common model parameters and obtain second model parameters;
taking the prediction model that uses the second model parameters as the second prediction model.
In some embodiments, processor 601 performs the following steps:
obtaining a plurality of groups of model parameters corresponding to a plurality of users according to a group of model parameters corresponding to the prediction model of each user;
adjusting the multiple groups of model parameters to multiple groups of standard model parameters of the same standard;
calculating a plurality of groups of standard model parameters to obtain a group of common model parameters;
and sending the common model parameters to the prediction model corresponding to each user so as to take the common model parameters as second model parameters of the prediction model corresponding to each user.
In some embodiments, referring to fig. 9, fig. 9 is a second schematic structural diagram of the electronic device 600 according to an embodiment of the present disclosure.
The electronic device 600 further includes a display screen 603, a control circuit 604, an input unit 605, a sensor 606, and a power supply 607. The processor 601 is electrically connected to the display screen 603, the control circuit 604, the input unit 605, the sensor 606, and the power supply 607.
The display screen 603 may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, videos, and any combination thereof.
The control circuit 604 is electrically connected to the display screen 603, and is configured to control the display screen 603 to display information.
The input unit 605 may be used to receive input numbers, character information, or user characteristic information (e.g., a fingerprint), and generate a keyboard, mouse, joystick, optical, or trackball signal input related to user setting and function control. The input unit 605 may include a fingerprint recognition module.
The sensor 606 is used to collect information about the electronic device itself, the user, or the external environment. For example, the sensor 606 may include a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, a heart rate sensor, and the like.
The power supply 607 is used to supply power to the various components of the electronic device 600. In some embodiments, the power supply 607 may be logically coupled to the processor 601 through a power management system, so that functions such as charging, discharging, and power consumption are managed through the power management system.
Although not shown in fig. 9, the electronic device 600 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device, where a processor in the electronic device performs the following steps: acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining target model parameters of the trained first prediction model; sending the target model parameters to a server; receiving common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and model parameters corresponding to other users; and obtaining a second prediction model according to the common model parameters.
The embodiment of the present application further provides a storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer executes the model processing method according to any one of the above embodiments.
For example, in some embodiments, when the computer program is run on a computer, the computer performs the steps of:
acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining a target model parameter of the trained first prediction model;
sending the target model parameters to a server;
receiving a common model parameter returned by the server, wherein the common model parameter is obtained by calculating a target model parameter and model parameters corresponding to other users;
and obtaining a second prediction model according to the common model parameters.
It should be noted that all or part of the steps in the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium, which may include, but is not limited to: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The model processing method, the model processing device, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A method of model processing, comprising:
acquiring first information, inputting the first information into a prediction model as a training sample for training, and obtaining a target model parameter of the trained first prediction model;
sending the target model parameters to a server;
receiving a common model parameter returned by the server, wherein the common model parameter is obtained by calculating the target model parameter and model parameters corresponding to other users;
and obtaining a second prediction model according to the common model parameters.
2. The model processing method of claim 1, wherein deriving a second predictive model from the common model parameters comprises:
obtaining the matching degree of the target model parameters and the common model parameters;
when the matching degree is smaller than a preset matching degree, retraining the prediction model by using the common model parameter and the first information;
and when the matching degree is greater than the preset matching degree, obtaining a second prediction model according to the common model parameters.
3. The model processing method according to claim 1 or 2, wherein deriving a second predictive model from the common model parameters comprises:
taking the prediction model that uses the common model parameters as a second prediction model.
4. The model processing method of claim 1, wherein deriving a second predictive model from the common model parameters comprises:
adjusting the common model parameter according to the target model parameter to obtain a second model parameter;
taking the prediction model that uses the second model parameters as a second prediction model.
5. The model processing method of claim 1, wherein deriving a second predictive model from the common model parameters comprises:
training the prediction model that uses the common model parameters according to the first information, so as to adjust the common model parameters and obtain second model parameters;
taking the prediction model that uses the second model parameters as a second prediction model.
6. A method of model processing, comprising:
obtaining a plurality of groups of model parameters corresponding to a plurality of users according to a group of model parameters corresponding to the prediction model of each user;
adjusting the multiple groups of model parameters to multiple groups of standard model parameters of the same standard;
calculating the multiple groups of standard model parameters to obtain a group of common model parameters;
and sending the common model parameters to the prediction model corresponding to each user so as to take the common model parameters as second model parameters of the prediction model corresponding to each user.
7. A model processing apparatus, comprising:
a first model parameter obtaining module, configured to obtain first information, input the first information into a prediction model as a training sample for training, and obtain target model parameters of the trained first prediction model;
a first sending module, configured to send the target model parameters to a server;
a receiving module, configured to receive common model parameters returned by the server, wherein the common model parameters are obtained by calculating the target model parameters and model parameters corresponding to other users;
and a processing module, configured to obtain a second prediction model according to the common model parameters.
8. A model processing apparatus, comprising:
a second model parameter obtaining module, configured to obtain multiple sets of model parameters corresponding to multiple users according to a set of model parameters of the prediction model corresponding to each user;
an adjusting module, configured to adjust the multiple sets of model parameters to multiple sets of standard model parameters of the same standard;
a common model parameter obtaining module, configured to calculate the multiple sets of standard model parameters to obtain a set of common model parameters;
and a second sending module, configured to send the common model parameters to the prediction model corresponding to each user, so as to use the common model parameters as second model parameters of the prediction model corresponding to each user.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program is run on a computer, it causes the computer to execute the model processing method according to any one of claims 1 to 6.
10. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein a computer program is stored in the memory, and the processor is configured to execute the model processing method according to any one of claims 1 to 6 by calling the computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910282133.5A CN111797302A (en) | 2019-04-09 | 2019-04-09 | Model processing method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910282133.5A CN111797302A (en) | 2019-04-09 | 2019-04-09 | Model processing method and device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111797302A (en) | 2020-10-20 |
Family
ID=72805319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910282133.5A Pending CN111797302A (en) | 2019-04-09 | 2019-04-09 | Model processing method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111797302A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190034658A1 (en) * | 2017-07-28 | 2019-01-31 | Alibaba Group Holding Limited | Data security enhancement by model training |
CN109389412A (en) * | 2017-08-02 | 2019-02-26 | 阿里巴巴集团控股有限公司 | A kind of method and device of training pattern |
CN109189825A (en) * | 2018-08-10 | 2019-01-11 | 深圳前海微众银行股份有限公司 | Lateral data cutting federation learning model building method, server and medium |
CN109492420A (en) * | 2018-12-28 | 2019-03-19 | 深圳前海微众银行股份有限公司 | Model parameter training method, terminal, system and medium based on federation's study |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112232437A (en) * | 2020-11-04 | 2021-01-15 | 深圳技术大学 | Internet of things terminal data analysis method and system |
CN112613686A (en) * | 2020-12-31 | 2021-04-06 | 广州兴森快捷电路科技有限公司 | Process capability prediction method, system, electronic device and storage medium |
CN112926126A (en) * | 2021-03-31 | 2021-06-08 | 南京信息工程大学 | Federal learning method based on Markov random field |
CN112926126B (en) * | 2021-03-31 | 2023-04-25 | 南京信息工程大学 | Federal learning method based on Markov random field |
CN114491943A (en) * | 2021-12-23 | 2022-05-13 | 北京达佳互联信息技术有限公司 | Information processing method, temperature prediction model training method and device and electronic equipment |
WO2023169425A1 (en) * | 2022-03-07 | 2023-09-14 | 维沃移动通信有限公司 | Data processing method in communication network, and network-side device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||