CN111736712A - Input information prediction method, system, server and electronic equipment

Input information prediction method, system, server and electronic equipment

Info

Publication number
CN111736712A
CN111736712A (application CN202010589833.1A)
Authority
CN
China
Prior art keywords
input information
private
server
target parameter
common
Prior art date
Legal status
Granted
Application number
CN202010589833.1A
Other languages
Chinese (zh)
Other versions
CN111736712B (en)
Inventor
马平烁
董大祥
敬清贺
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010589833.1A
Publication of CN111736712A
Application granted
Publication of CN111736712B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G06F3/0237 Character input methods using prediction or retrieval techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the application disclose a method, a system, a server, a terminal device, an electronic device and a storage medium for predicting input information, and relate to the technical fields of deep learning and cloud computing. The specific implementation scheme is as follows: initial input information of a terminal device is acquired, and target input information corresponding to the initial input information is generated according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model. Because the relevant parameters of the prediction model are trained by the terminal device based on the private input information, the personalization of training can be improved, so that the personalized and targeted prediction of the prediction model is improved, the technical effects of prediction accuracy and reliability are achieved, and the input experience of the user is improved.

Description

Input information prediction method, system, server and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the technical field of deep learning and cloud computing, and specifically relates to a method, a system, a server, terminal equipment, electronic equipment and a storage medium for predicting input information.
Background
As terminal devices become increasingly widespread, improving the intelligence of the input methods applied to terminal devices has become a problem to be solved urgently.
In the prior art, a server may collect input information from each terminal device, train a neural network model on the input information to generate a prediction model for predicting input information, and then predict the input information.
However, in the process of implementing the present application, the inventors found at least the following problem: the server not only trains the prediction model but also predicts the input information, which may consume a large amount of the server's computing resources.
Disclosure of Invention
Provided are an input information prediction method, system, server, terminal device, electronic device, and storage medium for reducing the consumption of computing resources.
According to a first aspect, there is provided a method for predicting input information, the method being applied to a server, and comprising:
acquiring initial input information of terminal equipment;
and generating target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model.
According to a second aspect, an embodiment of the present application provides a method for predicting input information, where the method is applied to a terminal device, and includes:
receiving an initial model sent by a server;
generating a target parameter gradient according to the initial model and the private input information of the terminal equipment;
and sending the target parameter gradient to the server.
According to a third aspect, an embodiment of the present application provides a server, including:
the first acquisition module is used for acquiring initial input information of the terminal equipment;
and the prediction module is used for generating target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model.
According to a fourth aspect, an embodiment of the present application provides a terminal device, including:
the first receiving module is used for receiving the initial model sent by the server;
the second generation module is used for generating a target parameter gradient according to the initial model and the private input information of the terminal equipment;
and the first sending module is used for sending the target parameter gradient to the server.
According to a fifth aspect, an embodiment of the present application provides a prediction system for input information, including:
the server as in any one of the above embodiments;
the terminal device as in any one of the above embodiments.
According to a sixth aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as in any one of the embodiments above.
According to a seventh aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to any one of the above embodiments.
In the embodiments of the application, the prediction model is generated by the server based on the target parameter gradient fed back by the terminal device; that is, the prediction process is performed by the server, while the training process is performed by the terminal device, specifically based on the private input information of the terminal device. On one hand, by adopting server-side prediction and terminal-side training, the large consumption of computing resources caused by the server performing both training and prediction can be reduced, so that split training and prediction are achieved, and the technical effects of flexible and sufficient resource utilization are obtained. On the other hand, by adopting server-side prediction and terminal-side training, the large occupation of the terminal device's memory space caused by the server performing training and the terminal performing prediction can be reduced, so that the memory space of the terminal device is saved, lag of the terminal device is avoided, and the technical effect of improving prediction efficiency is achieved. In addition, because the terminal device performs training based on the private input information, the personalization of training can be improved, thereby improving the personalized and targeted prediction of the prediction model, further achieving the technical effects of prediction accuracy and reliability, and improving the input experience of the user.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic diagram of an application scenario of a prediction method of input information according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for predicting input information according to an embodiment of the present disclosure;
fig. 3 is an interface schematic diagram of a terminal device according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for predicting input information according to another embodiment of the present disclosure;
FIG. 5 is an interaction diagram of an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating the generation of a gradient of a target parameter according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a method for predicting input information according to another embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a server according to one embodiment of the present application;
FIG. 9 is a schematic diagram of a server according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a terminal device according to an embodiment of the present application;
fig. 11 is a schematic diagram of a terminal device according to another embodiment of the present application;
FIG. 12 is a schematic diagram of a prediction system for predicting input information in accordance with an embodiment of the present application;
fig. 13 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the accompanying drawings, in which various details of the embodiments of the application are included to assist understanding, and which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the embodiments of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The input information prediction method of the embodiment of the application can be applied to electronic equipment which can input text and/or audio information, such as terminal equipment and the like.
The terminal device may be a mobile terminal, such as a mobile telephone (also called a "cellular" telephone) or a computer with a mobile terminal, for example a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network. The terminal device may also be a Personal Communication Service (PCS) phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a tablet computer, a wireless modem, a handheld device (handset), a laptop computer, a Machine Type Communication (MTC) terminal, or the like. The terminal device may also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, an access terminal, a user terminal, a user agent, a user device or user equipment, which is not limited herein.
When the terminal device is used for network communication, the terminal device of the embodiments of the present application may be applicable to different network systems, such as Narrowband Internet of Things (NB-IoT), Global System for Mobile Communications (GSM), Enhanced Data Rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access 2000 (CDMA2000), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Bluetooth, WiFi, and the three application scenarios of the 5G mobile communication system, namely eMBB, URLLC and mMTC.
Referring to fig. 1, fig. 1 is a schematic view illustrating an application scenario of a prediction method of input information according to an embodiment of the present application.
As shown in fig. 1, the input information prediction method according to the embodiment of the present application may be applied to a mobile phone, a smart watch, a desktop computer, and a vehicle-mounted terminal installed on a vehicle (fig. 1 shows a vehicle, but does not show a vehicle-mounted terminal).
It should be noted that fig. 1 is only an exemplary illustration of a possible application scenario of the prediction method for input information according to the embodiment of the present application, and is not to be construed as a limitation to the application scenario of the prediction method for input information according to the embodiment of the present application.
If the electronic device used for inputting text and/or audio information is a terminal device, then in the related art, a server may collect input information from each terminal device, train a neural network model on the input information, generate a prediction model for predicting input information, and predict the input information itself; alternatively, the server may send the prediction model trained in this way to each terminal device, and each terminal device may predict the input information based on the prediction model.
However, if the server performs both the training of the prediction model and the prediction of the input information, the server's computing resources may be heavily consumed; if the server trains the prediction model and the terminal device predicts the input information, problems such as long loading time on the terminal device and low prediction efficiency may arise.
After creative effort, the inventors of the present application arrived at the inventive concept of the present application: an input information prediction method that can reduce the consumption of the server's computing resources and improve the efficiency of predicting input information.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
According to an aspect of an embodiment of the present application, an embodiment of the present application provides a prediction method of input information, which may be applied to a server.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for predicting input information according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes:
s101: the server acquires initial input information of the terminal device.
The execution subject of the embodiments of the application may be an electronic device with a data processing function that can communicate with the electronic device used for inputting text and/or audio information. For example, the execution subject of the embodiments of the present application may be a server, such as a cloud server, which is not listed exhaustively here.
In order to help the reader understand the technical solution of the input information prediction method of the embodiments of the present application more clearly, the following description takes a server as the execution subject and a terminal device as the electronic device that inputs text and/or audio information.
That is, in some embodiments, a communication link may be established between the server and the terminal device over which the server may communicate with the terminal device and may receive initial input information sent by the terminal device based on the communication link.
In other embodiments, the initial input information obtained by the server may also be input via a third party, such as uploading the initial input information to the server by a relevant staff member.
It should be noted that the method for acquiring the initial input information by the server is only used for exemplary illustration, and is not to be construed as a limitation on the method for acquiring the initial input information.
Furthermore, the initial input information in the embodiments of the present application represents the input information to be predicted; the word "initial" merely distinguishes it from other input information mentioned below (such as the target input information) and is not to be understood as a limitation on the content of the input information.
The initial input information may include text information to be predicted and/or audio information to be predicted.
The initial input information is now described by way of example with reference to fig. 3.
For example, when the method of the embodiment of the present application is applied to an application scenario as shown in fig. 1, and the terminal device is specifically a mobile phone as shown in fig. 1, the initial input information may be text information as shown in fig. 3.
As shown in fig. 3, the text information may be "this weekend", which a certain user inputs on a mobile phone when chatting with another user through a chat application.
Of course, in other embodiments, the initial input information may also be audio information, for example, the audio information may be audio information "address of my home" input by a certain user on a mobile phone when the certain user chats with another user through the chat application.
S102: the server generates target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model.
Similarly, "target" and "private" in the target input information, the target parameter gradient and the private input information are not to be understood as limitations on the input information or the parameter gradient.
The target input information represents the input information generated by performing prediction on the initial input information based on the prediction model.
Continuing the example shown in fig. 3, if the initial input information received by the server is "this weekend", then "this weekend" is input to the prediction model, the target input information "overtime" can be output and sent to the mobile phone, and the mobile phone displays the target input information "overtime"; an interface schematic diagram of the mobile phone is shown in fig. 3.
For another example, if the initial input information received by the server is "the address of my home", then "the address of my home" is input to the prediction model and the target input information "XX way XX number" may be output and sent to the mobile phone, which displays it. When the user selects "XX way XX number" through the touch screen or the like, the mobile phone may convert "XX way XX number" into audio information, combine it with the initial input information "the address of my home", and send the combined audio information to the other user.
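To make the server-side flow of S101 and S102 concrete, a minimal illustrative sketch in Python follows. The function name and the toy lookup standing in for the trained neural prediction model are assumptions for illustration only, not part of the patent.

```python
# Minimal sketch of S101/S102: the server receives initial input information from a
# terminal device and returns target input information produced by the prediction model.
# "predict_fn" stands in for the pre-trained prediction model; the toy mapping below is
# only a placeholder, not the neural network described in the embodiments.

from typing import Callable

def serve_prediction(initial_input: str, predict_fn: Callable[[str], str]) -> str:
    # S101: initial input information acquired from the terminal device.
    # S102: generate the corresponding target input information with the prediction model.
    return predict_fn(initial_input)

if __name__ == "__main__":
    toy_model = {"this weekend": "overtime", "the address of my home": "XX way XX number"}
    print(serve_prediction("this weekend", lambda text: toy_model.get(text, "")))
```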
The initial model represents a neural network model used for training (and possibly also predicting) input information.
The private input information represents the personalized input information of the terminal device (in this example, the mobile phone).
A gradient represents the derivative of a loss function; that is, the target parameter gradient represents the derivative of the loss function between the training values and the true values, computed when the initial model is trained on the private input information.
Based on the above analysis, the prediction model in the embodiments of the application may be generated by the server based on the target parameter gradient fed back by the terminal device; that is, the prediction process may be performed by the server, while the training process may be performed by the terminal device, specifically based on the private input information. On one hand, by adopting server-side prediction and terminal-side training, the large consumption of computing resources caused by the server performing both training and prediction can be reduced, so that split training and prediction are achieved, and the technical effects of flexible and sufficient resource utilization are obtained. On the other hand, by adopting server-side prediction and terminal-side training, the large occupation of the terminal device's memory space caused by the server performing training and the terminal performing prediction can be reduced, so that the memory space of the terminal device is saved, lag of the terminal device is avoided, and the technical effect of improving prediction efficiency is achieved. In addition, because the terminal device performs training based on the private input information, the personalization of training can be improved, thereby improving the personalized and targeted prediction of the prediction model, further achieving the technical effects of prediction accuracy and reliability, and improving the input experience of the user.
In some embodiments, the target parameter gradient is generated by the terminal device based on the private input information, the initial model, and the acquired common input information.
Similarly, "common" in the common input information is not understood to be a limitation to the input information.
The common input information represents general input information collected by the server, or input information acquired from other terminal devices.
In the embodiments of the application, the target parameter gradient is generated from the private input information and the common input information together, which can improve the generality and richness of the target parameter gradient while still achieving personalized prediction by the prediction model, thereby improving the technical effects of diversity and flexibility of prediction.
In some embodiments, the target parameter gradient is generated by the terminal device by performing feature extraction on the common input information and the private input information respectively based on the initial model, performing common classification processing and private classification processing on the extracted features, and generating the target parameter gradient according to the output results of the classification processing.
That is, in the embodiments of the present application, features may be extracted from both the common input information and the private input information; the features of the common input information and the features of the private input information are then subjected to common classification processing and to private classification processing, and the target parameter gradient is generated based on the output result of the common classification processing and the output result of the private classification processing.
Extracting the features of the common input information means that the features of the common input information are extracted by a common feature extractor in the initial model, where the common feature extractor is generated by training on the collected common sample input information.
Extracting the features of the private input information means that the features of the private input information are extracted by a private feature extractor in the initial model, where the private feature extractor is generated by training on the collected private sample input information.
Performing common classification processing on the features of the common input information and the features of the private input information means that both kinds of features are classified by a common classifier in the initial model, where the common classifier is generated by training on the collected common sample input information.
Performing private classification processing on the features of the common input information and the features of the private input information means that both kinds of features are classified by a private classifier in the initial model, where the private classifier is generated by training on the collected private sample input information.
In the embodiments of the application, because the objects classified by the common classifier include both the common input information and the private input information, and the objects classified by the private classifier also include both the common input information and the private input information, the poor personalization caused in the related art by training only on common input information can be avoided, so that the technical effects of diversity and personalization of the target parameter gradient are achieved, and the technical effect of personalized prediction by the prediction model is further achieved.
In some embodiments, the target parameter gradient is determined from weighted average information of the output results of the common classification process and the private classification process.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for predicting input information according to another embodiment of the present application.
As shown in fig. 4, the method includes:
s201: the server obtains the sample input information sent by the terminal equipment and other terminal equipment.
Similarly, "sample" in the sample input information is not to be understood as a limitation on the content of the input information.
The sample input information represents the input information acquired by the server from each terminal device (including the terminal device used for generating the target parameter gradient as well as other terminal devices) for training the initial model.
It is worth noting that the number of terminal devices is not limited in the embodiments of the present application; for example, it may be set based on requirements, experience, experiments, and the like. Similarly, the amount of sample input information is not limited in the embodiments of the present application; for example, it may also be set based on requirements, experience, experiments, and the like.
In some embodiments, the attribute information of the terminal device is the same as that of the other terminal devices, and the attribute information includes: at least one of location information, type information, and usage user information.
The location information represents the coordinates of each terminal device in a world coordinate system; the type information represents the brand, model and the like of the terminal device; the usage user information represents, for example, the age, gender and hobbies of the user using each terminal device.
That is, a prediction model may be generated for terminal devices having the same location information, for terminal devices having the same type information, or for terminal devices having the same usage user information; a prediction model may also be generated for terminal devices having both the same location information and the same type information, and so on, which are not listed exhaustively here.
By generating prediction models for terminal devices with the same attribute information, the general applicability and flexibility of the prediction models can be improved, the utilization of prediction model resources is improved, and the technical effect of reducing the cost of training prediction models is achieved.
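As a small illustrative sketch (not part of the patent), terminal devices could be grouped by attribute information so that one prediction model is trained per group; the device records and field names below are assumptions for illustration.

```python
# Illustrative grouping of terminal devices by attribute information (location, type,
# usage-user information); one prediction model would then be trained per group.
# The device records and field names are assumptions, not defined by the patent.

from collections import defaultdict

devices = [
    {"id": "dev-1", "location": "area-A", "type": "brand-X", "user_group": "20-30"},
    {"id": "dev-2", "location": "area-A", "type": "brand-X", "user_group": "20-30"},
    {"id": "dev-3", "location": "area-B", "type": "brand-Y", "user_group": "30-40"},
]

groups = defaultdict(list)
for dev in devices:
    key = (dev["location"], dev["type"], dev["user_group"])  # same attribute information
    groups[key].append(dev["id"])

for key, members in groups.items():
    print(key, "->", members)  # each group would get its own prediction model
```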
S202: and the server generates an initial model according to the sample input information and a preset neural network model.
The neural network model in the embodiments of the present application may be a convolutional neural network model, a feed-forward (FF) neural network model, a recurrent neural network (RNN) model, a Long Short-Term Memory (LSTM) network model, or the like.
In some embodiments, S202 may include:
s2021: and the server generates initial parameter gradients corresponding to the sample input information based on the neural network model.
Similarly, "initial" in the initial parameter gradient is not to be understood as a definition of the content of the initial parameter gradient.
That is to say, in the embodiments of the present application, the sample input information of each terminal device is trained in a multi-threaded manner, and an initial parameter gradient corresponding to the sample input information of each terminal device is generated, so that the technical effects of improving resource utilization and training efficiency can be achieved.
S2022: the server iteratively updates the neural network model according to each initial parameter gradient to generate the initial model.
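A minimal PyTorch sketch of S2021 and S2022 follows, assuming a toy next-token model, one batch of sample input information per terminal device, and equal averaging weights; none of these details are specified by the patent.

```python
# Sketch of S2021/S2022: compute an initial parameter gradient per terminal device's
# sample input information (one thread per device in the embodiment), average the
# gradients, and iteratively update the neural network model to obtain the initial model.
# The toy model, data shapes, learning rate and iteration count are illustrative assumptions.

import torch
import torch.nn as nn

vocab_size, embed_dim, context = 100, 16, 4
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Flatten(),
    nn.Linear(context * embed_dim, vocab_size),
)
loss_fn = nn.CrossEntropyLoss()

# One (inputs, next-token targets) batch of sample input information per terminal device.
device_batches = [
    (torch.randint(0, vocab_size, (8, context)), torch.randint(0, vocab_size, (8,)))
    for _ in range(3)
]

lr = 0.1
for _ in range(5):  # update iterations until the iteration requirement is met
    per_device_grads = []
    for inputs, targets in device_batches:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        per_device_grads.append([p.grad.clone() for p in model.parameters()])

    with torch.no_grad():  # weighted average (equal weights here) and model update
        for i, p in enumerate(model.parameters()):
            p -= lr * torch.stack([g[i] for g in per_device_grads]).mean(dim=0)
```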
S203: the server acquires initial input information of the terminal device.
For the description of S203, reference may be made to S101, which is not described herein again.
S204: the server generates target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model.
For the description of S204, reference may be made to S102, which is not described herein again.
Based on the above, in the embodiments of the present application, training and application of the prediction model are completed in the manner of server-side prediction plus terminal-side training. The foregoing embodiments illustrate the principle of applying the prediction model; in order to make the reader understand the solution of the embodiments of the present application more thoroughly, the training of the prediction model is now described in more detail with reference to fig. 5. Fig. 5 is an interaction diagram of a training method of a prediction model according to an embodiment of the present application.
As shown in fig. 5, the method includes:
s1: the server collects input information of a plurality of terminal devices in a preset time period as sample input information.
Wherein the time period may be set based on demand, experience, experimentation, and the like.
In particular, to improve the prediction reliability of the prediction model generated by training, the input information of the plurality of terminal devices within a recent time period may be selected as the sample input information.
In addition, in order to improve the application universality and adaptability of the prediction model generated by training, the attribute information of the plurality of terminal devices may be the same, for example, the plurality of terminal devices are located in the same area, or the users of the plurality of terminal devices are in the same age group, or the users of the plurality of terminal devices are in the same gender, or the brands of the plurality of terminal devices are the same, or the like.
S2: and the server starts a plurality of threads, trains the sample input information of each terminal device based on the plurality of threads, and generates an initial parameter gradient corresponding to each thread.
The sample input information used for training may be at least part of the collected sample input information, that is, the sample input information used for training may be all the collected sample input information or part of the collected sample input information.
For example, a data amount threshold may be set in advance; if the data amount of the sample input information of a certain terminal device is smaller than the data amount threshold, all of the sample input information of that terminal device is discarded.
Wherein the data volume threshold may be set based on experience, demand, and experimentation, among other things.
By screening the sample input information used for training based on a data amount threshold, the general applicability of the generated prediction model can be improved and the technical effect of saving training cost is achieved.
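A brief illustrative sketch of this screening step follows; the threshold value and the data layout are assumptions, not taken from the patent.

```python
# Keep only terminal devices whose amount of sample input information reaches a preset
# data amount threshold; the threshold and record structure are illustrative assumptions.

DATA_AMOUNT_THRESHOLD = 50

def screen_sample_input(per_device_samples: dict) -> dict:
    return {device_id: samples
            for device_id, samples in per_device_samples.items()
            if len(samples) >= DATA_AMOUNT_THRESHOLD}
```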
In this step, the security of the sample input information of each terminal device can be ensured by a multi-thread training mode, that is, by training the sample input information of one terminal device by one thread.
For example, the sample input information of P terminal devices is trained by P threads, and each thread trains the sample input information of one terminal device based on the neural network model.
S3: and the server carries out weighted average processing on each initial parameter gradient.
S4: and the server updates the neural network model according to the initial parameter gradient after weighted average processing to generate an initial model.
The initial model is a model that satisfies the iteration requirement. That is, the training process may be an iterative training process: when the number of iterations reaches a preset iteration threshold, training is finished and the initial model is generated; alternatively, when a preset loss function meets the requirement, training is finished and the initial model is generated.
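The iteration requirement can be sketched as follows; the concrete threshold values are assumptions, not taken from the patent.

```python
# Training ends when the number of iterations reaches the preset iteration threshold
# or when the preset loss function meets the requirement; both values are assumed here.

MAX_ITERATIONS = 1000
LOSS_REQUIREMENT = 0.01

def training_finished(iteration: int, current_loss: float) -> bool:
    return iteration >= MAX_ITERATIONS or current_loss <= LOSS_REQUIREMENT
```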
S5: the server sends the initial model to the terminal device (which may be any of the plurality of terminal devices) and sends a training request.
Correspondingly, the terminal equipment receives the initial model and the training request sent by the server.
S6: the terminal device transmits a request for acquiring the common input information to the server.
In some embodiments, before performing S6, the terminal device may determine its data amount within a certain time period (which may be set based on requirements, experience, experiments, and the like); if the data amount is greater than or equal to a preset threshold (which may also be set based on requirements, experience, experiments, and the like), S6 is performed; if it is smaller, the terminal device may feed back to the server information indicating that it declines to participate in training.
Accordingly, the server receives a request for obtaining the common input information sent by the terminal device.
S7: the server transmits the common input information to the terminal device.
Accordingly, the terminal device receives the common input information transmitted by the server.
S8: and the terminal equipment inputs the common input information and the private input information into the initial model to generate a target parameter gradient.
This step is now described in detail with reference to fig. 6 (fig. 6 is a schematic diagram of the generation of the target parameter gradient according to the embodiment of the present application):
the initial model may include a feature extractor for extracting features, and include a common feature extractor that may extract features of common input information (hereinafter referred to as common features) and a private feature extractor that may extract features of private input information (hereinafter referred to as private features).
The initial model further comprises a classifier for classifying the features, and the initial model comprises a common classifier and a private classifier, wherein the common classifier classifies the common features and the private features to generate a common classification result, and the private classifier classifies the common features and the private features to generate a private classification result.
And the initial model carries out weighted average processing on the common classification result and the private classification result and outputs a target parameter gradient.
Similarly, "common and private" in the common feature extractor, the private feature extractor, the common classifier, and the private classifier in this example cannot be understood as a limitation on the contents of the extractors and classifiers.
S9: and the terminal equipment sends the target parameter gradient to the server.
Correspondingly, the server receives the target parameter gradient sent by the terminal equipment.
S10: and the server generates a prediction model according to the initial model and the target parameter gradient.
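A minimal sketch of S10 follows, assuming the target parameter gradient arrives as a name-to-tensor mapping that matches the initial model's parameters (as in the sketch after fig. 6) and that the server applies a single SGD-style step; the learning rate is an illustrative assumption.

```python
# Server side: update the initial model with the target parameter gradient received
# from the terminal device; the updated model is then used as the prediction model.

import torch

def apply_target_parameter_gradient(initial_model, target_parameter_gradient, lr=0.05):
    with torch.no_grad():
        for name, param in initial_model.named_parameters():
            if name in target_parameter_gradient:
                param -= lr * target_parameter_gradient[name]
    return initial_model  # prediction model used for S13
```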
So far, the training of the prediction model is finished.
As can be seen from fig. 5, in the application process, the method further includes:
s11: the terminal equipment receives initial input information input by a user.
S12: the terminal device sends initial input information to the server.
Correspondingly, the server receives the initial input information sent by the terminal equipment.
S13: and the server predicts the initial input information according to the prediction model to generate target input information.
S14: the server transmits the target input information to the terminal device.
Correspondingly, the terminal equipment receives the target input information sent by the server.
S15: and the terminal equipment displays the target input information.
According to another aspect of the embodiments of the present application, a method for predicting input information is further provided; the method is applied to a terminal device and cooperates with the method applied to the server in the above embodiments to realize the prediction of the input information.
Referring to fig. 7, fig. 7 is a flowchart illustrating a method for predicting input information according to another embodiment of the present application.
As can be seen in fig. 7, the method may include:
s301: and the terminal equipment receives the initial model sent by the server.
For conciseness, details that have already been explained in the above embodiments are not repeated here; for specific descriptions, reference may be made to the above embodiments.
S302: and the terminal equipment generates a target parameter gradient according to the initial model and the private input information of the terminal equipment.
In some embodiments, the terminal device may further receive common input information sent by the server; when the terminal device receives the initial model and the common input information sent by the server, S302 may include: the terminal device generates a target parameter gradient according to the initial model, the private input information and the common input information.
In some embodiments, the trigger condition for the terminal device to receive the common input information sent by the server may be as follows: when receiving a training request sent by the server, the terminal device determines the data amount of the private input information, and if the data amount is greater than a preset data amount threshold, the terminal device sends a request for acquiring the common input information to the server and receives the common input information sent by the server.
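An illustrative terminal-side sketch of this trigger condition follows; the threshold and the helper for requesting the common input information are hypothetical names, not an actual API.

```python
# On receiving a training request, the terminal device checks the data amount of its
# private input information and requests the common input information only if the
# amount exceeds the preset threshold; otherwise it declines to participate.

DATA_AMOUNT_THRESHOLD = 50  # assumed value

def on_training_request(private_input_records, request_common_input_fn):
    if len(private_input_records) > DATA_AMOUNT_THRESHOLD:
        return request_common_input_fn()  # request and receive the common input information
    return None  # decline to participate in training
```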
In some embodiments, the terminal device performs feature extraction on the private input information based on the initial model to generate a first feature vector, performs feature extraction on the common input information based on the initial model to generate a second feature vector, and performs classification processing on the first feature vector and the second feature vector based on the initial model to generate a target parameter gradient.
In some embodiments, classifying the first feature vector and the second feature vector based on the initial model, the generating the target parameter gradient comprising: performing common classification processing on the first feature vector and the second feature vector based on the initial model to generate common classification components; carrying out private classification processing on the first feature vector and the second feature vector based on the initial model to generate a private classification component; and generating a target parameter gradient according to the common classification component and the private classification component.
In some embodiments, generating the target parameter gradient from the common classification component and the private classification component comprises: and carrying out weighted average processing on the common classification component and the private classification component to generate a target parameter gradient.
The principle by which the terminal device generates the target parameter gradient has been explained in detail in the above embodiments and is not described here again.
S303: and the terminal equipment sends the target parameter gradient to the server.
According to another aspect of the embodiments of the present application, there is also provided a server for performing the method according to any of the embodiments described above, such as the method for predicting the input information shown in fig. 2 and 4.
Referring to fig. 8, fig. 8 is a schematic diagram of a server according to an embodiment of the present application.
As shown in fig. 8, the server includes:
a first obtaining module 11, configured to obtain initial input information of a terminal device;
a prediction module 12, configured to generate target input information corresponding to the initial input information according to a pre-trained prediction model, where the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated by the terminal device based on the private input information of the terminal device and a preset initial model.
In some embodiments, the target parameter gradient is generated by the terminal device based on the private input information, the initial model, and the obtained common input information.
In some embodiments, the target parameter gradient is generated by the terminal device by performing feature extraction on the common input information and the private input information respectively based on the initial model, performing common classification processing and private classification processing on the extracted features, and generating the target parameter gradient according to the output results of the classification processing.
In some embodiments, the target parameter gradient is determined from weighted average information of the output results of the common classification process and the private classification process.
As shown in fig. 9, in some embodiments, the server further includes:
a second obtaining module 13, configured to obtain sample input information sent by the terminal device and other terminal devices;
and a first generating module 14, configured to generate the initial model according to the sample input information and a preset neural network model.
In some embodiments, the first generating module 14 is configured to generate an initial parameter gradient corresponding to each of the sample input information based on the neural network model, and perform update iteration on the neural network model according to each of the initial parameter gradients to generate the initial model.
In some embodiments, the attribute information of the terminal device and the attribute information of the other terminal device are the same, and the attribute information includes: at least one of location information, type information, and usage user information.
According to another aspect of the embodiment of the present application, an embodiment of the present application further provides a terminal device, configured to execute the method shown in fig. 7.
Referring to fig. 10, fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present application.
As shown in fig. 10, the terminal device includes:
a first receiving module 21, configured to receive an initial model sent by a server;
a second generating module 22, configured to generate a target parameter gradient according to the initial model and the private input information of the terminal device;
a first sending module 23, configured to send the target parameter gradient to the server.
In some embodiments, the first receiving module 21 is configured to receive common input information sent by the server;
the second generating module 22 is configured to generate the target parameter gradient according to the initial model, the common input information, and the private input information.
In some embodiments, the second generating module 22 is configured to perform feature extraction on the private input information based on the initial model to generate a first feature vector, perform feature extraction on the common input information based on the initial model to generate a second feature vector, and perform classification processing on the first feature vector and the second feature vector based on the initial model to generate the target parameter gradient.
In some embodiments, the second generating module 22 is configured to perform common classification processing on the first feature vector and the second feature vector based on the initial model to generate a common classification component, perform private classification processing on the first feature vector and the second feature vector based on the initial model to generate a private classification component, and generate the target parameter gradient according to the common classification component and the private classification component.
In some embodiments, the second generating module 22 is configured to perform weighted average processing on the common classification component and the private classification component to generate the target parameter gradient.
As shown in fig. 11, in some embodiments, the terminal device further includes:
a second receiving module 24, configured to receive a training request sent by the server;
a determining module 25, configured to determine a data amount of the private input information;
a second sending module 26, configured to send, to the server, a request for obtaining the common input information if the data amount is greater than a preset data amount threshold.
According to another aspect of the embodiments of the present application, a prediction system for predicting input information is also provided.
Referring to fig. 12, fig. 12 is a schematic diagram of a prediction system for predicting input information according to an embodiment of the present disclosure.
As shown in fig. 12, the system includes: the server according to any of the above embodiments, such as the server shown in fig. 8 and 9, further includes the terminal device according to any of the above embodiments, such as the terminal device shown in fig. 10 and 11.
It should be noted that, the server in fig. 12 is exemplified by a cloud server, and the server is not understood as a limitation to the server, and similarly, the terminal device in fig. 12 is only used for exemplary illustration and is not understood as a limitation to the terminal device.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Referring to fig. 13, fig. 13 is a block diagram of an electronic device according to an embodiment of the present application.
Electronic devices are intended to represent, among other things, various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of embodiments of the present application described and/or claimed herein.
As shown in fig. 13, the electronic apparatus includes: one or more processors 101, memory 102, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 13 illustrates an example of one processor 101.
The memory 102 is a non-transitory computer readable storage medium provided by the embodiments of the present application. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the input information prediction method provided by the embodiments of the present application. The non-transitory computer-readable storage medium of the embodiments of the present application stores computer instructions for causing a computer to perform the prediction method of input information provided by the embodiments of the present application.
Memory 102, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules in embodiments of the present application. The processor 101 executes various functional applications of the server and data processing, i.e., implementing the prediction method of the input information in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 102.
The memory 102 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 102 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 102 may optionally include memory located remotely from processor 101, which may be connected to an electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, Block-chain-Based Service Networks (BSNs), mobile communication networks, and combinations thereof.
The electronic device may further include: an input device 103 and an output device 104. The processor 101, the memory 102, the input device 103, and the output device 104 may be connected by a bus or in other manners; in fig. 13, connection by a bus is taken as an example.
The input device 103 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 104 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Block-chain-Based Service Networks (BSNs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solution of the present application can be achieved, and no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (29)

1. A prediction method of input information, which is applied to a server, comprises the following steps:
acquiring initial input information of a terminal device;
and generating target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated based on the private input information of the terminal device and a pre-set initial model of the terminal device.
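For illustration only (this is not part of the claims), a minimal sketch of the server-side flow of claim 1, written in Python with an assumed toy vocabulary and an assumed GRU-based prediction model, might look as follows; the class names, the tokenisation, and the top-1 decoding are all hypothetical:

    # Hypothetical sketch of claim 1: the server receives initial input information
    # from a terminal device and generates target input information with a
    # pre-trained prediction model. Vocabulary, architecture, and decoding are assumptions.
    import torch
    import torch.nn as nn

    VOCAB = ["the", "weather", "is", "nice", "today"]        # assumed toy vocabulary
    STOI = {w: i for i, w in enumerate(VOCAB)}

    class PredictionModel(nn.Module):
        """Stand-in for the pre-trained prediction model; its weights would come
        from the gradient-based training described in the dependent claims."""
        def __init__(self, vocab_size: int, hidden: int = 32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            x = self.embed(token_ids)
            _, h = self.rnn(x)
            return self.head(h[-1])                          # logits over the next token

    def predict_target_input(model: PredictionModel, initial_input: list) -> str:
        """Server side: map the acquired initial input information to the
        predicted target input information (here, the most likely next token)."""
        ids = torch.tensor([[STOI[w] for w in initial_input]])
        with torch.no_grad():
            logits = model(ids)
        return VOCAB[int(logits.argmax(dim=-1))]

    model = PredictionModel(len(VOCAB))                      # in practice, load trained weights
    print(predict_target_input(model, ["the", "weather", "is"]))

In this reading, the private input information used for training never leaves the terminal; only the parameter gradients and the initial input to be completed are sent to the server.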
2. The method of claim 1, wherein the target parameter gradient is generated by the terminal device based on the private input information, the initial model, and the obtained common input information.
3. The method according to claim 2, wherein the target parameter gradient is generated by the terminal device by performing feature extraction on the common input information and the private input information, respectively, based on the initial model, performing common classification processing and private classification processing on each of the extracted features, and according to an output result of the classification processing.
4. The method of claim 3, wherein the target parameter gradient is determined from weighted average information of output results of the common classification process and the private classification process.
5. The method of any of claims 1 to 4, further comprising:
acquiring sample input information sent by the terminal device and another terminal device;
and generating the initial model according to the sample input information and a preset neural network model.
6. The method of claim 5, wherein the generating the initial model from the sample input information and a preset neural network model comprises:
generating initial parameter gradients corresponding to the sample input information based on the neural network model;
and updating and iterating the neural network model according to each initial parameter gradient to generate the initial model.
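As a hedged illustration of claims 5 and 6 (not part of the claims), the server-side construction of the initial model can be pictured as computing an initial parameter gradient for each batch of sample input information and update-iterating a preset neural network with it; the plain SGD step, the learning rate, and the toy data below are assumptions:

    # Hypothetical sketch of claims 5-6: build the initial model from sample input
    # information gathered from several terminal devices by generating per-batch
    # parameter gradients and iterating a preset neural network with them.
    import torch
    import torch.nn as nn

    def build_initial_model(sample_batches, model: nn.Module, lr: float = 0.1) -> nn.Module:
        loss_fn = nn.CrossEntropyLoss()
        for inputs, targets in sample_batches:        # sample input information per batch
            model.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()                           # initial parameter gradient for this batch
            with torch.no_grad():
                for p in model.parameters():          # update-iterate the preset network
                    p -= lr * p.grad
        return model

    # Toy usage; random tensors stand in for the (unspecified) sample input information.
    net = nn.Linear(8, 4)
    batches = [(torch.randn(16, 8), torch.randint(0, 4, (16,))) for _ in range(5)]
    initial_model = build_initial_model(batches, net)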
7. The method of claim 5, wherein the terminal device and the other terminal device have the same attribute information, the attribute information comprising: at least one of location information, type information, and usage user information.
8. A prediction method of input information, which is applied to a terminal device, comprises the following steps:
receiving an initial model sent by a server;
generating a target parameter gradient according to the initial model and the private input information of the terminal device;
and sending the target parameter gradient to the server.
9. The method of claim 8, further comprising:
receiving common input information sent by the server;
and generating a target parameter gradient according to the initial model and the private input information of the terminal device comprises: generating the target parameter gradient according to the initial model, the common input information, and the private input information.
10. The method of claim 9, wherein the generating the target parameter gradient from the initial model, the common input information, and the private input information comprises:
performing feature extraction on the private input information based on the initial model to generate a first feature vector;
performing feature extraction on the common input information based on the initial model to generate a second feature vector;
and classifying the first feature vector and the second feature vector based on the initial model to generate the target parameter gradient.
11. The method of claim 10, wherein the classifying the first feature vector and the second feature vector based on the initial model to generate the target parameter gradient comprises:
performing common classification processing on the first feature vector and the second feature vector based on the initial model to generate common classification components;
carrying out private classification processing on the first feature vector and the second feature vector based on the initial model to generate a private classification component;
and generating the target parameter gradient according to the common classification component and the private classification component.
12. The method of claim 11, wherein the generating the target parameter gradient from the common classification component and the private classification component comprises:
and carrying out weighted average processing on the common classification component and the private classification component to generate the target parameter gradient.
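Claims 10 to 12 can be read, purely as an illustration, as one shared feature extractor feeding two classification heads, with the target parameter gradient obtained by back-propagating a weighted average of the common and private classification components; the equal 0.5/0.5 weights, the architecture, and the toy data in the sketch below are assumptions, not taken from the patent text:

    # Hypothetical sketch of claims 10-12: extract features from the private and the
    # common input information with the initial model, apply a common and a private
    # classification head, weight-average the two components, and return the
    # resulting target parameter gradient (to be sent to the server per claim 8).
    import torch
    import torch.nn as nn

    class InitialModel(nn.Module):
        def __init__(self, dim_in: int = 8, hidden: int = 16, n_classes: int = 4):
            super().__init__()
            self.encoder = nn.Linear(dim_in, hidden)          # feature extraction
            self.common_head = nn.Linear(hidden, n_classes)   # common classification
            self.private_head = nn.Linear(hidden, n_classes)  # private classification

    def target_parameter_gradient(model, private_x, private_y, common_x, common_y,
                                  w_common: float = 0.5, w_private: float = 0.5):
        loss_fn = nn.CrossEntropyLoss()
        f_private = torch.relu(model.encoder(private_x))      # first feature vector(s)
        f_common = torch.relu(model.encoder(common_x))        # second feature vector(s)
        feats = torch.cat([f_private, f_common])
        labels = torch.cat([private_y, common_y])
        common_component = loss_fn(model.common_head(feats), labels)
        private_component = loss_fn(model.private_head(feats), labels)
        loss = w_common * common_component + w_private * private_component
        model.zero_grad()
        loss.backward()
        return {name: p.grad.clone() for name, p in model.named_parameters()}

    m = InitialModel()
    grads = target_parameter_gradient(
        m,
        torch.randn(4, 8), torch.randint(0, 4, (4,)),         # private input information (local)
        torch.randn(4, 8), torch.randint(0, 4, (4,)))         # common input information (from server)

Only the dictionary of gradients leaves the terminal in this reading; the private input information itself is never transmitted.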
13. The method of any of claims 9 to 12, further comprising:
receiving a training request sent by the server;
determining a data amount of the private input information;
and if the data volume is larger than a preset data volume threshold value, sending a request for acquiring the common input information to the server.
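The request flow of claim 13 (again, shown only as an illustration) amounts to a simple threshold check on the locally held private input information; the threshold value and the message shapes below are hypothetical:

    # Hypothetical sketch of claim 13: on receiving a training request from the
    # server, the terminal checks the data amount of its private input information
    # and requests common input information only when that amount exceeds a preset
    # threshold. The threshold, units, and message format are assumptions.
    DATA_VOLUME_THRESHOLD = 1000   # preset data amount threshold (assumed to count samples)

    def handle_training_request(private_input_records: list) -> dict:
        data_volume = len(private_input_records)
        if data_volume > DATA_VOLUME_THRESHOLD:
            return {"type": "request_common_input"}            # ask the server for common input information
        return {"type": "no_request"}                          # behaviour below the threshold is not specified in the claim

    print(handle_training_request(list(range(1500))))          # -> {'type': 'request_common_input'}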
14. A server, comprising:
the first acquisition module is used for acquiring initial input information of a terminal device;
and the prediction module is used for generating target input information corresponding to the initial input information according to a pre-trained prediction model, wherein the prediction model is generated based on a target parameter gradient, and the target parameter gradient is generated based on the private input information of the terminal device and a pre-set initial model.
15. The server according to claim 14, wherein the target parameter gradient is generated by the terminal device based on the private input information, the initial model and the obtained common input information.
16. The server according to claim 15, wherein the target parameter gradient is generated by the terminal device by performing feature extraction on the common input information and the private input information, respectively, based on the initial model, performing common classification processing and private classification processing on each of the extracted features, and according to an output result of the classification processing.
17. The server according to claim 16, wherein the target parameter gradient is determined from weighted average information of output results of the common classification process and the private classification process.
18. The server of any of claims 14 to 17, further comprising:
the second acquisition module is used for acquiring sample input information sent by the terminal device and another terminal device;
and the first generation module is used for generating the initial model according to the sample input information and a preset neural network model.
19. The server of claim 18, wherein the first generating module is configured to generate an initial parameter gradient corresponding to each of the sample input information based on the neural network model, and perform update iteration on the neural network model according to each of the initial parameter gradients to generate the initial model.
20. The server according to claim 18, wherein the terminal device and the other terminal device have the same attribute information, the attribute information including: at least one of location information, type information, and usage user information.
21. A terminal device, comprising:
the first receiving module is used for receiving the initial model sent by the server;
the second generation module is used for generating a target parameter gradient according to the initial model and the private input information of the terminal device;
and the first sending module is used for sending the target parameter gradient to the server.
22. The terminal device according to claim 21, wherein the first receiving module is configured to receive common input information sent by the server;
and the second generation module is used for generating the target parameter gradient according to the initial model, the common input information and the private input information.
23. The terminal device of claim 22, wherein the second generating module is configured to perform feature extraction on the private input information based on the initial model to generate a first feature vector, perform feature extraction on the common input information based on the initial model to generate a second feature vector, and perform classification processing on the first feature vector and the second feature vector based on the initial model to generate the target parameter gradient.
24. The terminal device according to claim 23, wherein the second generating module is configured to perform common classification processing on the first feature vector and the second feature vector based on the initial model to generate a common classification component, perform private classification processing on the first feature vector and the second feature vector based on the initial model to generate a private classification component, and generate the target parameter gradient according to the common classification component and the private classification component.
25. The terminal device of claim 24, wherein the second generating module is configured to perform weighted average processing on the common classification component and the private classification component to generate the target parameter gradient.
26. The terminal device of any of claims 22 to 25, further comprising:
the second receiving module is used for receiving the training request sent by the server;
the determining module is used for determining the data volume of the private input information;
and the second sending module is used for sending a request for acquiring the common input information to the server if the data volume is larger than a preset data volume threshold.
27. A prediction system of input information, comprising:
the server of any one of claims 14 to 20;
a terminal device according to any one of claims 21 to 26.
28. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7; or to enable the at least one processor to perform the method of any one of claims 8-13.
29. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7; or causing the computer to perform the method of any one of claims 8-13.
CN202010589833.1A 2020-06-24 2020-06-24 Input information prediction method, system, server and electronic equipment Active CN111736712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010589833.1A CN111736712B (en) 2020-06-24 2020-06-24 Input information prediction method, system, server and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010589833.1A CN111736712B (en) 2020-06-24 2020-06-24 Input information prediction method, system, server and electronic equipment

Publications (2)

Publication Number Publication Date
CN111736712A true CN111736712A (en) 2020-10-02
CN111736712B CN111736712B (en) 2023-08-18

Family

ID=72651025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010589833.1A Active CN111736712B (en) 2020-06-24 2020-06-24 Input information prediction method, system, server and electronic equipment

Country Status (1)

Country Link
CN (1) CN111736712B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011013610A1 (en) * 2009-07-31 2011-02-03 富士フイルム株式会社 Image processing device and method, data processing device and method, program, and recording medium
US20160042254A1 (en) * 2014-08-07 2016-02-11 Canon Kabushiki Kaisha Information processing apparatus, control method for same, and storage medium
CN109993194A (en) * 2018-01-02 2019-07-09 北京京东尚科信息技术有限公司 Data processing method, system, electronic equipment and computer-readable medium
CN109902446A (en) * 2019-04-09 2019-06-18 北京字节跳动网络技术有限公司 Method and apparatus for generating information prediction model
CN110377790A (en) * 2019-06-19 2019-10-25 东南大学 A kind of video automatic marking method based on multi-modal privately owned feature
CN110569505A (en) * 2019-09-04 2019-12-13 平顶山学院 text input method and device
CN111261144A (en) * 2019-12-31 2020-06-09 华为技术有限公司 Voice recognition method, device, terminal and storage medium
CN110874440A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 Information pushing method and device, model training method and device, and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUNG-YAO CHEN: "An objective tracking method based on Kalman filter", 2016 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND INTELLIGENT SYSTEMS (ARIS) *
XIA RAN: "Design of a machine learning web service engine based on Spark", Command Control & Simulation, no. 01 *
ZHANG ZHANCHENG; WANG SHITONG; ZHONG FULI: "A collaborative classification mechanism with privacy protection", Journal of Computer Research and Development, no. 06 *
ZHAO LU?; SUI BO; LUO HAIQIONG; CHEN XU; SONG XIAOXIA; HONG PING: "Application of a bidirectional LSTM neural network based on BERT features to input recommendation for Chinese electronic medical records", China Digital Medicine, no. 04 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549951A (en) * 2020-11-26 2022-05-27 未岚大陆(北京)科技有限公司 Method for obtaining training data, related device, system and storage medium
CN114549951B (en) * 2020-11-26 2024-04-23 未岚大陆(北京)科技有限公司 Method for obtaining training data, related device, system and storage medium

Also Published As

Publication number Publication date
CN111736712B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
US20130066815A1 (en) System and method for mobile context determination
CN112102448B (en) Virtual object image display method, device, electronic equipment and storage medium
CN111680517B (en) Method, apparatus, device and storage medium for training model
CN114157701B (en) Task testing method, device, equipment and storage medium
CN112784989A (en) Inference system, inference method, electronic device, and computer storage medium
CN114726906B (en) Device interaction method, device, electronic device and storage medium
CN112446574B (en) Product evaluation method, device, electronic equipment and storage medium
CN111736712B (en) Input information prediction method, system, server and electronic equipment
CN111580883B (en) Application program starting method, device, computer system and medium
CN111615171B (en) Access method and device of wireless local area network
CN110517079B (en) Data processing method and device, electronic equipment and storage medium
CN112580723A (en) Multi-model fusion method and device, electronic equipment and storage medium
CN112527527A (en) Consumption speed control method and device of message queue, electronic equipment and medium
CN110309462B (en) Data display method and system
CN111858030A (en) Job resource processing method and device, electronic equipment and readable storage medium
CN112559867B (en) Business content output method, device, equipment, storage medium and program product
CN113747423B (en) Cloud mobile phone state synchronization method, device, equipment, storage medium and program product
CN111625710B (en) Processing method and device of recommended content, electronic equipment and readable storage medium
CN111783643B (en) Face recognition method and device, electronic equipment and storage medium
CN111563202B (en) Resource data processing method, device, electronic equipment and medium
CN114461106A (en) Display method and device and electronic equipment
CN111651229A (en) Font changing method, device and equipment
CN111783872A (en) Method and device for training model, electronic equipment and computer readable storage medium
CN111177558A (en) Channel service construction method and device
CN115145730B (en) Operation monitoring method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant