CN115373718A - Updating method and device of online model and electronic equipment

Updating method and device of online model and electronic equipment

Info

Publication number
CN115373718A
Authority
CN
China
Prior art keywords
model
data
updated
online
test data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211317187.9A
Other languages
Chinese (zh)
Inventor
刘国明 (Liu Guoming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202211317187.9A
Publication of CN115373718A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/65 Updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a method and apparatus for updating an online model, and an electronic device, and relates to the technical field of artificial intelligence. The method includes the following steps: acquiring updated training data according to user data for the online model; training the online model according to the updated training data to obtain an updated model; and testing the prediction accuracy of the updated model according to test data in generalized scenarios, and bringing the updated model online according to the prediction accuracy. The online model is trained with the updated training data to obtain the updated model, and whether the updated model goes online is determined by the test data in generalized scenarios. The online model is therefore updated according to real user feedback data, the drop in prediction accuracy caused by a lagging online model is avoided, and the accuracy of the online model is improved.

Description

Updating method and device of online model and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to an updating method and apparatus for an online model, and an electronic device.
Background
In the related art, the common machine learning frameworks provide a relatively mature offline parallel training environment for the update iteration of machine learning models. Traditionally, a model is trained offline and then deployed in an online environment. Once the offline-trained model is deployed online, its parameters are fixed and cannot change, so the online model cannot be further improved.
Disclosure of Invention
The present disclosure provides a method and apparatus for updating an online model, and an electronic device, so as to at least solve the problem in the related art that the online model cannot be further improved. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an update method of an online model, including:
acquiring updated training data according to user data for the online model;
training the online model according to the updated training data to obtain an updated model;
and testing the prediction accuracy of the updated model according to test data in generalized scenarios, and bringing the updated model online according to the prediction accuracy.
Optionally, testing the prediction accuracy of the updated model according to the test data in generalized scenarios specifically includes:
inputting the test data in generalized scenarios into the updated model for inference to generate prediction results, wherein the test data in generalized scenarios includes basic test data and prediction test data;
and comparing the prediction results with label data corresponding to the test data in generalized scenarios to obtain the prediction accuracy, wherein the label data corresponding to the test data in generalized scenarios includes label data corresponding to the basic test data and label data corresponding to the prediction test data.
Optionally, the basic test data is preset data, the label data corresponding to the basic test data is preset label data, and the prediction test data is the updated training data in the current update period.
Optionally, the method for obtaining the label data corresponding to the prediction test data includes:
inputting the prediction test data into a pre-trained prediction model for inference, and taking the generated result as the label data corresponding to the prediction test data.
Optionally, the method for obtaining the label data corresponding to the prediction test data further includes:
inputting the prediction test data into the online model from the m-th update period before the current update period for inference, and taking the generated result as the label data corresponding to the prediction test data, wherein m is a positive integer.
Optionally, the method for obtaining the label data corresponding to the prediction test data further includes:
taking the updated training data from the n-th update period before the current update period as the label data corresponding to the prediction test data, wherein n is a positive integer.
Optionally, bringing the updated model online according to the prediction accuracy specifically includes:
bringing the updated model online in response to the prediction accuracy being greater than or equal to a preset accuracy threshold;
and retraining the online model according to the updated training data in response to the prediction accuracy being less than the accuracy threshold.
Optionally, acquiring updated training data according to user data for the online model includes:
collecting user data input to the online model, and cleaning the user data to obtain the updated training data.
Optionally, training the online model according to the updated training data to obtain an updated model includes:
sending the updated training data to a training module through a message queue, and instructing the training module to train model parameters of the online model according to the updated training data until the corresponding loss function converges;
and constructing the updated model according to the updated model parameters.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for updating an online model, including:
a data acquisition module, configured to acquire updated training data according to user data for the online model;
a model updating module, configured to train the online model according to the updated training data to obtain an updated model;
and a model testing module, configured to test the prediction accuracy of the updated model according to the test data in generalized scenarios and to bring the updated model online according to the prediction accuracy.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any of the first aspects above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method according to any one of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above first aspects.
The technical solution provided by the embodiments of the disclosure brings at least the following beneficial effects:
the online model is trained with the updated training data to obtain the updated model, and whether the updated model goes online is determined by the test data in generalized scenarios, so that the online model is updated according to real user feedback data, the drop in prediction accuracy caused by a lagging online model is avoided, and the accuracy of the online model is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flowchart illustrating a method for updating an online model, according to an example embodiment.
FIG. 2 is a flow chart illustrating a method for updating an online model in accordance with an exemplary embodiment.
FIG. 3 is a flow chart illustrating a method for updating an online model in accordance with an exemplary embodiment.
FIG. 4 is a block diagram illustrating an update system for an online model in accordance with an exemplary embodiment.
FIG. 5 is a flow chart illustrating a method for updating an online model in accordance with an exemplary embodiment.
FIG. 6 is a block diagram illustrating an apparatus for updating an online model in accordance with an exemplary embodiment.
FIG. 7 is a block diagram illustrating an apparatus in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure as recited in the claims appended hereto.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) referred to in the present disclosure is information authorized by the user or sufficiently authorized by each party.
The online model is a machine learning model, and its update iteration is generally carried out in one of the following ways:
one way is to label data through crowdsourcing service on the internet, that is, data markers on a crowdsourcing platform label data to obtain a large amount of labeled data for model training of a machine learning system. The model training is performed off-line, and the on-line model is updated regularly after the training is completed, so that the effect of the on-line model is improved. The problem of this approach is that the understanding of the labeling standard by each data marker on the crowdsourcing platform is inconsistent, and the marker is not the real feedback of the user, so the accuracy of the data is poor, and the timeliness is difficult to guarantee.
The other way is to adjust the model quickly in real time according to online feedback data, so that the model reflects online changes in time. In this way, the behavior data of users can be fed back promptly and a new model can be regenerated. However, this approach also has problems: the model trained in this way produces accurate prediction results in the current scenario, but its prediction accuracy may be poor in some generalized scenarios.
Fig. 1 is a flowchart illustrating a method for updating an online model according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps.
Step 101, obtaining updated training data according to user data for an online model;
in the embodiment of the present application, the online model is a machine learning model, including but not limited to a Logistic Regression (LR) model, a Support Vector Machine (SVM), a gradient boosting decision tree, or a deep neural network. The online model is deployed online after being trained in an offline state, can receive user data, and performs reasoning operation according to the user data to generate a prediction result.
However, the user data is continuously updated and changed, and in order to adapt the online model to the change of the user data, the online model needs to be iterated. The data sources used for iterative training are: and processing the user data input into the online model to obtain updated training data. The online model can be trained according to the updated training data, and parameters in the online model are updated.
In a possible embodiment, the user data is a commodity browsing record of the user on the e-commerce platform, the online model performs inference operation according to the user data in a time period to generate a prediction result, and the prediction result is a commodity attribute recommendation list and represents attributes of commodities which the user may browse later. And the commodity can be pushed for the user in a more targeted manner according to the prediction result so as to improve the use experience of the user.
But the commodity browsing preference of the user at different periods constantly changes, in order to enable the online model to be adjusted along with the browsing preference of the user, the online model needs to be updated regularly, a certain updating period is preset, the user data is collected in each updating period, and the user model is updated according to the user data in the updating period after the updating period is finished, so that the online model can be better adapted to the preference of the user. Optionally, the update period is one day.
Step 102, training the online model according to the updated training data to obtain an updated model;
In the embodiment of the present application, the online model is trained and adjusted according to the updated training data, and its parameters are updated iteratively. The updated model can be formed from the parameters obtained after one or more updates; the framework of the updated model is the same as that of the online model, except that the values of one or more parameters differ.
Step 103, testing the prediction accuracy of the updated model according to the test data in generalized scenarios, and bringing the updated model online according to the prediction accuracy.
In the embodiment of the present application, the model is updated according to the updated training data, so the prediction results obtained by inference on the user data corresponding to that update data are accurate. However, the online model must also consider its prediction accuracy in generalized scenarios; accuracy in general scenarios must not be sacrificed in order to adapt to the user data of a certain period. Therefore, before the updated model goes online, its prediction accuracy is tested according to the test data in generalized scenarios. If the prediction accuracy meets the standard, the updated model predicts generalized scenarios well and can be brought online. If the prediction accuracy does not meet the standard, the updated model predicts generalized scenarios poorly and cannot be brought online.
Acquiring updated training data according to user data for the online model includes:
collecting user data input to the online model, and cleaning the user data to obtain the updated training data.
In the embodiment of the present application, a large amount of user data is input into the online model, and the data needs to be cleaned in order to improve training efficiency. Data cleaning means screening out and removing repeated or redundant data, supplementing missing data, correcting or deleting erroneous data, and finally organizing the data into data that can be used for training, namely the updated training data.
For user data with missing values, the simplest and most effective method is to discard the samples that contain missing values. Alternatively, the data can be divided into several groups according to the attribute with the largest correlation coefficient with the missing attribute, the mean value of each group can be calculated separately, and the group means can be filled into the missing values.
When cleaning abnormal user data, such as data that obviously does not conform to expected patterns, a value range can be set, and user data beyond that range can be deleted.
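In one possible implementation, the cleaning steps described above may be sketched in Python with pandas as follows; the column names (category, clicks) and the value range are hypothetical and only illustrate the de-duplication, group-mean imputation, and out-of-range filtering described here, not a prescribed schema.

    import pandas as pd

    def clean_user_data(df: pd.DataFrame) -> pd.DataFrame:
        """Turn raw user data into updated training data: de-duplicate, impute, drop outliers."""
        # Screen out repeated and redundant records.
        df = df.drop_duplicates()
        # Fill missing values with the mean of the group defined by the attribute
        # most correlated with the missing attribute (assumed here to be 'category').
        df["clicks"] = df["clicks"].fillna(
            df.groupby("category")["clicks"].transform("mean"))
        # Delete user data beyond a preset value range (hypothetical bounds).
        df = df[df["clicks"].between(0, 10_000)]
        # Discard any samples that still contain missing values.
        return df.dropna()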
Fig. 2 is a flowchart illustrating a method for updating an online model according to an exemplary embodiment. As shown in Fig. 2, step 103 in Fig. 1 includes the following steps.
Step 201, inputting the test data in generalized scenarios into the updated model for inference to generate prediction results, wherein the test data in generalized scenarios includes basic test data and prediction test data.
In the embodiment of the present application, when the updated model is tested according to the test data in generalized scenarios, the test data is input into the updated model, and the updated model extracts and analyzes the features in the test data to generate prediction results.
In this embodiment, the test data in generalized scenarios is set for the commonly applicable application scenarios of the online model. Since the online model in this embodiment is a model with a prediction function, the test data in generalized scenarios can be configured for the commonly applicable application scenarios of a prediction model. For example, in a weather forecasting scenario, the test data may be the various weather data of the current day, and the corresponding label data is the weather of the next day; in a user shopping preference prediction scenario, the test data may be the data of the commodities browsed by the user on the current day, and the corresponding label data is the commodities browsed by the user on the next day.
Step 202, comparing the prediction results with the label data corresponding to the test data in generalized scenarios to obtain the prediction accuracy, wherein the label data corresponding to the test data in generalized scenarios includes the label data corresponding to the basic test data and the label data corresponding to the prediction test data.
In the embodiment of the present application, the label data corresponding to the test data in generalized scenarios is the ideal result. If a prediction result is the same as the corresponding label data, the prediction of the updated model is accurate; if it differs from the label data, the prediction is inaccurate. The number of accurate predictions is counted, and the ratio of the number of accurate predictions to the total number of predictions gives the prediction accuracy.
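In one possible implementation, this accuracy calculation may be sketched as follows; the exact-match comparison mirrors the description above, while a real test module would typically batch the inference.

    from typing import Any, Sequence

    def prediction_accuracy(predictions: Sequence[Any], labels: Sequence[Any]) -> float:
        """Ratio of predictions that exactly match the corresponding label data."""
        if len(predictions) != len(labels):
            raise ValueError("predictions and labels must have the same length")
        correct = sum(1 for pred, label in zip(predictions, labels) if pred == label)
        return correct / len(labels)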
Optionally, the basic test data is preset data, the label data corresponding to the basic test data is preset label data, and the prediction test data is the updated training data in the current update period.
In the embodiment of the present application, the test data in generalized scenarios consists of basic test data and prediction test data. The basic test data is preset test data for generalized scenarios, and its corresponding label data is also preset. The prediction test data is the updated training data in the current update period.
Optionally, the method for obtaining the label data corresponding to the prediction test data includes:
inputting the prediction test data into a pre-trained prediction model for inference, and taking the generated result as the label data corresponding to the prediction test data.
In the embodiment of the present application, since the data of the next update period cannot be obtained within the current update period to serve as the label data of the prediction test data, a prediction approach is needed to obtain that label data. In this embodiment, inference is performed on the prediction test data, that is, the updated training data of the current update period, by the pre-trained prediction model, so as to obtain the likely result of the next update period, and this result is used as the label data corresponding to the prediction test data.
In a possible embodiment, the update period is one day. Today's updated training data is used as the prediction test data and input into a pre-trained prediction model to obtain a prediction of tomorrow's data; that prediction is used as the label data, and the prediction accuracy of the updated model is tested against it.
Optionally, the method for obtaining the label data corresponding to the prediction test data further includes:
inputting the prediction test data into the online model from the m-th update period before the current update period for inference, and taking the generated result as the label data corresponding to the prediction test data, wherein m is a positive integer.
In the embodiment of the present application, the online model trained in a previous update period may be used to perform inference on the prediction test data to obtain a prediction result, and the prediction result is used as the label data corresponding to the prediction test data.
In a possible embodiment, if the update period is one day, the online model from the same day of the previous week is used to perform inference on the prediction test data to generate a prediction result, that is, m takes the value 7, and the result is used as the label data corresponding to the prediction test data.
Optionally, the method for obtaining the label data corresponding to the prediction test data further includes:
taking the updated training data from the n-th update period before the current update period as the label data corresponding to the prediction test data, wherein n is a positive integer.
In this embodiment of the present application, the updated training data from a previous update period may be used as the label data corresponding to the prediction test data.
In a possible embodiment, the update period is one day. The user data from six days ago, that is, the day after the same day of the previous week, may be cleaned and then used as the label data corresponding to the prediction test data; in this case n takes the value 6, and the prediction accuracy of the updated model is tested according to the prediction test data and the corresponding label data.
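In one possible implementation, the three label-generation options above may be sketched as follows; the history object and the model interfaces are hypothetical stand-ins for however an update system stores past models and past training data.

    def labels_from_prediction_model(prediction_model, prediction_test_data):
        # Option 1: run a pre-trained prediction model on the current period's data.
        return prediction_model.predict(prediction_test_data)

    def labels_from_past_online_model(history, prediction_test_data, m: int = 7):
        # Option 2: run the online model from the m-th update period before the current one.
        past_model = history.online_model(periods_ago=m)
        return past_model.predict(prediction_test_data)

    def labels_from_past_training_data(history, n: int = 6):
        # Option 3: reuse the updated training data from the n-th update period before the current one.
        return history.updated_training_data(periods_ago=n)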
Optionally, bringing the updated model online according to the prediction accuracy specifically includes:
bringing the updated model online in response to the prediction accuracy being greater than or equal to a preset accuracy threshold;
and retraining the online model according to the updated training data in response to the prediction accuracy being less than the accuracy threshold.
In a possible embodiment, the preset accuracy threshold is 98%. The updated model can be brought online only when the prediction accuracy reaches or exceeds 98%; otherwise, the online model needs to be trained again according to the updated training data.
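In one possible implementation, this gating step may be sketched as follows, assuming the 98% threshold from the example; bring_online and retrain are hypothetical callbacks supplied by the update system.

    ACCURACY_THRESHOLD = 0.98  # preset accuracy threshold from the example above

    def gate_updated_model(prediction_accuracy: float, bring_online, retrain) -> None:
        """Bring the updated model online if it passes the test; otherwise retrain it."""
        if prediction_accuracy >= ACCURACY_THRESHOLD:
            bring_online()  # replace the online model with the updated model
        else:
            retrain()       # retrain on the updated training data and test again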
Fig. 3 is a flowchart illustrating an updating method of an online model according to an exemplary embodiment, and as shown in fig. 3, step 102 in fig. 1 specifically includes the following steps.
Step 301, sending the updated training data to a training module through a message queue, and instructing the training module to train the model parameters of the online model according to the updated training data until the corresponding loss function converges.
In the embodiment of the present application, after the user data is cleaned to obtain the updated training data, the online model can be trained according to the updated training data. The updated training data is sent to the training module through a message queue. The training module is a module deployed online for training the model. The training module inputs the updated training data into the online model for training and adjusts the model parameters of the online model until the loss function converges.
Step 302, constructing the updated model according to the updated model parameters.
After the loss function converges, the online model is adjusted according to the updated model parameters to obtain the updated model.
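In one possible implementation, steps 301 and 302 may be sketched with PyTorch as one example framework; the optimizer, the loss function, the convergence threshold, and the deep copy used to construct the updated model are assumptions for illustration, not details fixed by this disclosure.

    import copy
    import torch
    from torch import nn

    def train_updated_model(online_model: nn.Module, data_loader,
                            loss_threshold: float = 1e-3, max_epochs: int = 100) -> nn.Module:
        """Train the model parameters on the updated training data until the loss converges,
        then return the updated model built from the updated parameters."""
        model = copy.deepcopy(online_model)   # train a copy so the serving model is untouched
        optimizer = torch.optim.Adam(model.parameters())
        loss_fn = nn.MSELoss()                # assumed loss; the disclosure only requires convergence

        loss_value = float("inf")
        for _ in range(max_epochs):
            for features, targets in data_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(features), targets)
                loss.backward()
                optimizer.step()
                loss_value = loss.item()
            if loss_value < loss_threshold:   # "until the corresponding loss function converges"
                break
        return model                          # the updated model with updated parameters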
FIG. 4 is a block diagram illustrating an update system for an online model according to an exemplary embodiment. As shown in Fig. 4, the system includes an online service module, a message queue, a training module, and a testing module.
During operation of the online model update system, the online service module contains the online model. After real-time user data is input into the online service module, inference is performed to generate prediction results, which are fed back to the user. Meanwhile, the updated training data generated by cleaning the user data is sent to the training module through the message queue, and the online model is also sent to the training module. The training module trains the online model according to the updated training data to generate new model parameters, from which the updated model can be generated. The updated model is then input into the testing module; the testing module inputs the test data in generalized scenarios into the updated model for inference to generate prediction results, computes the prediction accuracy of those results against the labels corresponding to the test data in generalized scenarios, and determines according to the prediction accuracy whether to bring the updated model online, that is, whether to replace the online model in the online service module with the updated model.
Optionally, the message queue is a Talos message queue, configured to feed the updated training data into the training module in order.
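In one possible implementation, the hand-off from the online service module to the training module may be sketched with Python's standard queue module as a stand-in for the Talos message queue; the loop structure and function names are hypothetical.

    import queue
    import threading

    training_queue: queue.Queue = queue.Queue()  # stand-in for the Talos message queue

    def online_service_loop(raw_user_data_stream):
        """Online service side: clean incoming user data and enqueue it for training."""
        for raw_batch in raw_user_data_stream:
            updated_training_data = clean_user_data(raw_batch)  # see the cleaning sketch above
            training_queue.put(updated_training_data)           # preserved in arrival order

    def training_module_loop():
        """Training module side: consume updated training data in order and train."""
        while True:
            updated_training_data = training_queue.get()
            # train the online model on this data, then pass the updated model to the testing module
            training_queue.task_done()

    threading.Thread(target=training_module_loop, daemon=True).start()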
FIG. 5 is a flowchart illustrating a method for updating an online model according to an exemplary embodiment. As shown in FIG. 5, the method includes the following:
After training starts, the training module updates the parameters of the model and calculates the loss function, which reflects the difference between the model's prediction results and the real results. When the loss function value is greater than or equal to a preset threshold, model training continues; when the loss function is smaller than the preset threshold, model training stops, and the updated model is generated according to the model parameters obtained by training.
The updated model is passed to the testing module, which tests the prediction accuracy of the updated model according to the test data in generalized scenarios and the corresponding label data. When the prediction accuracy is greater than or equal to the preset accuracy threshold, the updated model passes the test and is brought online; when the prediction accuracy is lower than the preset accuracy threshold, the updated model fails the test and the model needs to be trained again.
FIG. 6 is a block diagram illustrating an apparatus for updating an online model in accordance with an exemplary embodiment. As shown in fig. 6, the apparatus includes:
a data acquisition module 610, configured to acquire updated training data according to user data for the online model;
a model updating module 620, configured to train the online model according to the updated training data to obtain an updated model;
and a model testing module 630, configured to test the prediction accuracy of the updated model according to the test data in generalized scenarios and to bring the updated model online according to the prediction accuracy.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 7 is a block diagram illustrating an apparatus 700 for update of an online model, according to an example embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation at the device 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an open/closed state of the device 700 and the relative positioning of components, such as the display and keypad of the apparatus 700; the sensor assembly 714 may also detect a change in position of the apparatus 700 or a component of the apparatus 700, the presence or absence of user contact with the apparatus 700, the orientation or acceleration/deceleration of the apparatus 700, and a change in temperature of the apparatus 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the apparatus 700 to perform the method described above is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
FIG. 8 is a block diagram illustrating an apparatus 800 for update of an online model, according to an example embodiment. For example, the apparatus 800 may be provided as a server. Referring to fig. 8, the apparatus 800 includes a processing component 822, which further includes one or more processors, and memory resources, represented by memory 832, for storing instructions, such as applications, that may be executed by the processing component 822. The application programs stored in memory 832 may include one or more modules that each correspond to a set of instructions. Further, the processing component 822 is configured to execute instructions to perform the above-described methods.
The device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input/output (I/O) interface 858. The apparatus 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An update method of an online model, comprising:
acquiring updated training data according to user data for the online model;
training the online model according to the updated training data to obtain an updated model;
and testing the prediction accuracy of the updated model according to test data in generalized scenarios, and bringing the updated model online according to the prediction accuracy.
2. The method of claim 1, wherein testing the prediction accuracy of the updated model according to the test data in generalized scenarios specifically comprises:
inputting the test data in generalized scenarios into the updated model for inference to generate prediction results, wherein the test data in generalized scenarios comprises basic test data and prediction test data;
and comparing the prediction results with label data corresponding to the test data in generalized scenarios to obtain the prediction accuracy, wherein the label data corresponding to the test data in generalized scenarios comprises label data corresponding to the basic test data and label data corresponding to the prediction test data.
3. The method according to claim 2, wherein the basic test data is preset data, the label data corresponding to the basic test data is preset label data, and the prediction test data is the updated training data in a current update period.
4. The method according to claim 2, wherein the method for obtaining the label data corresponding to the prediction test data comprises:
inputting the prediction test data into a pre-trained prediction model for inference, and taking the generated result as the label data corresponding to the prediction test data.
5. The method of claim 4, wherein the method for obtaining the label data corresponding to the prediction test data further comprises:
inputting the prediction test data into the online model from the m-th update period before the current update period for inference, and taking the generated result as the label data corresponding to the prediction test data, wherein m is a positive integer.
6. The method of claim 4, wherein the method for obtaining the label data corresponding to the prediction test data further comprises:
taking the updated training data from the n-th update period before the current update period as the label data corresponding to the prediction test data, wherein n is a positive integer.
7. The method of claim 1, wherein said bringing the updated model online according to the prediction accuracy comprises:
bringing the updated model online in response to the prediction accuracy being greater than or equal to a preset accuracy threshold;
and retraining the online model according to the updated training data in response to the prediction accuracy being less than the accuracy threshold.
8. The method of claim 1, wherein the acquiring updated training data according to user data for the online model comprises:
collecting user data input to the online model, and cleaning the user data to obtain the updated training data.
9. The method of claim 1, wherein training the online model according to the updated training data to obtain an updated model comprises:
sending the updated training data to a training module through a message queue, and instructing the training module to train model parameters of the online model according to the updated training data until the corresponding loss function converges;
and constructing the updated model according to the updated model parameters.
10. An apparatus for updating an online model, comprising:
a data acquisition module, configured to acquire updated training data according to user data for the online model;
a model updating module, configured to train the online model according to the updated training data to obtain an updated model;
and a model testing module, configured to test the prediction accuracy of the updated model according to the test data in generalized scenarios and to bring the updated model online according to the prediction accuracy.
11. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 9.
12. A computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-9.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 9.
CN202211317187.9A 2022-10-26 2022-10-26 Updating method and device of online model and electronic equipment Pending CN115373718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211317187.9A CN115373718A (en) 2022-10-26 2022-10-26 Updating method and device of online model and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211317187.9A CN115373718A (en) 2022-10-26 2022-10-26 Updating method and device of online model and electronic equipment

Publications (1)

Publication Number Publication Date
CN115373718A true CN115373718A (en) 2022-11-22

Family

ID=84074349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211317187.9A Pending CN115373718A (en) 2022-10-26 2022-10-26 Updating method and device of online model and electronic equipment

Country Status (1)

Country Link
CN (1) CN115373718A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115952529A (en) * 2023-03-09 2023-04-11 北京云安腾宇科技有限公司 User data processing method, computing device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705717A (en) * 2019-09-30 2020-01-17 支付宝(杭州)信息技术有限公司 Training method, device and equipment of machine learning model executed by computer
CN111080413A (en) * 2019-12-20 2020-04-28 深圳市华宇讯科技有限公司 E-commerce platform commodity recommendation method and device, server and storage medium
CN111915020A (en) * 2020-08-12 2020-11-10 杭州海康威视数字技术股份有限公司 Method and device for updating detection model and storage medium
CN113095509A (en) * 2021-04-29 2021-07-09 百度在线网络技术(北京)有限公司 Updating method and device of online machine learning model
CN114692002A (en) * 2022-04-20 2022-07-01 网易(杭州)网络有限公司 Updating method and device of prediction model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221122