CN116051962A - Model upgrading method and system and data processing method and system - Google Patents


Info

Publication number
CN116051962A
CN116051962A (application number CN202310189047.6A)
Authority
CN
China
Prior art keywords
model
data
service
training
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310189047.6A
Other languages
Chinese (zh)
Inventor
王丽芸 (Wang Liyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aibee Beijing Intelligent Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd filed Critical Beijing Aibee Technology Co Ltd
Priority claimed from application CN202310189047.6A
Published as CN116051962A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Stored Programmes (AREA)

Abstract

The application discloses a model upgrading method and system and a data processing method and system. The model upgrading method is applied to multi-service scenarios in which each scenario starts from the same initial model. The method comprises the following steps: collecting first training data from the business data already processed by the model to be upgraded; preprocessing the first training data to obtain second training data; self-training at least one candidate model with the second training data; testing each trained model and selecting a target model according to the test results; and upgrading the model to be upgraded based on the target model. From the collection of training data to the output of the model, the whole process requires no manual intervention, so data leakage is avoided. Moreover, because the training data come from the corresponding service scenario, each iteratively upgraded model adapts better to that scenario.

Description

Model upgrading method and system and data processing method and system
Technical Field
The present invention relates to the field of data processing, and in particular to a model upgrading method and system, a data processing method and system, a computer device, and a storage medium.
Background
In many business scenarios, privacy-protection requirements forbid data from flowing out. In an airport, for example, the face pictures captured at security checks and the face pictures snapped elsewhere in the terminal relate to personal privacy and may not be copied out. Algorithm engineers therefore cannot obtain in-scenario data with which to train a business model, which increases the difficulty of training.
Disclosure of Invention
To address this problem, the present application provides a model upgrading method and system, a data processing method and system, a computer device and a storage medium, which optimize a service model without manual intervention.
The application discloses the following technical scheme:
The first aspect of the present application provides a model upgrading method, which is applied to multi-service scenarios, wherein the initial model applied in each of the multi-service scenarios is the same;
the method comprises the following steps:
collecting first training data from the business data processed by the model to be upgraded, wherein the processed business data comprise the processed service information together with the result information obtained when the model to be upgraded processed that service information;
preprocessing the first training data to obtain second training data;
self-training at least one model to be trained by using the second training data;
testing the at least one trained model to obtain test results, and selecting a target model according to the test results;
and upgrading the model to be upgraded based on the target model.
In one possible implementation, preprocessing the first training data to obtain the second training data comprises:
cleaning the first training data, and preprocessing the cleaned first training data to obtain the second training data.
In one possible implementation, upgrading the model to be upgraded based on the target model comprises:
comparing the performance of the target model and the model to be upgraded according to a preset rule;
and, if the target model performs better than the model to be upgraded, replacing the model to be upgraded with the target model.
In one possible implementation, the method further includes: the number of self-training models is determined based on the size of the storage space.
In one possible implementation, the method further includes: the self-training is triggered once every preset time or when the first training data reaches a preset amount.
A second aspect of the present application provides a data processing method, applied to a multi-service scenario, the method including:
receiving data to be processed;
processing the data to be processed by using the service processing model corresponding to the data to be processed, to obtain a processing result; the service processing model is upgraded by a model upgrading method as described in any implementation of the first aspect of the embodiments of the present application.
A third aspect of the present application provides a model upgrading system, which is applied to a multi-service system, wherein the initial model applied by each service of the multi-service system is the same;
the model upgrade system comprises:
the training data acquisition module, configured to collect first training data from the business data processed by the model to be upgraded, wherein the processed business data comprise the processed service information together with the result information obtained when the model to be upgraded processed that service information;
the training data processing module is used for preprocessing the first training data to obtain second training data;
the model self-training module is used for self-training at least one model to be trained by using the second training data;
the model test module, configured to test the at least one trained model to obtain test results and to select a target model according to the test results;
and the upgrade online module is used for upgrading the model to be upgraded based on the target model.
A fourth aspect of the present application provides a data processing system comprising: a plurality of data processing subsystems and a model upgrading system as described in the third aspect of the embodiments of the present application;
the data processing subsystem is used for receiving data to be processed; processing the data to be processed by utilizing a service processing model corresponding to the data to be processed to obtain a processing result; the model upgrading system is used for upgrading the business processing model of each data processing subsystem.
A fifth aspect of the present application provides a computer device comprising: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the model upgrading method of any implementation of the first aspect of the present application or the data processing method of the second aspect of the embodiments of the present application.
A sixth aspect of the present application provides a computer readable storage medium having instructions stored therein, which when run on a terminal device, cause the terminal device to perform the model upgrade method according to any one of the first aspect of the present application or the data processing method according to the second aspect of the present application.
Compared with the prior art, the application has the following beneficial effects:
the model upgrading method is applied to a multi-service scene, and initial models applied by the multi-service scene are the same; the method comprises the following steps: collecting first training data from business data processed by a model to be upgraded; preprocessing the first training data to obtain second training data; self-training at least one model to be trained by using second training data; testing at least one trained model to obtain a test result, and obtaining a target model according to the test result; and upgrading the model to be upgraded based on the target model. The whole process is free from manual intervention from the collection of training data to the output of the model, and the leakage of the data can be avoided. Meanwhile, as the training data is derived from the data in the corresponding service scene, the iteratively upgraded model can be better adapted to the service scene.
Drawings
To illustrate the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art could derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a method for upgrading a model according to an embodiment of the present application;
fig. 2 is a schematic diagram of model upgrade of a multi-service scenario provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a business scenario model upgrading process provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a service scenario running process provided in an embodiment of the present application;
FIG. 5 is a block diagram of a model upgrade system according to an embodiment of the present application;
FIG. 6 is a block diagram of a data processing system according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the above objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures and detailed description are described in further detail below.
As mentioned above, privacy protection forbids data from flowing out of many business scenarios. For example, in an airport, the face pictures taken at security checks and the face pictures snapped in the terminal relate to personal privacy, and copying them out is not allowed. Algorithm engineers therefore cannot obtain the data in the business scenario, which increases the training difficulty.
In addition, the model used in a business scenario only needs to fit that scenario well; it does not need to generalize. Yet training a separate model for every scenario is a heavy burden on algorithm engineers, so in practice a single general model that can adapt to every scenario is trained instead, and precisely because it must generalize, such a model is very difficult to train.
In view of this, embodiments of the present application provide a model upgrade method and system, a data processing method and system, a computer device, and a storage medium.
Referring to fig. 1, a flowchart of a model upgrading method provided in an embodiment of the present application. The method is applied to multi-service scenarios, and the initial model applied in each scenario is the same.
As shown in fig. 1, a method for upgrading a model includes:
s110, collecting first training data from service data processed by a model to be upgraded; the processed service data comprises processed service information and processed result information obtained by processing the service information by a model to be upgraded;
in order to achieve the aim of iterative upgrading, the processed service information and the processed result information obtained by processing the service information of the model to be upgraded are utilized to carry out self-training, so that the model with better service processing performance is obtained, namely, the training data used in each self-training is more accurate than the processing result information in the training data used in the last self-training.
In one example, the processed service information is a face image, and the processed result information includes the recognition result for the face image, the recognition time, the distance between the face and the image-capturing device, and so on.
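The record described above can be sketched as a small data structure; the field names and values below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

# Illustrative layout for one item of processed business data:
# the service information plus the result information the model produced.
@dataclass
class ProcessedRecord:
    face_image: bytes          # the processed service information
    recognition_id: str        # recognition result from the in-service model
    recognition_ms: float      # time taken to recognise, in milliseconds
    camera_distance_m: float   # distance between face and capture device

rec = ProcessedRecord(b"\x89PNG...", "person_042", 35.0, 1.8)
```

Records of this shape would accumulate in the scenario's database and later serve as the first training data.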
S120, preprocessing the first training data to obtain second training data;
s130, performing self-training on at least one model to be trained by using second training data;
s140, testing at least one trained model to obtain a test result, and obtaining a target model according to the test result;
and S150, upgrading the model to be upgraded based on the target model.
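The five steps S110 to S150 can be sketched as a single loop iteration. The function names and the choice of three candidate models below are assumptions for illustration only, not the patent's implementation.

```python
def upgrade_once(collect, preprocess, train, evaluate, current_model):
    """One iteration of the S110-S150 loop sketched above."""
    first_data = collect(current_model)                 # S110: data the model processed
    second_data = preprocess(first_data)                # S120: clean / preprocess
    candidates = [train(second_data) for _ in range(3)] # S130: self-train several models
    scored = [(evaluate(m), m) for m in candidates]     # S140: test each candidate
    best_score, best_model = max(scored, key=lambda s: s[0])
    # S150: upgrade only if the best candidate beats the model in service
    return best_model if best_score > evaluate(current_model) else current_model
```

Calling `upgrade_once` repeatedly would realize the iterative upgrade chain described later for each scenario.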
In the embodiment of the present application, the training data are collected from the model's own output, the whole process requires no manual intervention, and data leakage is avoided. Moreover, because the training data come from the corresponding service scenario, the iteratively upgraded model adapts better to that scenario.
Referring to fig. 2, a schematic diagram of model upgrading in a multi-service scenario provided in an embodiment of the present application. As shown in fig. 2, service scenario 1, service scenario 2 and service scenario 3 all process their traffic with the same initial model, and the three copies of the model are upgraded independently during use. In service scenario 1, the initial model A0 is upgraded to model B1, and while B1 is in use it is upgraded to C1. In service scenario 2, A0 is upgraded to B2, and B2 in turn to C2. In service scenario 3, A0 is upgraded to B3, and B3 in turn to C3.
Referring to fig. 3, a schematic diagram of the model upgrading process for one business scenario provided in an embodiment of the present application. As shown in fig. 3, taking the upgrade process of service scenario 1 as an example: the service data X1 are processed with the initial model A0 to obtain the processed data Y1. The data X1 and Y1 of service scenario 1 are used for model self-training, producing several trained models B11, B12 and B13. These are tested, the best-performing one is selected as the target model B1 according to the test results, and if B1 performs better than the currently used service model A0, B1 replaces A0. The service data X1' are then processed by the current service model B1 to obtain processed data Y1'. The data X1' and Y1' of service scenario 1 are used for the next round of self-training, producing trained models C11, C12 and C13; these are tested, the best one is selected as the target model C1, and if C1 performs better than the current service model B1, C1 replaces B1. Repeating this process iteratively optimizes the current service model. The upgrade process for the other scenarios is similar to that of service scenario 1 and is not repeated here.
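The per-scenario version chain above (A0 replaced by B1, B1 by C1) can be sketched as a small registry; the class and method names below are hypothetical, chosen only to make the replacement rule concrete.

```python
# Minimal sketch of one scenario's model version chain (A0 -> B1 -> C1 ...).
class ScenarioModels:
    def __init__(self, initial):
        self.history = [initial]      # every scenario starts from the same A0

    @property
    def current(self):
        return self.history[-1]      # the model currently serving traffic

    def promote(self, candidate, better_than_current):
        # Replace the in-service model only when the candidate wins the comparison.
        if better_than_current(candidate, self.current):
            self.history.append(candidate)

scene1 = ScenarioModels("A0")
scene1.promote("B1", lambda new, old: True)   # B1 beats A0, so B1 replaces it
scene1.promote("C1", lambda new, old: True)   # C1 beats B1, so C1 replaces it
```

Keeping the full history also makes a rollback possible if a promoted model later misbehaves, though the patent does not discuss rollback.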
Referring to fig. 4, a schematic diagram of the service scenario operation process provided in an embodiment of the present application. As shown in fig. 4, the service data processed by the current online model are collected; preprocessing operations such as cleaning are performed; model self-training is run on the preprocessed service data; the several models obtained from self-training are tested; the best-performing one becomes the target model according to the test results; and the online model is upgraded based on the target model. The whole flow forms a closed loop: service data flow into the training module, a model is produced, and that model is connected back to the service, improving the processing of service data. None of these operations requires manual intervention, which greatly reduces the workload of algorithm engineers, and the data remain within the system throughout.
In one embodiment, preprocessing the first training data to obtain the second training data comprises:
cleaning the first training data, and preprocessing the cleaned first training data to obtain the second training data.
In one example, data collection and cleaning times can be scheduled. For instance, to avoid affecting online traffic, the previous day's data can be collected every early morning and the data cleaning module run then. The data may be cleaned with a series of clustering algorithms; after cleaning, they are inserted into a database of historically cleaned data.
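As a toy illustration of the clustering-based cleaning step: a real system would cluster face embeddings, but the shape of the grouping logic is similar. The one-dimensional features and the threshold value are assumptions for illustration.

```python
def clean_by_clustering(features, threshold=0.5):
    """Greedily group near-duplicate feature values and keep one representative
    per group, discarding the redundant samples."""
    representatives = []
    for f in sorted(features):
        # Start a new cluster when the gap to the last kept value is large enough.
        if not representatives or f - representatives[-1] > threshold:
            representatives.append(f)
    return representatives
```

On embeddings, the scalar gap would become a distance in feature space, but the keep-one-per-cluster outcome is the same.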
Because these data will be used for training, some preprocessing is required. For example, when the data include face pictures, operations such as face detection, cropping and alignment need to be performed on each picture.
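A minimal sketch of the cropping step on a row-major image; in practice a face detector would first produce the bounding box, and alignment would follow. The toy image and box values are assumptions.

```python
def crop(image, box):
    """Cut a (top, left, height, width) box out of a row-major image
    represented as a list of pixel rows."""
    top, left, h, w = box
    return [row[left:left + w] for row in image[top:top + h]]

# Toy 4x6 "image" whose pixel value is simply the column index.
img = [[p for p in range(6)] for _ in range(4)]
face = crop(img, (1, 2, 2, 3))   # a 2x3 region starting at row 1, column 2
```

Detection supplies the box, cropping isolates the face region, and alignment would then warp it to a canonical pose before training.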
In one embodiment, the method further comprises: self-training is triggered once every preset time interval, or whenever the first training data reach a preset amount.
The business data accumulated each day are kept in the historical cleaning database, and model training can be triggered either at fixed intervals or whenever a certain amount of new data has accumulated.
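The two trigger conditions can be sketched as follows; the one-week interval and the sample threshold are illustrative assumptions, not values from the patent.

```python
def should_trigger(last_run_ts, sample_count, now_ts,
                   interval_s=7 * 24 * 3600, min_samples=10_000):
    """Trigger self-training when either the preset time interval has elapsed
    since the last run, or enough new cleaned samples have accumulated."""
    time_due = now_ts - last_run_ts >= interval_s
    data_due = sample_count >= min_samples
    return time_due or data_due
```

A scheduler in the training module would poll this predicate, reset `last_run_ts` and the sample counter after each run, and kick off step S130 when it returns true.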
In one embodiment, upgrading the model to be upgraded based on the target model comprises:
comparing the performance of the target model and the model to be upgraded according to a preset rule;
and, if the target model performs better than the model to be upgraded, replacing the model to be upgraded with the target model.
For each business scenario, a corresponding test set needs to be established to evaluate model quality. When a training task finishes, a model test is triggered: each model is first converted into the format the test requires, then every model produced by the training is tested one by one on the scenario's test set, and the model that performs best on the test set is selected. The remaining models can be deleted to avoid occupying disk space.
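The select-the-best-and-delete-the-rest step can be sketched as below; `evaluate` and `delete` are stand-ins for the real test harness and the file removal, since the patent does not name them.

```python
def select_best(candidates, evaluate, delete):
    """Rank candidate models by their test-set score, keep the winner,
    and delete every rejected candidate to free disk space."""
    ranked = sorted(candidates, key=evaluate, reverse=True)
    best, rejected = ranked[0], ranked[1:]
    for model in rejected:
        delete(model)            # rejected models would otherwise occupy disk
    return best
```

The returned model is the target model that step S150 then compares against the online model.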
The best model obtained from the current round of training is then compared with the online model; if it performs better than the online model, the online model needs to be upgraded.
In one example, some preparation is required before an upgrade, such as model-format conversion, encryption, model compression, and updates to configuration files and databases.
In one embodiment, the method further comprises: the number of self-training models is determined based on the size of the storage space.
Some training parameters are set automatically. For example, the batch size of a training run can be computed from current GPU-memory usage instead of being set manually, which avoids GPU-memory overflow. Fault-tolerance mechanisms also exist: for example, if training fails because of a machine fault or some other reason, the program can resume from the previous training run.
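The automatic batch-size rule might look like the following heuristic; the 80% headroom factor and the cap of 256 are assumptions, not values from the patent.

```python
def auto_batch_size(free_mem_bytes, per_sample_bytes, headroom=0.8, cap=256):
    """Derive a safe training batch size from free GPU memory and a measured
    per-sample memory footprint, so training never overflows the card."""
    usable = int(free_mem_bytes * headroom)   # leave headroom for activations etc.
    return max(1, min(cap, usable // per_sample_bytes))
```

For example, with 8 GiB free and 32 MiB per sample the rule yields a batch of 204; with almost no free memory it degrades gracefully to a batch of 1 rather than failing.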
The embodiment of the application provides a data processing method, which is applied to a multi-service scene, and comprises the following steps:
receiving data to be processed;
processing the data to be processed by using the service processing model corresponding to the data to be processed, to obtain a processing result; the service processing model is upgraded by a model upgrading method as described in any implementation of the first aspect of the embodiments of the present application.
Referring to fig. 5, a block diagram of a model upgrading system provided in an embodiment of the present application. The system is applied to a multi-service system, and the initial model applied by each service of the multi-service system is the same. As shown in fig. 5,
the model upgrade system comprises:
the training data acquisition module 510 is configured to acquire first training data from service data processed by a model to be upgraded;
the training data processing module 520 is configured to pre-process the first training data to obtain second training data;
a model self-training module 530, configured to perform self-training on at least one model to be trained using the second training data;
the model testing module 540 is configured to test at least one trained model to obtain a test result, and obtain a target model according to the test result;
an upgrade online module 550, configured to upgrade a model to be upgraded based on the target model.
In one embodiment, the training data processing module is specifically configured to:
and cleaning the first data, and preprocessing the cleaned first data to obtain second training data.
In one embodiment, the upgrade online module is specifically configured to: comparing the performance of the target service model and the model to be upgraded according to a preset rule; and if the performance of the target service model is better than that of the model to be upgraded, replacing the model to be upgraded with the target service model.
In one embodiment, the model self-training module is further configured to: the number of self-training models is determined based on the size of the storage space.
With reference now to FIG. 6, a block diagram of a data processing system is presented in accordance with an embodiment of the present application. As shown in fig. 6, the data processing system includes: a plurality of data processing subsystems (three are taken as examples in the figure, namely a data processing subsystem 1, a data processing subsystem 2 and a data processing subsystem 3) and a model upgrading system in the embodiment of the application;
the data processing subsystem is used for receiving data to be processed; processing the data to be processed by utilizing a service processing model corresponding to the data to be processed to obtain a processing result; the model upgrading system is used for upgrading the business processing model of each data processing subsystem.
The embodiment of the application provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the model upgrade method described in the embodiment of the application.
It should be noted that computer program code for carrying out operations of the present application may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Referring to fig. 7, a schematic diagram of an electronic device 700 suitable for use in implementing the model upgrade method shown in fig. 1 is shown. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processor, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that the computer readable medium of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal that propagates in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
It should be noted that the term "comprising" and variants thereof as used herein are open ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present application. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description is only of the preferred embodiments of the present application and is presented as a description of the principles of the technology being utilized. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in this application is not limited to the specific combinations of features described above, but is intended to cover other embodiments in which the features described above, or their equivalents, are combined in any manner without departing from the spirit of the disclosure. For example, the features described above may be replaced with (but are not limited to) technical features having similar functions disclosed in the present application.

Claims (10)

1. A model upgrading method applied to a multi-service scenario, characterized in that the initial models applied in the multi-service scenario are the same;
the method comprises the following steps:
collecting first training data from service data processed by a model to be upgraded; the processed service data comprises processed service information and result information obtained by processing the service information with the model to be upgraded;
preprocessing the first training data to obtain second training data;
self-training at least one model to be trained by using the second training data;
testing at least one trained model to obtain a test result, and obtaining a target model according to the test result;
and upgrading the model to be upgraded based on the target model.
2. The method of claim 1, wherein preprocessing the first training data to obtain the second training data comprises:
cleaning the first training data, and preprocessing the cleaned first training data to obtain the second training data.
3. The method of claim 1, wherein the upgrading the model to be upgraded based on the target model comprises:
comparing the performance of the target model and the model to be upgraded according to a preset rule;
and if the performance of the target model is better than that of the model to be upgraded, replacing the model to be upgraded with the target model.
4. The method according to claim 1, wherein the method further comprises: the number of self-training models is determined based on the size of the storage space.
5. The method according to claim 1, wherein the method further comprises: the self-training is triggered at preset time intervals, or when the amount of newly added first training data in the database reaches a preset amount.
6. A data processing method applied to a multi-service scenario, the method comprising:
receiving data to be processed;
processing the data to be processed by utilizing a service processing model corresponding to the data to be processed to obtain a processing result; the service processing model is upgraded using the model upgrading method according to any one of claims 1-4.
7. A model upgrade system applied to a multi-service system, characterized in that the initial models applied by the multi-service system are the same;
the model upgrade system comprises:
the training data acquisition module is used for collecting first training data from service data processed by the model to be upgraded; the processed service data comprises processed service information and result information obtained by processing the service information with the model to be upgraded;
the training data processing module is used for preprocessing the first training data to obtain second training data;
the model self-training module is used for self-training at least one model to be trained by using the second training data;
the model test module is used for testing at least one trained model to obtain a test result, and obtaining a target model according to the test result;
and the upgrade online module is used for upgrading the model to be upgraded based on the target model.
8. A data processing system, comprising: a plurality of data processing subsystems and the model upgrade system of claim 7;
the data processing subsystem is used for receiving data to be processed; processing the data to be processed by utilizing a service processing model corresponding to the data to be processed to obtain a processing result; the model upgrading system is used for upgrading the business processing model of each data processing subsystem.
9. A computer device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the model upgrading method according to any one of claims 1-5 or the data processing method according to claim 6.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein instructions, which when run on a terminal device, cause the terminal device to perform the model upgrade method according to any one of claims 1-5 or the data processing method according to claim 6.
CN202310189047.6A 2023-02-22 2023-02-22 Model upgrading method and system and data processing method and system Pending CN116051962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310189047.6A CN116051962A (en) 2023-02-22 2023-02-22 Model upgrading method and system and data processing method and system


Publications (1)

Publication Number Publication Date
CN116051962A true CN116051962A (en) 2023-05-02

Family

ID=86125725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310189047.6A Pending CN116051962A (en) 2023-02-22 2023-02-22 Model upgrading method and system and data processing method and system

Country Status (1)

Country Link
CN (1) CN116051962A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231024

Address after: Room 1201, 12 / F, building 1, zone 2, No. 81, Beiqing Road, Haidian District, Beijing 100094

Applicant after: AIBEE (BEIJING) INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: 100094 room 1202, 12 / F, 13 / F, building 1, zone 2, 81 Beiqing Road, Haidian District, Beijing

Applicant before: Beijing Aibi Technology Co.,Ltd.