CN115359341A - Model updating method, device, equipment and medium - Google Patents

Model updating method, device, equipment and medium

Info

Publication number
CN115359341A
Authority
CN
China
Prior art keywords
model
training
preset
data
determining
Prior art date
Legal status
Granted
Application number
CN202210998311.6A
Other languages
Chinese (zh)
Other versions
CN115359341B (en)
Inventor
鲁斌
梁艳菊
Current Assignee
Wuxi Internet Of Things Innovation Center Co ltd
Original Assignee
Wuxi Internet Of Things Innovation Center Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Internet Of Things Innovation Center Co ltd filed Critical Wuxi Internet Of Things Innovation Center Co ltd
Priority to CN202210998311.6A priority Critical patent/CN115359341B/en
Publication of CN115359341A publication Critical patent/CN115359341A/en
Application granted granted Critical
Publication of CN115359341B publication Critical patent/CN115359341B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a model updating method, a model updating device, model updating equipment and a model updating medium, relating to the field of artificial intelligence. The method comprises the following steps: collecting target video data in a preset data collection period, and determining effective pictures from the target video data by using a preset data analysis method; adding annotation data to the target object in the effective pictures by using a preset weighted average method, based on the interactive scene of the target object; inputting the effective pictures carrying the annotation data into a preset pre-training model for training, and determining whether the post-training model is superior to the pre-training model by using a preset judgment rule; if it is superior, performing an online test on the post-training model and determining its credibility based on the negative feedback rate; and when the credibility is greater than a preset credibility threshold, taking the post-training model as the operation model and setting the pre-training model to a backup state. The invention can analyze the frequency of external negative feedback to start model updating automatically; event response requires only simple judgment-type manual interaction, which reduces maintenance complexity.

Description

Model updating method, device, equipment and medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a model updating method, a model updating device, model updating equipment and a model updating medium.
Background
Collecting data with vision sensors and completing batched monitoring tasks through artificial intelligence models provides convenience for automation and is a trend in artificial intelligence applications. However, during industrial deployment, most existing post-deployment update strategies follow a process of offline training followed by an update to the deployment environment. Their main disadvantages are that they require considerable human involvement and that the participants need a certain level of professional knowledge, which would demand long-term on-site maintenance after deployment and is clearly unrealistic. In the data recovery stage of a scene, many systems and schemes adopt a strategy of periodic cloud-side collection and data updating; however, this strategy creates hidden dangers for the confidentiality of sensitive scene data, and under specific conditions it is not feasible at all.
To address these problems, timed and periodic data backflow and model enhancement are often adopted in the field of intelligent scene monitoring; this strategy is simple and effective and can mitigate the problems at low cost.
However, in the process of making scene monitoring intelligent, how to avoid the model failing to adapt to the scene automatically under timed and periodic data backflow and model enhancement, and how to avoid the inability to judge data validity during data collection, remain the problems to be solved in the field.
Disclosure of Invention
In view of this, an object of the present invention is to provide a model updating method, apparatus, device, and medium, which can analyze the frequency of external negative feedback to start model updating automatically; in terms of manual interaction, only simple judgment-type interaction for event response is needed, little specialized expertise is required, and the maintenance complexity can be reduced. The specific scheme is as follows:
in a first aspect, the present application discloses a model updating method, including:
collecting target video data in a preset data collection period, and determining an effective picture from the target video data by using a preset data analysis method;
based on the interactive scene of the target object in the effective picture, adding marking data to the target object in the effective picture by using a preset weighted average method;
inputting the effective picture with the labeled data as a new sample picture into a preset pre-training model for training to obtain a post-training model;
determining whether the trained model is superior to the pre-training model by using a preset judgment rule;
if the model after training is superior to the model before training, performing online test on the model after training, and determining the reliability of the model after training based on the negative feedback rate of the model after training in the online test process;
and when the reliability is greater than a preset reliability threshold value, taking the trained model as an operation model, and setting the model before training as a backup state.
Optionally, the determining an effective picture from the target video data by using a preset data analysis method includes:
determining a target picture from the target video data by using a preset frame interval, extracting a background in the target picture by using a frame difference method, and then performing background modeling by using a preset Gaussian mixture model;
determining a background binary image corresponding to the target image, and determining the intersection ratio of the target image;
and if the intersection ratio of the target picture is smaller than a preset first threshold value, determining the target picture as an effective picture.
Optionally, the adding, by using a preset weighted average method, annotation data to the target object in the effective picture based on the interactive scene of the target object in the effective picture includes:
determining an interaction scene of a target object in the effective picture;
if the target object in the effective picture is in a continuous frame interactive scene, determining the tracking position of the target object in the last frame of the effective picture by using a preset first weighted average method based on the continuous frame tracking position in the effective picture, and adding labeling data to the target object in the effective picture based on the continuous frame tracking position and the tracking position of the last frame;
if the target object in the effective picture is in a sudden change interaction scene, a standard marking position is obtained, a marking position of the target object in the last frame of the effective picture is determined by a preset second weighted average method based on the standard marking position and a tracking position of a continuous frame in the effective picture, and then marking data are added to the target object in the effective picture based on the tracking position of the continuous frame and the tracking position of the last frame.
Optionally, the determining an interaction scene of a target object in the effective picture includes:
determining pictures with difference coefficients meeting a preset second threshold value in the effective pictures by utilizing a preset data tracking algorithm;
and determining the pictures with the difference coefficients smaller than a preset second threshold value as continuous frame interactive scenes, and determining the pictures with the difference coefficients not smaller than the preset second threshold value as abrupt change interactive scenes.
Optionally, the obtaining a standard annotation position includes:
determining the effective picture in the mutation interaction scene as a target effective picture and sending the target effective picture to a preset labeling data receiving interface;
and receiving the marking data received by the preset marking data receiving interface, and determining the standard marking position of the target effective picture based on the marking data.
Optionally, the determining, by using a preset determination rule, whether the trained model is better than the pre-training model includes:
determining a historical data set, and adding the effective picture with the labeled data into the historical data set as a newly added sample picture to determine a current data set; the historical data set comprises a historical training set and a historical test set which are divided based on a preset first division ratio;
dividing the current data set into a current test set and a current training set by using a preset second division ratio, and testing the pre-training model and the post-training model by using the historical test set and the current test set respectively so as to obtain test results corresponding to the pre-training model and the post-training model respectively;
and judging whether the model after training is superior to the model before training based on the test result.
Optionally, the obtaining the test results corresponding to the pre-training model and the post-training model respectively includes:
respectively obtaining historical test average precision and current test average precision of the model before training and the model after training;
correspondingly, the determining whether the model after training is better than the model before training based on the test result includes:
and judging whether the historical test average precision and the current test average precision of the trained model are both greater than the historical test average precision and the current test average precision of the model before training.
In a second aspect, the present application discloses a model updating apparatus, comprising:
the data analysis module is used for collecting target video data in a preset data collection period and determining effective pictures from the target video data by using a preset data analysis method;
the data labeling module is used for adding labeling data to the target object in the effective picture by utilizing a preset weighted average method based on the interactive scene of the target object in the effective picture;
the model training module is used for inputting the effective pictures with the labeled data as new sample pictures into a preset pre-training model for training so as to obtain a post-training model;
the model judging module is used for determining whether the trained model is superior to the pre-trained model by utilizing a preset judging rule;
the model testing module is used for carrying out online testing on the trained model if the trained model is superior to the pre-trained model and determining the credibility of the trained model based on the negative feedback rate of the trained model in the online testing process;
and the model updating module is used for taking the trained model as an operation model and setting the model before training as a backup state when the credibility is greater than a preset credibility threshold.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the aforementioned model updating method.
In a fourth aspect, the present application discloses a computer storage medium for storing a computer program; wherein the computer program when executed by a processor implements the steps of the model updating method disclosed in the foregoing.
The method comprises the steps of firstly, collecting target video data in a preset data collection period, and determining effective pictures from the target video data by using a preset data analysis method; based on the interactive scene of the target object in the effective picture, adding marking data to the target object in the effective picture by using a preset weighted average method; inputting the effective picture with the labeled data as a new sample picture into a preset pre-training model for training to obtain a post-training model; determining whether the trained model is superior to the pre-training model by using a preset judgment rule; if the model after training is superior to the model before training, performing online test on the model after training, and determining the reliability of the model after training based on the negative feedback rate of the model after training in the online test process; and when the reliability is greater than a preset reliability threshold value, taking the trained model as an operation model, and setting the model before training as a backup state. By the method provided by the embodiment, after data screening analysis and data labeling are carried out on target video data, the model is trained by using newly added sample data, and the model after training and the model before training are used for comparison test, so that the model after training is superior to the model before training and the model after training is used as an operation model when online conditions are met. Therefore, the automatic sample labeling scheme with less manual intervention can be adopted for labeling samples in different interactive scenes, the operation difficulty is reduced in the whole scheme, and the later maintenance of the model is facilitated. 
In addition, the method can automatically manage data updating and model updating, can effectively monitor the iteration of the model, is convenient for backtracking, and can ensure the adaptability of the model to a new scene and enhance the usability of the model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a model updating method provided by the present application;
FIG. 2 is a flowchart of a specific model updating method provided in the present application;
FIG. 3 is a flow chart of a frame difference method provided by the present application;
FIG. 4 is a functional block diagram provided herein;
FIG. 5 is a schematic diagram of a model updating apparatus according to the present application;
fig. 6 is a block diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, during intelligent scene monitoring, timed and periodic data backflow and model enhancement can leave the model unable to adapt to the scene automatically, and data validity cannot be judged during data collection. In the present application, model updating can be started automatically by analyzing the frequency of external negative feedback, and only simple judgment-type interaction for event response is needed in terms of manual interaction; little specialized expertise is required, and the maintenance complexity can be reduced.
The embodiment of the invention discloses a model updating method, which is described with reference to fig. 1 and comprises the following steps:
step S11: target video data are collected in a preset data collection period, and effective pictures are determined from the target video data by using a preset data analysis method.
In some specific embodiments, pictures may be determined from the target video data at a preset frame interval, and the pictures may be preprocessed by a preset data analysis method to determine the effective pictures. In some preferred embodiments, in order to reduce data redundancy while avoiding the loss of information about sudden events, the preset frame interval generally adopts a 5-frame interval.
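The interval sampling described above can be sketched as follows; the helper name `sample_frames` and its list-based interface are assumptions for illustration, with the 5-frame default taken from the preferred embodiment:

```python
def sample_frames(frames, interval=5):
    """Keep every `interval`-th frame from a decoded video stream,
    reducing redundancy while still catching sudden events."""
    return [f for i, f in enumerate(frames) if i % interval == 0]
```

For example, a 12-frame clip sampled at the default interval keeps frames 0, 5 and 10.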
Step S12: and adding annotation data for the target object in the effective picture by using a preset weighted average method based on the interactive scene of the target object in the effective picture.
In this embodiment, the adding, based on the interactive scene of the target object in the effective picture and by using a preset weighted average method, the annotation data to the target object in the effective picture may include: determining an interaction scene of a target object in the effective picture; if the target object in the effective picture is in a continuous frame interactive scene, determining the tracking position of the target object in the last frame of the effective picture by using a preset first weighted average method based on the continuous frame tracking position in the effective picture, and adding labeling data to the target object in the effective picture based on the continuous frame tracking position and the tracking position of the last frame; if the target object in the effective picture is in a sudden change interaction scene, a standard marking position is obtained, a marking position of the target object in the last frame of the effective picture is determined by a preset second weighted average method based on the standard marking position and a tracking position of a continuous frame in the effective picture, and then marking data are added to the target object in the effective picture based on the tracking position of the continuous frame and the tracking position of the last frame.
In this embodiment, the determining an interaction scene of a target object in the effective picture may include: determining, by using a preset data tracking algorithm, the pictures among the effective pictures whose difference coefficients satisfy a preset second threshold; and determining the pictures whose difference coefficients are smaller than the preset second threshold as continuous-frame interaction scenes, and the pictures whose difference coefficients are not smaller than the preset second threshold as abrupt-change interaction scenes. In this embodiment, the difference coefficient Δ may be calculated, for example, as
Δ = (1/N)·Σ_{t=1}^{N} 1(IoU_t < τ),
where N is the number of effective pictures, which can be set according to the specific scene, t is the time index, IoU_t is the intersection-over-union (Intersection over Union) of the picture corresponding to time t, and τ is the intersection-over-union threshold appearing in the formula. In a preferred embodiment, τ may be set to 0.3.
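A minimal sketch of the per-frame intersection-over-union and of the difference coefficient; the indicator-style reading of the coefficient (fraction of frames below the threshold) is an assumption, since the original formula is given only as an image:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def difference_coefficient(ious, threshold=0.3):
    """Assumed form: fraction of the N effective pictures whose IoU
    with the background falls below the preset threshold."""
    return sum(1 for v in ious if v < threshold) / len(ious)
```

With the preferred threshold of 0.3, a sequence of IoU values [0.1, 0.5, 0.2, 0.9] yields a difference coefficient of 0.5.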
Specifically, in the continuous-frame interaction scenario of this embodiment, the last-frame position bbox_avg in the continuous-frame tracking result may be determined by a preset first weighted-average formula, for example
bbox_avg = Σ_t (w_t · bbox_t) / Σ_t w_t,
a weighted mean of the historical tracking results bbox_t. In the abrupt-change interaction scenario, if the result of manual labeling is denoted bbox_label, then the several tracking results bbox_t adjacent to that frame are combined with bbox_label by a preset second weighted-average formula to obtain the last-frame position. Here bbox_t is a historical tracking result, and c is the sequence number, among the effective pictures, of the picture at which the abrupt change occurs.
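As an illustration of the first weighted-average idea, the sketch below averages the tracked boxes coordinate by coordinate; the exact weights are not fixed by the text, so the recency weighting used as the default here is an assumption:

```python
def weighted_last_position(track_boxes, weights=None):
    """Weighted mean of tracked boxes (each a 4-tuple x1, y1, x2, y2),
    standing in for the preset first weighted-average formula."""
    if weights is None:
        # assumed scheme: later frames get larger weights
        weights = list(range(1, len(track_boxes) + 1))
    total = sum(weights)
    return tuple(
        sum(w * b[k] for w, b in zip(weights, track_boxes)) / total
        for k in range(4)
    )
```

With equal weights the result is the plain coordinate-wise mean; with the recency weights the estimate is pulled toward the most recent box.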
In this embodiment, the obtaining the standard labeling position may include: determining the effective picture in the mutation interaction scene as a target effective picture and sending the target effective picture to a preset labeling data receiving interface; and receiving the marking data received by the preset marking data receiving interface, and determining the standard marking position of the target effective picture based on the marking data.
This step mainly relies on the preset data tracking algorithm. In a continuous-frame interaction scene, for continuous picture data whose background differs considerably from the historical values, a new target position can be obtained automatically, so that targets with partial, continuous change are annotated. For data from abrupt-change scenes, the corresponding pictures can be fed back to the preset annotation-data receiving interface, so that after manually labeled annotation data are obtained through human-machine interaction, the annotation position is determined based on the small number of labeling results returned by the preset annotation-data receiving interface.
Step S13: and inputting the effective picture with the labeled data as a new sample picture into a preset pre-training model for training so as to obtain a post-training model.
In this embodiment, data is collected according to the preset data collection period in step S11, the preset data collection period is used as a model training period, and after each period is finished, a newly added sample picture collected in the current period is input into the current preset pre-training model for training, so as to generate a post-training model.
Step S14: and determining whether the trained model is superior to the pre-trained model by using a preset judgment rule.
In this embodiment, the determining whether the trained model is better than the pre-trained model by using the preset determination rule may include: determining a historical data set, and adding the effective picture carrying the annotation data into the historical data set as a newly added sample picture to determine a current data set, the historical data set comprising a historical training set and a historical test set divided based on a preset first division ratio; dividing the current data set into a current test set and a current training set by using a preset second division ratio, and testing the pre-training model and the post-training model with the historical test set and the current test set respectively, so as to obtain the test results corresponding to each model; and judging, based on the test results, whether the post-training model is better than the pre-training model. It can be understood that, in this embodiment, after the post-training model B is obtained, it may be tested on the historical test set a to obtain a result B_a and on the new current test set b to obtain a result B_b; the pre-training model A is likewise tested on the historical test set a and the current test set b to obtain A_a and A_b. The indexes of B_a and A_a, and of B_b and A_b, are then compared: when B_a is better than A_a and B_b is better than A_b, model B is accepted; otherwise the flow returns to the data reflow stage of S11, the ratio of new samples to old samples in the training set is adjusted, and the model is retrained.
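The acceptance rule (accept the new model only when it beats the old one on both the historical and the current test set) can be sketched as follows; the metric names `a_hist`, `a_cur`, `b_hist`, `b_cur` are illustrative:

```python
def accept_new_model(a_hist, a_cur, b_hist, b_cur):
    """Accept post-training model B only if it beats pre-training
    model A on BOTH test sets (metric such as mAP, higher is better)."""
    return b_hist > a_hist and b_cur > a_cur
```

If the rule returns False, the flow would go back to the data reflow stage and adjust the new/old sample ratio before retraining.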
In this embodiment, the obtaining of the test results corresponding to the pre-training model and the post-training model respectively may include: obtaining the historical-test average precision and the current-test average precision of the pre-training model and of the post-training model respectively; correspondingly, the determining whether the post-training model is better than the pre-training model based on the test results includes: judging whether both the historical-test average precision and the current-test average precision of the post-training model are greater than those of the pre-training model. It can be understood that, in a specific embodiment, taking a target detection task as an example, the model may be used mainly to detect the position, type and number of target objects in the test set and to obtain an mAP (mean average precision) value, from which a measure of model quality is derived. Here the precision is
P = TP / (TP + FP),
and the recall is
R = TP / (TP + FN),
where P(R) is the precision-recall curve, AP is the area under P(R) for one category, mAP is the mean of AP over the categories, TP denotes true positives, FP denotes false positives, and FN denotes false negatives.
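A minimal sketch of these metrics; `average_precision` below uses simple trapezoidal integration of the P-R curve, which is one common convention and an assumption relative to the text:

```python
def precision(tp, fp):
    """P = TP / (TP + FP)."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """R = TP / (TP + FN)."""
    return tp / (tp + fn) if tp + fn else 0.0

def average_precision(pr_points):
    """Area under a P-R curve given as (recall, precision) points
    sorted by recall, as a stand-in for per-class AP; mAP would be
    the mean of this value over categories."""
    ap = 0.0
    for (r0, p0), (r1, p1) in zip(pr_points, pr_points[1:]):
        ap += (r1 - r0) * (p0 + p1) / 2  # trapezoid on each segment
    return ap
```

A perfect detector, with precision 1.0 at every recall level, yields an AP of 1.0.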
In this embodiment, if the average precision of the post-training model is better than that of the pre-training model, the method further includes: performing backup storage of the historical data set; and taking the data set containing the newly added sample pictures as the current historical data set, and dividing the current historical data set into a current historical training set and a current historical test set based on the preset first division ratio. That is, upon update: if the post-training model is accepted, the new training set is taken as the historical training set, and the previous historical training set is stored as a backup training set; the new test set and the historical test set are randomly sampled according to the preset first division ratio to form a new test set. In some preferred embodiments, the preset first division ratio may be, for example, 2:1 between the new test set and the historical test set.
Step S15: and if the post-training model is superior to the pre-training model, performing online test on the post-training model, and determining the reliability of the post-training model based on the negative feedback rate of the post-training model in the online test process.
Step S16: and when the reliability is greater than a preset reliability threshold value, taking the trained model as an operation model, and setting the model before training as a backup state.
In this embodiment, after the post-training model B is accepted, it enters the online test stage, during which the credibility of model B is derived from the negative feedback rate; when the credibility exceeds the threshold, model B replaces model A and formally operates online, while model A enters a backup state.
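One plausible mapping from the negative feedback rate to a credibility score is sketched below; the exact mapping and the threshold value are not specified in the text, so both are assumptions:

```python
def credibility(negative_events, total_events):
    """Assumed mapping: credibility as one minus the negative
    feedback rate observed during the online test."""
    if total_events == 0:
        return 0.0
    return 1.0 - negative_events / total_events

def promote(cred, threshold=0.9):
    """Replace the running model A with model B only when the
    credibility exceeds the preset threshold (value assumed)."""
    return cred > threshold
```

For instance, 5 negative feedback events out of 100 give a credibility of 0.95, which would pass a 0.9 threshold.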
In the embodiment, target video data are collected in a preset data collection period, and an effective picture is determined from the target video data by using a preset data analysis method; adding annotation data to the target object in the effective picture by using a preset weighted average method based on the interactive scene of the target object in the effective picture; inputting the effective picture with the labeled data as a new sample picture into a preset pre-training model for training to obtain a post-training model; determining whether the trained model is superior to the pre-trained model by using a preset determination rule; if the model after training is superior to the model before training, performing online test on the model after training, and determining the reliability of the model after training based on the negative feedback rate of the model after training in the online test process; and when the reliability is greater than a preset reliability threshold value, taking the trained model as an operation model, and setting the model before training as a backup state. By the method provided by the embodiment, after data screening analysis and data labeling are carried out on target video data, the model is trained by using newly added sample data, and the model after training and the model before training are used for comparison test, so that the model after training is superior to the model before training and the model after training is used as an operation model when online conditions are met. 
Therefore, the method can adopt an automatic sample labeling scheme with less manual intervention, automatically label the sample in a continuous frame interaction scene, and determine the sample labeling by means of a manual auxiliary labeling mode when the tracking algorithm cannot position the target object in a sudden change interaction scene, so that the operation difficulty is reduced in the whole scheme, and the later maintenance of the model is facilitated. In addition, the method can automatically manage data updating and model updating, can effectively monitor the iteration of the model, is convenient for backtracking, and can ensure the adaptability of the model to a new scene and enhance the usability of the model.
Fig. 2 is a flowchart of a specific data analysis method provided in an embodiment of the present application. Referring to fig. 2, the method includes:
step S21: and determining a target picture from the target video data by using a preset frame interval, extracting a background in the target picture by using a frame difference method, and then performing background modeling by using a preset Gaussian mixture model.
In this embodiment, after the target video data is obtained, a picture may be determined by presetting a frame interval in the target video data, and in some preferred embodiments, in order to reduce data redundancy and avoid loss of information related to an emergency, the preset frame interval generally adopts a 5 frame interval.
The pictures are then preprocessed, mainly to filter out pictures that are too small or in a wrong image format, and to eliminate content loss caused by abnormal picture decoding by means of an algorithm exploiting the small background difference between consecutive frames. In a specific implementation, the background may be extracted by a frame difference method; fig. 3 is a flow chart of the frame difference method proposed in this embodiment: a picture at the preset frame position is determined, the pixel values of adjacent frames are subtracted and the absolute value taken, and after local clustering the picture background can be extracted.
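As an illustration of the frame sampling and frame difference steps described above, the following minimal numpy sketch keeps every fifth frame and binarizes the absolute difference of adjacent frames; the helper names and the binarization threshold of 25 are assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

def sample_frames(frames, interval=5):
    """Keep every `interval`-th frame to reduce data redundancy."""
    return frames[::interval]

def frame_difference_mask(prev, curr, thresh=25):
    """Absolute per-pixel difference between consecutive frames,
    binarized with a threshold; 1 marks changed (foreground) pixels."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# toy example: two 4x4 grayscale frames where a 2x2 region changes
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # "moving object" pixels
mask = frame_difference_mask(prev, curr)
```

In practice the difference mask would feed the local clustering step mentioned above to recover the picture background.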
Next, background pictures of consecutive frames over a certain time period are obtained by background modeling, and samples are classified by comparing background difference values. In a preferred embodiment a Gaussian mixture model may be selected, whose basic formulas and steps are as follows:
|X_t − μ_{i,t−1}| ≤ 2.5·σ_{i,t−1}, wherein X_t is the pixel value, and μ_{i,t−1} and σ_{i,t−1} are the mean and standard deviation of the ith Gaussian model; pixel X_t is assigned to the background only when the above inequality is satisfied and the ith Gaussian model meets the background requirement;
the model weight update formula is: w_{k,t} = (1−α)·w_{k,t−1} + α·M_{k,t}, wherein M_{k,t} takes the value 1 for the matched Gaussian model and 0 otherwise, and α is the learning rate;
the update formulas for the mean and variance of the matched Gaussian model are:
ρ = α·η(X_t | μ_k, σ_k);
μ_t = (1−ρ)·μ_{t−1} + ρ·X_t;
σ_t² = (1−ρ)·σ_{t−1}² + ρ·(X_t − μ_t)².
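A per-pixel sketch of the match test and the Gaussian mixture update rules above, in numpy. The learning rate α = 0.05 and the function names are illustrative assumptions; a production implementation would maintain several Gaussians per pixel and rank them to pick the background components.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Value of the Gaussian density eta(x | mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def is_background(x, mu, sigma):
    """Match test: pixel x belongs to this Gaussian if within 2.5 sigma."""
    return abs(x - mu) <= 2.5 * sigma

def update_matched_gaussian(x, w, mu, sigma, alpha=0.05):
    """One update step for the Gaussian matched by pixel value x,
    following the weight / mean / variance formulas above (M_{k,t} = 1)."""
    rho = alpha * gaussian_pdf(x, mu, sigma)
    new_w = (1 - alpha) * w + alpha * 1.0
    new_mu = (1 - rho) * mu + rho * x
    new_var = (1 - rho) * sigma ** 2 + rho * (x - new_mu) ** 2
    return new_w, new_mu, float(np.sqrt(new_var))
```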
step S22: and determining the background binary image corresponding to the target picture, and determining the intersection ratio (intersection-over-union, IoU) of the target picture.
In this embodiment, after background modeling is completed, the background binary images M(I_t) and M(I_{t−1}) of the video frame I_t and its adjacent frame I_{t−1} are computed, with M(I)(x, y) = 1 where pixel (x, y) is classified as background and 0 elsewhere. The threshold is then applied to their intersection ratio:
IoU_t = |M(I_t) ∩ M(I_{t−1})| / |M(I_t) ∪ M(I_{t−1})|.
step S23: and if the intersection ratio of the target picture is smaller than a preset first threshold value, determining the target picture as an effective picture.
In a preferred embodiment, a threshold of 0.4, reflecting natural object motion, is adopted: when IoU_t of frame t is smaller than the preset threshold, frame t is an effective picture.
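The binary-mask comparison and the 0.4 threshold can be illustrated with a small numpy sketch; the helper names are assumptions for illustration.

```python
import numpy as np

def mask_iou(m1, m2):
    """Intersection-over-union of two binary background masks."""
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return float(inter) / float(union) if union else 1.0

def is_valid_picture(mask_t, mask_prev, threshold=0.4):
    """Frame t is an effective picture when its background mask differs
    enough from the previous frame's, i.e. IoU_t < threshold."""
    return mask_iou(mask_t, mask_prev) < threshold

# toy masks: background occupies a different half of each frame
m_prev = np.zeros((4, 4), dtype=bool); m_prev[:2, :] = True
m_curr = np.zeros((4, 4), dtype=bool); m_curr[2:, :] = True
```

Here the two masks are disjoint (IoU = 0 < 0.4), so the current frame would be kept as an effective picture, while an unchanged background (IoU = 1) would be discarded.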
This embodiment thus provides a data analysis method: after the target video data are obtained, pictures are selected from them at the preset frame interval and preprocessed, and effective pictures are then screened from all the pictures by applying, in sequence, the frame difference method, Gaussian background modeling and background binary image generation, thereby providing effective pictures for the subsequent data labeling.
Fig. 4 is a schematic block diagram provided by the present application. The whole process of the present invention comprises two stages, data reflow and model enhancement, which correspond to the two dashed boxes in the figure. The data reflow stage is shown in the upper dashed box: after the data are collected and preprocessed, they are screened (the screening process comprising a data analysis stage and a data labeling stage) to determine the newly added samples (i.e. the effective pictures). Once the newly added samples are determined, the model enhancement stage begins: the model is trained with the effective pictures, a comparison test is performed between the pre-training model and the post-training model, and the test set, training set and model are updated based on the comparison test result.
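The two-stage loop of fig. 4 can be sketched abstractly as follows; all names here are hypothetical placeholders (screening, labeling, training and evaluation are passed in as callables), not the application's actual interfaces.

```python
def data_reflow(frames, is_valid, label):
    """Data-reflow stage: screen effective frames, then auto-label them
    to produce the newly added samples."""
    return [label(f) for f in frames if is_valid(f)]

def model_enhancement(old_model, new_samples, train, evaluate):
    """Model-enhancement stage: retrain on the new samples, run the
    comparison test, and return (running_model, backup_model)."""
    new_model = train(old_model, new_samples)
    if evaluate(new_model) > evaluate(old_model):  # post-training model wins
        return new_model, old_model                # promote; keep old as backup
    return old_model, None                         # keep the running model

# toy run: even frames count as "valid", training just grows the model score
samples = data_reflow(range(6), lambda f: f % 2 == 0, lambda f: (f, "label"))
running, backup = model_enhancement(
    1, samples, lambda m, s: m + len(s), lambda m: m)
```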
Referring to fig. 5, the embodiment of the present application discloses a model updating apparatus, which may specifically include:
the data analysis module 11 is configured to collect target video data in a preset data collection period, and determine an effective picture from the target video data by using a preset data analysis method;
the data labeling module 12 is configured to add labeling data to the target object in the effective picture by using a preset weighted average method based on the interactive scene of the target object in the effective picture;
the model training module 13 is configured to input the effective picture with the labeled data as a new sample picture into a preset pre-training model for training to obtain a post-training model;
a model decision module 14, configured to determine whether the trained model is better than the pre-training model by using a preset decision rule;
the model testing module 15 is used for performing online testing on the trained model if the trained model is superior to the pre-trained model, and determining the credibility of the trained model based on the negative feedback rate of the trained model in the online testing process;
and the model updating module 16 is configured to, when the reliability is greater than a preset reliability threshold, use the trained model as an operating model, and set the model before training in a backup state.
The device collects target video data over a preset data collection period and determines effective pictures from them by a preset data analysis method; adds annotation data to the target object in each effective picture by a preset weighted average method, based on the interaction scene of the target object in that picture; inputs the effective pictures carrying the annotation data, as new sample pictures, into a preset pre-training model for training to obtain a post-training model; determines, by a preset determination rule, whether the post-training model is superior to the pre-training model; if so, tests the post-training model online and determines its credibility from its negative feedback rate during the online test; and, when the credibility exceeds a preset credibility threshold, takes the post-training model as the running model and sets the pre-training model to a backup state. In this way, after the target video data have been screened, analyzed and labeled, the model is trained on the newly added sample data and compared against the pre-training model, so that the post-training model replaces it as the running model once it proves superior and meets the online conditions. The device therefore supports an automatic sample labeling scheme with little manual intervention: samples are labeled automatically in continuous-frame interaction scenes, and manual assisted labeling is used only when the tracking algorithm cannot locate the target object in an abrupt-change interaction scene, which reduces operating difficulty throughout the scheme and eases later maintenance of the model.
In addition, the device automatically manages data updates and model updates, allows model iterations to be monitored effectively and traced back easily, and ensures the model's adaptability to new scenes while enhancing its usability.
Further, an electronic device is also disclosed in the embodiments of the present application. Fig. 6 is a block diagram of an electronic device 20 according to an exemplary embodiment, and nothing in the figure should be considered a limitation on the scope of the application.
Fig. 6 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a display 24, an input-output interface 25, a communication interface 26, and a communication bus 27. Wherein the memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps in the model updating method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 26 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol followed by the communication interface is any communication protocol that can be applied to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, virtual machine data 223 and the like, and the storage may be transient or permanent.
The operating system 221 is used for managing and controlling each hardware device on the electronic device 20 and the computer program 222, and may be Windows Server, NetWare, Unix, Linux, or the like. The computer programs 222 may include, in addition to the computer program executed by the electronic device 20 to perform the model updating method disclosed in any of the foregoing embodiments, computer programs for performing other specific tasks.
Further, the present application discloses a computer-readable storage medium, which may be a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a magnetic disk, an optical disk, or any other form of storage medium known in the art. The computer program stored thereon, when executed by a processor, implements the model updating method disclosed in the foregoing. For the specific steps of the method, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The model updating method, apparatus, device and storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method and its core idea; meanwhile, for a person skilled in the art, there may be variations in specific embodiments and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A model update method, comprising:
collecting target video data in a preset data collection period, and determining an effective picture from the target video data by using a preset data analysis method;
adding annotation data to the target object in the effective picture by using a preset weighted average method based on the interactive scene of the target object in the effective picture;
inputting the effective picture with the labeled data as a new sample picture into a preset pre-training model for training to obtain a post-training model;
determining whether the trained model is superior to the pre-training model by using a preset judgment rule;
if the model after training is superior to the model before training, performing online test on the model after training, and determining the reliability of the model after training based on the negative feedback rate of the model after training in the online test process;
and when the reliability is greater than a preset reliability threshold value, taking the trained model as an operation model, and setting the model before training as a backup state.
2. The model updating method of claim 1, wherein said determining the valid picture from the target video data by using the predetermined data analysis method comprises:
determining a target picture from the target video data by using a preset frame interval, extracting a background in the target picture by using a frame difference method, and then performing background modeling by using a preset Gaussian mixture model;
determining a background binary image corresponding to the target image, and determining the intersection ratio of the target image;
and if the intersection ratio of the target picture is smaller than a preset first threshold value, determining the target picture as an effective picture.
3. The model updating method according to claim 1, wherein the adding annotation data to the target object in the active picture by using a preset weighted average method based on the interactive scene of the target object in the active picture comprises:
determining an interaction scene of a target object in the effective picture;
if the target object in the effective picture is in a continuous frame interactive scene, determining the tracking position of the target object in the last frame of the effective picture by using a preset first weighted average method based on the continuous frame tracking position in the effective picture, and adding labeling data to the target object in the effective picture based on the continuous frame tracking position and the tracking position of the last frame;
if the target object in the effective picture is in a sudden change interaction scene, a standard marking position is obtained, the marking position of the target object in the last frame in the effective picture is determined by a preset second weighted average method based on the standard marking position and the tracking position of the continuous frame in the effective picture, and then marking data are added to the target object in the effective picture based on the tracking position of the continuous frame and the tracking position of the last frame.
4. The model updating method of claim 3, wherein the determining the interaction scenario of the target object in the active picture comprises:
determining pictures with difference coefficients meeting a preset second threshold value in the effective pictures by utilizing a preset data tracking algorithm;
and determining the pictures with the difference coefficients smaller than a preset second threshold value as continuous frame interactive scenes, and determining the pictures with the difference coefficients not smaller than the preset second threshold value as abrupt change interactive scenes.
5. The model updating method according to claim 3 or 4, wherein the obtaining of the standard annotation position comprises:
determining the effective picture in the mutation interaction scene as a target effective picture and sending the target effective picture to a preset labeling data receiving interface;
and receiving the marking data received by the preset marking data receiving interface, and determining the standard marking position of the target effective picture based on the marking data.
6. The model updating method according to claim 3, wherein the determining whether the trained model is better than the pre-trained model by using a preset determination rule comprises:
determining a historical data set, and adding the effective picture with the labeled data into the historical data set as a newly added sample picture to determine a current data set; the historical data set comprises a historical training set and a historical test set which are divided based on a preset first division ratio;
dividing the current data set into a current test set and a current training set by using a preset second division ratio, and testing the pre-training model and the post-training model by using the historical test set and the current test set respectively so as to obtain test results corresponding to the pre-training model and the post-training model respectively;
and judging whether the model after training is superior to the model before training or not based on the test result.
7. The method according to claim 6, wherein the obtaining the test results corresponding to the pre-training model and the post-training model respectively comprises:
respectively obtaining historical test average precision and current test average precision of the model before training and the model after training;
correspondingly, the determining whether the model after training is better than the model before training based on the test result includes:
and judging whether the historical test average precision and the current test average precision of the trained model are both greater than the historical test average precision and the current test average precision of the model before training.
8. A model updating apparatus, comprising:
the data analysis module is used for collecting target video data in a preset data collection period and determining effective pictures from the target video data by using a preset data analysis method;
the data labeling module is used for adding labeling data to the target object in the effective picture by using a preset weighted average method based on the interactive scene of the target object in the effective picture;
the model training module is used for inputting the effective pictures with the labeled data as new sample pictures into a preset pre-training model for training so as to obtain a post-training model;
the model judging module is used for determining whether the trained model is superior to the pre-training model by utilizing a preset judging rule;
the model testing module is used for carrying out online testing on the trained model if the trained model is superior to the pre-trained model and determining the credibility of the trained model based on the negative feedback rate of the trained model in the online testing process;
and the model updating module is used for taking the trained model as an operation model and setting the model before training as a backup state when the reliability is greater than a preset reliability threshold value.
9. An electronic device comprising a processor and a memory; wherein the processor, when executing the computer program stored in the memory, implements the model update method of any of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the model updating method of any one of claims 1 to 7.
CN202210998311.6A 2022-08-19 2022-08-19 Model updating method, device, equipment and medium Active CN115359341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210998311.6A CN115359341B (en) 2022-08-19 2022-08-19 Model updating method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN115359341A true CN115359341A (en) 2022-11-18
CN115359341B CN115359341B (en) 2023-11-17

Family

ID=84002416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210998311.6A Active CN115359341B (en) 2022-08-19 2022-08-19 Model updating method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115359341B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118485896A (en) * 2024-07-16 2024-08-13 天翼视联科技有限公司 Algorithm testing method and device, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10209974B1 (en) * 2017-12-04 2019-02-19 Banjo, Inc Automated model management methods
CN110264495A (en) * 2017-12-29 2019-09-20 华为技术有限公司 A kind of method for tracking target and device
CN110991476A (en) * 2019-10-18 2020-04-10 北京奇艺世纪科技有限公司 Training method and device for decision classifier, recommendation method and device for audio and video, and storage medium
US10643104B1 (en) * 2017-12-01 2020-05-05 Snap Inc. Generating data in a messaging system for a machine learning model
WO2020093694A1 (en) * 2018-11-07 2020-05-14 华为技术有限公司 Method for generating video analysis model, and video analysis system
CN113780342A (en) * 2021-08-04 2021-12-10 杭州国辰机器人科技有限公司 Intelligent detection method and device based on self-supervision pre-training and robot
WO2022048572A1 (en) * 2020-09-02 2022-03-10 杭州海康威视数字技术股份有限公司 Target identification method and apparatus, and electronic device
CN114463838A (en) * 2021-12-29 2022-05-10 浙江大华技术股份有限公司 Human behavior recognition method, system, electronic device and storage medium
WO2022156084A1 (en) * 2021-01-22 2022-07-28 平安科技(深圳)有限公司 Method for predicting behavior of target object on the basis of face and interactive text, and related device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643104B1 (en) * 2017-12-01 2020-05-05 Snap Inc. Generating data in a messaging system for a machine learning model
US10209974B1 (en) * 2017-12-04 2019-02-19 Banjo, Inc Automated model management methods
CN110264495A (en) * 2017-12-29 2019-09-20 华为技术有限公司 A kind of method for tracking target and device
EP3723046A1 (en) * 2017-12-29 2020-10-14 Huawei Technologies Co., Ltd. Target tracking method and device
WO2020093694A1 (en) * 2018-11-07 2020-05-14 华为技术有限公司 Method for generating video analysis model, and video analysis system
CN110991476A (en) * 2019-10-18 2020-04-10 北京奇艺世纪科技有限公司 Training method and device for decision classifier, recommendation method and device for audio and video, and storage medium
WO2022048572A1 (en) * 2020-09-02 2022-03-10 杭州海康威视数字技术股份有限公司 Target identification method and apparatus, and electronic device
WO2022156084A1 (en) * 2021-01-22 2022-07-28 平安科技(深圳)有限公司 Method for predicting behavior of target object on the basis of face and interactive text, and related device
CN113780342A (en) * 2021-08-04 2021-12-10 杭州国辰机器人科技有限公司 Intelligent detection method and device based on self-supervision pre-training and robot
CN114463838A (en) * 2021-12-29 2022-05-10 浙江大华技术股份有限公司 Human behavior recognition method, system, electronic device and storage medium


Also Published As

Publication number Publication date
CN115359341B (en) 2023-11-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant