CN116306875A - Drainage pipe network sample increment learning method based on space pre-learning and fitting - Google Patents


Info

Publication number
CN116306875A
Authority
CN
China
Prior art keywords
learning
space
model
training
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310559996.9A
Other languages
Chinese (zh)
Other versions
CN116306875B (en)
Inventor
姚光乐
祝钧桃
王洪辉
孙思源
吕兵
王磊
胡朝辉
彭鹏
刘瑛
何素兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Technology
Original Assignee
Chengdu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Technology
Priority to CN202310559996.9A priority Critical patent/CN116306875B/en
Publication of CN116306875A publication Critical patent/CN116306875A/en
Application granted granted Critical
Publication of CN116306875B publication Critical patent/CN116306875B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/182Network patterns, e.g. roads or rivers
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a drainage pipe network sample increment learning method based on space pre-learning and fitting, belonging to the field of image classification and recognition. The method comprises the steps of: (1) obtaining an initial data set; (2) training an initial model capable of recognizing the various defect classes of a drainage pipe network; (3) pre-allocating space for the unknown new data that will appear in the next round of learning, and fine-tuning the initial model; (4) when executing an incremental learning task, selecting part of the training data from the data set to fit the feature space of the previous model, while pre-allocating space for the next learning task. By pre-allocating space in advance for new drainage pipe network image data that may appear, and by selecting new data during learning so that it fits the pre-allocated space as closely as possible, the model avoids the degradation of overall recognition performance caused by large feature differences between new and old data, and recognition accuracy is improved.

Description

Drainage pipe network sample increment learning method based on space pre-learning and fitting
Technical Field
The invention relates to a sample increment learning method, in particular to a drainage pipe network sample increment learning method based on space pre-learning and fitting.
Background
At present, defect identification of drainage pipe networks is generally performed by acquiring image or video data through closed-circuit television (CCTV) inspection and periscope inspection and then identifying defects by manual interpretation. This approach is time-consuming, labor-intensive, and low in efficiency.
With the rapid development of deep learning in recent years, intelligent detection of drainage pipeline defects has emerged. It proceeds as follows: first, underground pipe network video is captured by a CCTV endoscopic robot; then, pipe network defects in the video frames are identified by an intelligent recognition model based on a deep neural network. Such a model is generally obtained by training on a set of data collected and labeled in advance.
However, in actual pipe network inspection work, the collected pipeline image data grows gradually over time, and because operation scenes differ, the characteristics of new and old image data, as well as their defect characteristics, also differ. Simple fine-tuning may degrade the overall recognition performance of the model, while retraining from scratch consumes large amounts of computation and storage. A new recognition method is therefore needed that can continuously and iteratively learn from newly acquired data, improving pipe network defect recognition performance while reducing the consumption of computation and storage resources.
Disclosure of Invention
The invention aims to provide a drainage pipe network sample increment learning method based on space pre-learning and fitting that solves the above problems: as the number of recognized images grows over time, the method can continuously and iteratively learn and improve recognition accuracy, while effectively reducing the consumption of computation and storage resources.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows: a drainage pipe network sample increment learning method based on space pre-learning and fitting comprises the following steps:
(1) Collect a large number of drainage pipe network images in different scenes, label the defect class of each drainage pipe network image, and take the labeled images as training samples; all training samples form an initial data set D_0. The defect classes in D_0 number n, where n is a positive integer;
Construct a task group t = {t_1, t_2, …, t_N} for incremental learning; the corresponding training data stream is D = {D_1, D_2, …, D_N}. The i-th task t_i corresponds to the data set D_i, and the model obtained after its learning finishes is M_i. Each D_i comprises a number of training samples whose class set is the same as that of D_0, with i = 1 to N, N being the total number of data sets;
(2) Build the basic learning task t_0: use the initial data set D_0 to train a ResNet network model, obtaining an initial model capable of recognizing the n defect classes; its feature space is H_0 and its loss function is loss_1;
(3) Pre-allocate space for the next learning, comprising steps (31)-(33);
(31) Divide the training samples in D_0 into n class sets according to defect class, and select several training samples from each class set to generate simulated features; a training sample x generates a simulated feature h(f(x)) through the following formula;
h(f(x)) = λ·f(x) + (1 − λ)·f(x_n)
where f(x) is the feature extracted from x by the ResNet network model, λ ∈ (0, 1) is the weight for generating the simulated feature, and x_n is the training sample at the greatest Euclidean distance from x within the class set to which x belongs;
(32) According to step (31), several simulated features are obtained for each class set; for each class set, a center value is calculated from all of its simulated features and used as the simulated center value of that class set;
(33) Through the objective function loss_2 for model fine-tuning and pre-allocated-space learning, adjust the initial model to obtain the model M_0; its feature space H_1 contains the features of both the D_0 samples and the simulated samples, and H_1 serves as the pre-allocated space for the next learning;
loss_2 = l_ce(C(f(x)), y) + γ·l_pf(C(h(f(x))), y)
where y is the label of training sample x, C(f(x)) is the initial model's predicted classification for x, C(h(f(x))) is the initial model's predicted classification for the simulated feature generated with x_n, l_ce(C(f(x)), y) is the cross-entropy loss of C(f(x)) and y, l_pf(C(h(f(x))), y) is the cross-entropy loss of C(h(f(x))) and y, and γ ∈ (0, 1) is a weight in loss_2;
(4) Learn the tasks in task group t in sequence, obtaining the model corresponding to each task after its learning finishes; the learning method for task t_i comprises steps (41)-(42);
(41) For task t_i: according to the simulated center value of each class set of D_{i-1}, select several training samples from each class set of D_i to form a selected data set D*_i;
(42) Train M_{i-1} with D*_i; this comprises fitting D*_i to the pre-allocated space obtained in task t_{i-1}, while using D*_i to pre-allocate space for task t_{i+1}, obtaining the pre-allocated space H_{i+1} of t_{i+1}; the model obtained after learning finishes is M_i.
As preferable: in step (1), the defect type includes, but is not limited to, a non-inspection job image, a defect pipe network image, or a defect-free pipe network image.
As preferable: for dataset D i To complete task t i-1 After that, the sum of new training samples obtained continuously in a period of time.
As preferable: in the step (2), the ResNet network model consists of a feature extractor and a classifier, a training sample x is sent into the ResNet network model, the feature extractor firstly extracts the feature f (x), and then the classifier outputs a prediction classification result C (f (x)), and a loss function loss of the ResNet network model 1 The following formula was used for calculation:
loss_1 = l_ce(C(f(x)), y)
As preferable: step (32) calculates the center value by:
u_c = (1/N_c) · Σ_{k=1}^{N_c} h(f(x_k))
where c is a class set, x_k is the k-th training sample of c, u_c is the center value of c, and N_c is the total number of training samples in c.
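Reading the center value as the mean of a class set's simulated features, step (32) reduces to a one-line computation. A minimal numpy sketch (the function name and toy vectors are illustrative, not part of the patent):

```python
import numpy as np

def simulated_center(sim_feats):
    """Step (32): simulated center value u_c of one class set.

    sim_feats -- the simulated features h(f(x_k)) of class c, shape (N_c, d)
    """
    # u_c is the per-dimension mean over all N_c simulated features
    return sim_feats.mean(axis=0)
```

This center value later acts as the anchor of the pre-allocated space for its class.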
As preferable: step (41) from D i Selecting a plurality of training samples for each class set, wherein the training samples are specifically as follows;
(a1) Calculate the distance between the simulated center value of one class set of D_{i-1} and each training sample in the corresponding class set of D_i; sort the training samples by distance from small to large and select the first Q training samples;
(a2) Following the method of step (a1), use the simulated center values of the remaining class sets of D_{i-1} to select Q training samples from each corresponding class set of D_i, obtaining n × Q training samples in total and forming the selected data set D*_i.
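Steps (a1)-(a2) can be sketched as a nearest-to-center selection over per-class feature arrays. A minimal numpy illustration (function name and data layout are assumptions, not the patent's notation):

```python
import numpy as np

def select_per_class(centers, feats_by_class, q):
    """Steps (a1)-(a2): for every class set, keep the Q training samples whose
    features lie closest (Euclidean distance) to the previous task's simulated
    center value; the result has n*q entries in total.

    centers        -- n simulated center values from D_{i-1}, one per class
    feats_by_class -- n feature arrays, one per class set of D_i
    q              -- number of samples to keep per class (Q)
    """
    selected = []
    for u_c, feats in zip(centers, feats_by_class):
        dists = np.linalg.norm(feats - u_c, axis=1)
        order = np.argsort(dists)          # small distances first
        selected.append(feats[order[:q]])
    return selected
```

Keeping only the small-distance samples is what lets D*_i fit the pre-allocated space well.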
As preferable: in step (42), D is used i Training M i-1 Obtaining M i Is of the objective function of loss 3 The following formula is adopted for calculation:
loss_3 = loss_2 + ε·l_fit,  with l_fit = ||f(x) − u_c||_2
where l_fit is the loss for fitting the pre-allocated space, ||·||_2 is the L2 norm, and ε is the weight of the fitting loss.
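The fitting term can be sketched as follows. The text only says that l_fit uses the L2 norm, so taking it as the distance between the new feature f(x) and the pre-allocated simulated center of its class is a plausible reading, not the patent's definitive formula; for brevity only the classification and fitting terms are shown, and the pre-allocation term of loss_2 would be added analogously:

```python
import numpy as np

def softmax_ce(logits, label):
    """Softmax cross-entropy l_ce(C(f(x)), y) for one sample."""
    z = logits - logits.max()
    return float(-(z - np.log(np.exp(z).sum()))[label])

def loss3(logits, label, feat, pre_center, eps=0.1):
    """Sketch of loss_3: classification loss plus eps-weighted fitting loss.

    feat       -- feature f(x) of a selected new sample
    pre_center -- simulated center u_c of its class in the pre-allocated space
    """
    l_fit = np.linalg.norm(feat - pre_center)   # ||f(x) - u_c||_2
    return softmax_ce(logits, label) + eps * l_fit
```

The l_fit term pulls new features toward the previously pre-allocated centers, which is exactly the behavior described for step (42).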
Regarding step (1): the invention is based on sample increment learning and continuously performs incremental learning on newly acquired training data sets. In the i-th round of sample increment learning, only the data in the data set D_i can be read, and the model obtained after learning finishes is M_i.
In the present invention, the class sets of the initial data set D_0 and of each training data set D_i are identical. For example, if the defect classes in D_0 are three, namely non-inspection operation images, defective pipe network images, and defect-free pipe network images, labeled class 1, class 2, and class 3 respectively, then the defect classes in each D_i are also these three, i.e. n = 3. Of course, the defect classes can be designed according to the actual situation and are not limited to these three. In addition, the initial data set D_0 is collected in large quantities in different environments and scenes at an early stage, so its data volume is larger than that of each D_i.
With respect to step (2): this step trains an initial model on the initial data set D_0. When the initial data set D_0 is first obtained, an initial model for drainage pipe network defect recognition is learned. The invention adopts a ResNet network model composed of a feature extractor f(·) and a classifier C(·). The model's input is one training sample of the initial data set D_0, denoted x; the feature extractor f extracts the feature f(x), and the classifier then outputs the predicted classification result C(f(x)). During learning of this model, the loss function loss_1 adopts the cross-entropy loss.
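Assuming l_ce is the standard softmax cross-entropy (the text only names it "cross entropy loss"), loss_1 for one sample can be sketched as follows; the helper name is illustrative:

```python
import numpy as np

def softmax_ce(logits, label):
    """Softmax cross-entropy l_ce(C(f(x)), y) for one sample.

    logits -- classifier output C(f(x)), shape (n_classes,)
    label  -- integer class label y
    """
    z = logits - logits.max()                  # subtract max for stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax
    return float(-log_probs[label])
```

With uniform logits over two classes, the loss is ln 2, the usual sanity check for a cross-entropy implementation.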
With respect to step (3): this step pre-allocates space for the next learning. New training samples will necessarily be introduced in the next round of learning, and because operation scenes differ, drainage pipe network image data collected at different times differ to some extent in features and defects; large feature differences between new and old data degrade the model's overall recognition performance. Therefore, a batch of simulated data is generated by analyzing the current task's data set, the simulated data is combined with that data set, and the model is adjusted through loss_2, yielding an adjusted model whose feature space contains both the features of the training samples in the current round's data set and the features of all simulated data. For the basic learning task t_0, the feature space of the initial model is H_0, containing only the features of D_0, but the feature space of the adjusted model M_0 is H_1. Because the simulated data are generated according to the rule of step (31), the pre-allocated space can later be fitted with the new data, giving the model backward-compatible learning capability better suited to practical application scenarios.
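The simulated-feature rule of step (31) and the adjustment objective loss_2 of step (33) can be sketched together on raw feature vectors. This is a minimal numpy illustration; the function names and the default λ and γ values are assumptions, not values fixed by the patent:

```python
import numpy as np

def simulate_feature(x_feat, class_feats, lam=0.6):
    """Step (31): h(f(x)) = lam*f(x) + (1-lam)*f(x_n), where x_n is the
    class member at the greatest Euclidean distance from x."""
    dists = np.linalg.norm(class_feats - x_feat, axis=1)
    f_xn = class_feats[np.argmax(dists)]       # feature of farthest sample x_n
    return lam * x_feat + (1.0 - lam) * f_xn

def softmax_ce(logits, label):
    """Softmax cross-entropy for one sample."""
    z = logits - logits.max()
    return float(-(z - np.log(np.exp(z).sum()))[label])

def loss2(logits_real, logits_sim, label, gamma=0.5):
    """Step (33): loss_2 = l_ce(C(f(x)), y) + gamma * l_pf(C(h(f(x))), y),
    reading both l_ce and l_pf as softmax cross-entropies."""
    return softmax_ce(logits_real, label) + gamma * softmax_ce(logits_sim, label)
```

Because h(f(x)) is pushed away from f(x) toward the farthest class member, the extra l_pf term forces the classifier to also cover the outer region of each class, which is the pre-allocated space.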
With respect to step (41): selecting the task's training samples. The invention uses the simulated data of the previous task to screen part of the training samples in the current task's data set, rather than directly using all samples in it. For example, for task t_1 the corresponding data set is D_1, which is screened using the simulated features generated from D_0. Specifically: by step (31), each class set of D_0 generates a set of simulated features; then, by step (32), a simulated center value is calculated for each class set of D_0 and used to select training samples from the corresponding class set of D_1. The selection criterion is the distance between a training sample and the simulated center value, keeping only the samples at small distances so as to better fit the pre-allocated space. Assuming three defect classes, each data set has 3 class sets and there are three simulated center values; using the 3 simulated center values of D_0, Q training samples are selected from each corresponding class set of D_1, so the selected data set D*_1 contains 3 × Q training samples. For a general defect class count n, D*_i contains n × Q training samples.
With respect to step (42): for task t_i, two pieces of work are completed simultaneously. On the one hand, the features of D*_i are embedded to fit the pre-allocated space H_i obtained from the previous task; on the other hand, D*_i is also used to pre-allocate space for task t_{i+1}, obtaining the pre-allocated space H_{i+1} of t_{i+1}. The method of pre-allocating space is the same as in step (3). Step (42) is implemented by adjusting the objective function loss_3, in which the loss for fitting the pre-allocated space is weighted. The aim is that, when learning new drainage pipe network image data, the model's features in the new feature space approach the centers of the space pre-allocated in the previous stage, avoiding squeezing and overwriting the embedded features of old drainage pipe network image data. After learning finishes, the model M_i is obtained and the newly learned pre-allocated space is H_{i+1}.
Compared with the prior art, the invention has the advantages that:
(1) The invention overcomes the problem that, in actual pipe network inspection work, the collected pipeline image data gradually grows over time and operation scenes differ, so that the characteristics of new and old image data and their defect characteristics also differ, reducing the accuracy of the recognition model. The method does not require the already-learned old drainage pipe network image data, saving storage resources; through space pre-learning, embedding space is allocated to new data in advance, preventing large feature differences between new and old data from degrading the model's overall recognition performance; and fitting the pre-allocated space with new data gives the model backward-compatible learning capability better suited to practical application scenarios.
(2) The invention is based on a sample increment learning method but differs from existing incremental recognition: existing class-incremental approaches increase the number of recognized classes as learning tasks grow, whereas the invention does not increase the number of recognized defect classes but continuously learns defect data of the same classes, continuously improving detection precision and finally reaching a recognition accuracy above 95%. Meanwhile, the proposed method recognizes drainage pipeline defect images very quickly, reaching about 24.5 ms/frame on an RTX2080 graphics card with an i5-5850 CPU.
Drawings
FIG. 1 is a flow chart of a learning method of the present invention;
fig. 2 is a schematic diagram of a pre-allocation space.
Description of the embodiments
The invention will be further described with reference to the accompanying drawings.
Example 1: referring to fig. 1 and 2, a drainage pipe network sample increment learning method based on space pre-learning and fitting includes the following steps:
(1) Collect a large number of drainage pipe network images in different scenes, label the defect class of each drainage pipe network image, and take the labeled images as training samples; all training samples form an initial data set D_0. The defect classes in D_0 number n, where n is a positive integer.
Construct a task group t = {t_1, t_2, …, t_N} for incremental learning; the corresponding training data stream is D = {D_1, D_2, …, D_N}. The i-th task t_i corresponds to the data set D_i, and the model obtained after its learning finishes is M_i. Each D_i comprises a number of training samples whose class set is the same as that of D_0, with i = 1 to N, N being the total number of data sets.
(2) Build the basic learning task t_0: use the initial data set D_0 to train a ResNet network model, obtaining an initial model capable of recognizing the n defect classes; its feature space is H_0 and its loss function is loss_1.
(3) Pre-allocate space for the next learning, comprising steps (31)-(33);
(31) Divide the training samples in D_0 into n class sets according to defect class, and select several training samples from each class set to generate simulated features; a training sample x generates a simulated feature h(f(x)) through the following formula;
h(f(x)) = λ·f(x) + (1 − λ)·f(x_n)
where f(x) is the feature extracted from x by the ResNet network model, λ ∈ (0, 1) is the weight for generating the simulated feature, and x_n is the training sample at the greatest Euclidean distance from x within the class set to which x belongs.
(32) According to step (31), several simulated features are obtained for each class set; for each class set, a center value is calculated from all of its simulated features and used as the simulated center value of that class set.
(33) Through the objective function loss_2 for model fine-tuning and pre-allocated-space learning, adjust the initial model to obtain the model M_0; its feature space H_1 contains the features of both the D_0 samples and the simulated samples, and H_1 serves as the pre-allocated space for the next learning;
loss_2 = l_ce(C(f(x)), y) + γ·l_pf(C(h(f(x))), y)
where y is the label of training sample x, C(f(x)) is the initial model's predicted classification for x, C(h(f(x))) is the initial model's predicted classification for the simulated feature generated with x_n, l_ce(C(f(x)), y) is the cross-entropy loss of C(f(x)) and y, l_pf(C(h(f(x))), y) is the cross-entropy loss of C(h(f(x))) and y, and γ ∈ (0, 1) is a weight in loss_2.
(4) Learn the tasks in task group t in sequence, obtaining the model corresponding to each task after its learning finishes; the learning method for task t_i comprises steps (41)-(42);
(41) For task t_i: according to the simulated center value of each class set of D_{i-1}, select several training samples from each class set of D_i to form a selected data set D*_i.
(42) Train M_{i-1} with D*_i; this comprises fitting D*_i to the pre-allocated space obtained in task t_{i-1}, while using D*_i to pre-allocate space for task t_{i+1}, obtaining the pre-allocated space H_{i+1} of t_{i+1}; the model obtained after learning finishes is M_i.
In this embodiment, in step (1), the defect classes include, but are not limited to, non-inspection operation images, defective pipe network images, and defect-free pipe network images; for example, defective pipe network images may be further subdivided by specific defect.
The data set D_i is the accumulation of new training samples obtained continuously over a period of time after task t_{i-1} is completed. Its quantity is smaller than that of D_0, but it constitutes new data accumulated over a period of time.
In step (2), the ResNet network model consists of a feature extractor and a classifier. A training sample x is fed into the ResNet network model; the feature extractor first extracts the feature f(x), and the classifier then outputs the predicted classification result C(f(x)). The loss function loss_1 of the ResNet network model is calculated by the following formula:
loss_1 = l_ce(C(f(x)), y)
the step (32) calculates the center value by the following equation.
u_c = (1/N_c) · Σ_{k=1}^{N_c} h(f(x_k))
where c is a class set, x_k is the k-th training sample of c, u_c is the center value of c, and N_c is the total number of training samples in c.
Step (41) selects several training samples for each class set from D_i as follows:
(a1) Calculate the distance between the simulated center value of one class set of D_{i-1} and each training sample in the corresponding class set of D_i; sort the training samples by distance from small to large and select the first Q training samples.
(a2) Following the method of step (a1), use the simulated center values of the remaining class sets of D_{i-1} to select Q training samples from each corresponding class set of D_i, obtaining n × Q training samples in total and forming the selected data set D*_i.
In step (42), the objective function loss_3 for training M_{i-1} with D*_i to obtain M_i is calculated by the following formula:
loss_3 = loss_2 + ε·l_fit,  with l_fit = ||f(x) − u_c||_2
where l_fit is the loss for fitting the pre-allocated space, ||·||_2 is the L2 norm, and ε is the weight of the fitting loss.
Example 2: referring to fig. 1 and 2, we present a specific method based on example 1:
Regarding step (1): 5000 drainage pipe network images are obtained in different scenes; the defect classes are three, namely non-inspection operation images, defective pipe network images, and defect-free pipe network images. The 5000 drainage pipe network images are each labeled with a defect class to form 5000 training samples, which compose the initial data set D_0.
Regarding step (2): train the ResNet network model with D_0 to obtain an initial model capable of recognizing the three defect classes; its feature space is H_0 and its loss function is loss_1.
Regarding step (3), pre-allocate space for the next learning:
The training samples of D_0 are divided into 3 class sets by defect class, class 1 to class 3, containing 1500, 1500, and 2000 training samples respectively. A batch of simulated features is generated for class 1 according to step (31), and a center value is calculated according to step (32); the same is done for class 2 and class 3, giving the simulated center values of classes 1 to 3 of D_0. Then, according to step (33), the initial model is adjusted using the 5000 training samples and all the simulated features; if 200 simulated features are generated for each class set, then 5000 + 200 × 3 = 5600 features participate in the adjustment of the initial model. After adjustment, the model M_0 is obtained, with feature space H_1.
Next, learn task t_1:
First, use the simulated center values of classes 1 to 3 of D_0 to select 200 training samples from each class set of D_1, 600 training samples in total, forming the selected data set D*_1. Then train M_0 with D*_1, which includes fitting D*_1 to H_1 while using D*_1 to pre-allocate space for task t_2, obtaining the pre-allocated space H_2 of t_2; after training completes, the model of task t_1 is obtained. Pre-allocating space for task t_2 with D*_1 follows the same method as step (3): generate simulated features, calculate center values, and adjust M_0 with loss_2.
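One incremental round of this example can be walked through end to end on toy features. All numbers, distributions, and variable names below are illustrative stand-ins for ResNet features, not the patent's data:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, Q = 0.6, 5

# Toy stand-ins for one class's features: D_0 (old task) and D_1 (new task).
old_feats = rng.normal(0.0, 1.0, size=(20, 8))
new_feats = rng.normal(0.5, 1.0, size=(30, 8))

# Steps (31)-(32): simulated features of the old class and their center value.
sims = []
for x in old_feats:
    x_n = old_feats[np.argmax(np.linalg.norm(old_feats - x, axis=1))]
    sims.append(lam * x + (1.0 - lam) * x_n)   # h(f(x))
u_c = np.mean(sims, axis=0)                    # simulated center value

# Step (41): the selected data set keeps the Q new samples nearest to u_c.
dists = np.linalg.norm(new_feats - u_c, axis=1)
selected = new_feats[np.argsort(dists)[:Q]]

# By construction, every kept sample is no farther from u_c than any dropped one.
assert selected.shape == (Q, 8)
assert np.sort(dists)[Q - 1] <= np.sort(dists)[Q:].min()
```

The selected samples would then drive the loss_3 training of step (42), while themselves seeding the next round's pre-allocated space.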
Subsequent tasks t_2, t_3, etc. are learned in the same way as task t_1.
Example 3: referring to fig. 1 and 2, based on example 2, we adjust the number of training samples in the data sets: D_0 contains 1500 non-inspection operation images, 1500 defective pipe network images, and 2000 defect-free pipe network images. Incremental learning is performed 5 times, each round adding 150, 150, and 200 images of the three classes respectively; the recognition accuracy rises from 90% to above 95%, and the recognition speed reaches about 24.5 ms/frame.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (7)

1. The drainage pipe network sample increment learning method based on space pre-learning and fitting is characterized by comprising the following steps of:
(1) Collect a large number of drainage pipe network images in different scenes, label the defect class of each drainage pipe network image, take the labeled images as training samples, and form an initial data set D_0 from all training samples; the defect classes in D_0 number n, where n is a positive integer;
Construct a task group t = {t_1, t_2, …, t_N} for incremental learning; the corresponding training data stream is D = {D_1, D_2, …, D_N}. The i-th task t_i corresponds to the data set D_i, and the model obtained after its learning finishes is M_i. Each D_i comprises a number of training samples whose class set is the same as that of D_0, with i = 1 to N, N being the total number of data sets;
(2) Build the basic learning task t_0: use the initial data set D_0 to train a ResNet network model, obtaining an initial model capable of recognizing the n defect classes; its feature space is H_0 and its loss function is loss_1;
(3) Pre-allocating space for the next learning, comprising steps (31)-(33);
(31) Dividing the training samples of D_0 into n class sets according to defect class, and selecting a plurality of training samples from each class set to generate simulation features, wherein a training sample x generates a simulation feature h(f(x)) through the following formula:
h(f(x)) = λ · f(x) + (1 − λ) · f(x_n)
wherein f(x) is the feature extracted from x by the ResNet network model, λ ∈ (0, 1) is the weight for generating the simulation feature, and x_n is the training sample farthest from x, in Euclidean distance, within the class set to which x belongs;
(32) Obtaining, according to step (31), a plurality of simulation features for each class set, and calculating a center value from all the simulation features of each class set as the simulated center value of that class set;
(33) Calculating the objective function loss_2 for model fine-tuning and space pre-allocation learning, and adjusting the initial model therewith to obtain the model M_0, whose feature space H_1 contains both the features of the D_0 samples and the simulation features; H_1 serves as the pre-allocated space for the next learning:
loss_2 = l_ce(C(f(x)), y) + γ · l_pf(C(h(f(x))), y)
wherein y is the label of the training sample x, C(f(x)) is the prediction classification result of the initial model on x, C(h(f(x))) is the prediction classification result of the initial model on the simulation feature h(f(x)), l_ce(C(f(x)), y) is the cross-entropy loss of C(f(x)) and y, l_pf(C(h(f(x))), y) is the cross-entropy loss of C(h(f(x))) and y, and γ ∈ (0, 1) is the weight in loss_2;
(4) Sequentially learning the tasks in the task group t, a model corresponding to each task being obtained after its learning, wherein the learning of task t_i comprises steps (41)-(42);
(41) For task t_i, according to the simulated center value of each class set of D_{i−1}, selecting a plurality of training samples from each class set of D_i to form a selected data set D_i;
(42) Training M_{i−1} with D_i, which comprises fitting D_i to the pre-allocated space obtained in task t_{i−1} while simultaneously using D_i to pre-allocate space for task t_{i+1}, yielding the pre-allocated space H_{i+1} of t_{i+1}; the model obtained after the learning is M_i.
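The simulation-feature construction of steps (31)-(32) can be sketched as follows, assuming the interpolation reading of the formula above. Here `feats` stands for the features f(x) of one class set; all names are illustrative, not from the claims.

```python
import numpy as np

def simulate_features(feats, lam=0.5):
    """Step (31): for each feature f(x) in one class set, interpolate toward
    the feature of the sample x_n farthest from x in Euclidean distance:
    h(f(x)) = lam * f(x) + (1 - lam) * f(x_n), lam in (0, 1)."""
    sims = []
    for fx in feats:
        d = np.linalg.norm(feats - fx, axis=1)   # distances within the class set
        fxn = feats[np.argmax(d)]                # farthest sample's feature
        sims.append(lam * fx + (1 - lam) * fxn)
    return np.stack(sims)

def simulated_center(sim_feats):
    """Step (32): the simulated center value of a class set is the mean of
    its simulation features."""
    return sim_feats.mean(axis=0)
```

Because λ ∈ (0, 1), each simulation feature lies strictly between a real feature and the most distant feature of the same class, so the simulated center drifts toward the interior of the class region reserved for future samples.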
2. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 1, wherein in step (1), the defect classes include, but are not limited to, non-detection job images, defective pipe network images, and defect-free pipe network images.
3. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 1, wherein the data set D_i is the sum of the new training samples continuously obtained over a period of time after task t_{i−1} is completed.
4. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 1, wherein in step (2), the ResNet network model consists of a feature extractor and a classifier: a training sample x is fed into the ResNet network model, the feature extractor extracts the feature f(x), and the classifier then outputs the prediction classification result C(f(x)); the loss function loss_1 of the ResNet network model is calculated by the following formula:
loss_1 = l_ce(C(f(x)), y)
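As a concrete reading of loss_1 and the loss_2 of step (33), a minimal NumPy sketch follows. The softmax form of the cross-entropy is a standard choice and an assumption here; the patent does not fix the exact implementation, and all function names are illustrative.

```python
import numpy as np

def cross_entropy(logits, label):
    """l_ce: softmax cross-entropy of one prediction vector against its label."""
    z = logits - logits.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax
    return -log_probs[label]

def loss2(logits_real, logits_sim, label, gamma=0.5):
    """loss_2 = l_ce(C(f(x)), y) + gamma * l_pf(C(h(f(x))), y):
    classify the real feature and the simulation feature against the same
    label y, weighting the simulated-feature term by gamma in (0, 1)."""
    return cross_entropy(logits_real, label) + gamma * cross_entropy(logits_sim, label)
```

The second term is what pre-allocates space: it forces the classifier to also place the simulation features h(f(x)) correctly, so the feature space H_1 already covers them before real future samples arrive.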
5. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 1, wherein in step (32) the center value is calculated by the following formula:
u_c = (1 / N_c) · Σ_{k=1}^{N_c} x_k
wherein c is a class set, x_k is the k-th training sample of c, u_c is the center value of c, and N_c is the total number of training samples in c.
6. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 4, wherein in step (41), selecting a plurality of training samples from each class set of D_i specifically comprises:
(a1) Calculating the distance between the simulated center value of one class set of D_{i−1} and each training sample in the corresponding class set of D_i, sorting the training samples by that distance from small to large, and selecting the first Q training samples;
(a2) According to the method of step (a1), using the simulated center values of the remaining class sets of D_{i−1}, selecting Q training samples from each corresponding class set of D_i, obtaining n × Q training samples in total, which form the selected data set D_i.
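Steps (a1)-(a2) are a per-class nearest-Q selection; a small sketch of one class's selection, with illustrative names:

```python
import numpy as np

def select_q(center, feats, q):
    """Step (a1): sort one class set of D_i by Euclidean distance to the
    simulated center value of the matching class set of D_{i-1}, and return
    the indices of the Q nearest training samples. Repeating this for every
    class set (step (a2)) yields the n*Q-sample selected data set."""
    d = np.linalg.norm(feats - center, axis=1)
    return np.argsort(d)[:q]
```

Selecting the samples closest to the previous simulated centers keeps the new training subset inside the region the earlier model already pre-allocated, which is what lets fitting proceed without rehearsing old data.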
7. The drainage pipe network sample incremental learning method based on space pre-learning and fitting according to claim 4, wherein in step (42), the objective function loss_3 for training M_{i−1} with D_i to obtain M_i is calculated by the following formula:
loss_3 = l_ce(C(f(x)), y) + ε · l_fit,  l_fit = ‖f(x) − u_c‖_2^2
wherein l_fit is the loss for fitting the pre-allocated space, ‖·‖_2 is the L2 norm, and ε is the weight of the pre-allocated-space fitting loss.
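A sketch of loss_3 under one plausible reading of l_fit, namely a squared-L2 pull of the extracted feature toward its class's simulated center in the pre-allocated space. The exact form of l_fit behind the formula image is an assumption, as are all names below.

```python
import numpy as np

def loss3(logits, label, feat, center, eps=0.1):
    """loss_3 = l_ce(C(f(x)), y) + eps * l_fit, with the assumed form
    l_fit = ||f(x) - u_c||_2^2, where u_c is the simulated center value,
    in the pre-allocated space, of the class to which x belongs."""
    z = logits - logits.max()                       # log-softmax cross-entropy
    ce = -(z - np.log(np.exp(z).sum()))[label]
    l_fit = np.sum((feat - center) ** 2)            # squared L2 norm
    return ce + eps * l_fit
```

The ε-weighted term anchors new features to the space pre-allocated in the previous task, trading plasticity (the cross-entropy term) against stability (the fitting term).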
CN202310559996.9A 2023-05-18 2023-05-18 Drainage pipe network sample increment learning method based on space pre-learning and fitting Active CN116306875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310559996.9A CN116306875B (en) 2023-05-18 2023-05-18 Drainage pipe network sample increment learning method based on space pre-learning and fitting

Publications (2)

Publication Number Publication Date
CN116306875A (en) 2023-06-23
CN116306875B (en) 2023-08-01

Family

ID=86785287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310559996.9A Active CN116306875B (en) 2023-05-18 2023-05-18 Drainage pipe network sample increment learning method based on space pre-learning and fitting

Country Status (1)

Country Link
CN (1) CN116306875B (en)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102525795A (en) * 2012-01-16 2012-07-04 Shenyang Ligong University Fast automatic positioning method of foot massaging robot
CN104331459A (en) * 2014-10-31 2015-02-04 Baidu Online Network Technology (Beijing) Co., Ltd. Online learning-based network resource recommendation method and device
CN107683460A (en) * 2015-05-05 2018-02-09 Kyndi Inc. Quanton representation for emulating quantum-like computation on classical processors
CN111199242A (en) * 2019-12-18 2020-05-26 Zhejiang University of Technology Image incremental learning method based on dynamic correction vector
CN111931807A (en) * 2020-06-24 2020-11-13 Zhejiang University Small sample class incremental learning method based on feature space combination
WO2020238882A1 (en) * 2019-05-31 2020-12-03 Northwestern Polytechnical University Artificial intelligence-assisted printed electronics technology-based self-guided optimization method
CN112115967A (en) * 2020-08-06 2020-12-22 Sun Yat-sen University Image incremental learning method based on data protection
US20210241112A1 (en) * 2020-02-05 2021-08-05 Samsung Electronics Co., Ltd. Neural network update method, classification method and electronic device
CN113344144A (en) * 2021-07-29 2021-09-03 National University of Defense Technology Semi-supervised small sample class incremental learning method and device and classification identification method
CN113361635A (en) * 2021-06-30 2021-09-07 Chengdu University of Technology Small sample class incremental learning method based on representation prediction
US20210319265A1 (en) * 2020-11-02 2021-10-14 Zhengzhou University Method for segmentation of underground drainage pipeline defects based on full convolutional neural network
WO2022042002A1 (en) * 2020-08-31 2022-03-03 Huawei Technologies Co., Ltd. Training method for semi-supervised learning model, image processing method, and device
CN114386482A (en) * 2021-12-15 2022-04-22 Tongji University Image classification system and method based on semi-supervised incremental learning
CN114460936A (en) * 2022-01-13 2022-05-10 Huazhong University of Science and Technology Off-line incremental learning-based automatic driving automobile path planning method and system
CN114529752A (en) * 2022-01-04 2022-05-24 Chengdu University of Technology Sample incremental learning method based on deep neural network
US20220180975A1 (en) * 2019-01-28 2022-06-09 The Broad Institute, Inc. Methods and systems for determining gene expression profiles and cell identities from multi-omic imaging data
CN115201777A (en) * 2022-07-26 2022-10-18 University of Electronic Science and Technology of China Radar one-dimensional image target incremental identification method based on embedding reservation
CN115249313A (en) * 2022-08-11 2022-10-28 Northwestern Polytechnical University Image classification method based on meta-module fusion incremental learning
CN115438755A (en) * 2022-11-08 2022-12-06 Tencent Technology (Shenzhen) Co., Ltd. Incremental training method and device of classification model and computer equipment
CN115496983A (en) * 2022-09-23 2022-12-20 Xi'an Jiaotong University Class incremental learning method and system guided by semantic knowledge
CN115828100A (en) * 2022-12-08 2023-03-21 Shanghai Chengdian Fuzhi Technology Co., Ltd. Mobile phone radiation source spectrogram category incremental learning method based on deep neural network
CN115879533A (en) * 2022-12-02 2023-03-31 Xi'an Jiaotong University Analog incremental learning method and system based on analog learning
CN115937524A (en) * 2022-12-30 2023-04-07 Tianjin University Class-incremental semantic segmentation method based on dynamic knowledge distillation
CN116071592A (en) * 2023-02-16 2023-05-05 China Agricultural University Corn seed variety identification method and system based on hyperspectral incremental updating

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ALEJO ELEUTERIO ROBERTO et al.: "A Class-Incremental Learning Method Based on Preserving the Learned Feature Space for EEG-Based Emotion Recognition", Mathematics, vol. 10, no. 4, pages 598-609 *
GUANGLE YAO et al.: "Few-shot class-incremental learning based on representation enhancement", Journal of Electronic Imaging, vol. 31, no. 4, pages 1-14 *
JUNTAO ZHU et al.: "Feature Distribution Distillation-Based Few Shot Class Incremental Learning", 2022 The 5th International Conference on Pattern Recognition and Artificial Intelligence, pages 108-113 *
YAO GUANGLE et al.: "Few-shot class-incremental learning based on feature distribution learning", Computer Engineering and Applications, pages 1-9 *
ZHU FEI et al.: "Research progress and performance evaluation of class-incremental learning", Acta Automatica Sinica, vol. 49, no. 3, pages 1-26 *
LI XUE et al.: "A survey of incremental learning techniques for object recognition in intelligent interaction", CAAI Transactions on Intelligent Systems, vol. 12, no. 2, pages 140-149 *

Also Published As

Publication number Publication date
CN116306875B (en) 2023-08-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant