CN117456441A - Monitoring method and system for rust area expansion by combining change area identification - Google Patents

Monitoring method and system for rust area expansion by combining change area identification

Info

Publication number
CN117456441A
CN117456441A (application CN202311248212.7A)
Authority
CN
China
Prior art keywords
image
area
scene
change
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311248212.7A
Other languages
Chinese (zh)
Inventor
叶楠
陈俊琪
廖以随
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinqianmao Technology Co., Ltd.
Original Assignee
Jinqianmao Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinqianmao Technology Co ltd filed Critical Jinqianmao Technology Co ltd
Priority to CN202311248212.7A priority Critical patent/CN117456441A/en
Publication of CN117456441A publication Critical patent/CN117456441A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/32 Normalisation of the pattern dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of neural network model applications, and in particular to a method and system for monitoring rust area expansion in combination with change area identification. An image to be detected and a reference image are acquired, the reference image being an image of the initial state of the shooting area of the image to be detected; the image to be detected and the reference image are analyzed with a neural network model, an image to be detected in which a change has occurred is determined to be a target image, and the change area is marked on the target image; the target image is cropped according to the change area to obtain a target area image; and the target area image is classified with a classification model to judge whether it is a rust area. The invention monitors the expansion trend of rust areas in key locations through a high-definition camera, achieving high recognition accuracy and uninterrupted monitoring without shift changes, thereby avoiding human negligence at low cost.

Description

Monitoring method and system for rust area expansion by combining change area identification
The present application is a divisional application of the invention patent with a filing date of August 14, 2023, application number 202311013671.7, and the title "Method and system for monitoring rust area expansion based on a neural network model" as its parent.
Technical Field
The invention relates to the technical field of neural network model applications, and in particular to a monitoring method and system for rust area expansion combined with change area identification.
Background
In some important places, such as stations, workshops, laboratories, and airports, rust corrosion in critical areas can become a safety hazard if its onset and area expansion are not discovered in time. The need to detect rust and spot-corrosion expansion in key areas is therefore widespread.
In the prior art, images are generally collected by a camera and then monitored and analyzed manually. However, manual monitoring requires substantial and continuously invested labor cost. Alternatively, image comparison can be used, comparing the original image with the changed image to be detected to find the change area, but this method is affected by environmental factors such as illumination changes, resulting in a certain rate of false detection. Therefore, how to use automatic monitoring equipment such as high-definition cameras for fixed-point monitoring of the rust areas of key regions, realize monitoring of rust area expansion, and raise alarms in time to eliminate potential safety hazards is a technical problem to be solved urgently.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method and system for monitoring rust area expansion combined with change area identification, which use a camera to monitor rust area expansion at low cost and with high accuracy.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method for monitoring rust area expansion based on a neural network model comprises the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
s3, extracting the image of the target image according to the change area to obtain a target area image;
and S4, classifying the target area image based on a classification model, and judging whether the target area image is a rusted area or not.
A method for monitoring rust area expansion combined with change area identification comprises the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
the step S2 includes the steps of:
s21, scene knowledge is collected in advance to carry out model training, and a scene model is obtained based on a change detection network ChangeNet;
the change detection network in the step S21 is specifically a change detection network based on a twin neural network and a full convolution neural network;
the scene model implementation steps include:
s211, acquiring a residual network as a pre-training model by a migration learning method, wherein the components of residual blocks of the residual network comprise a convolution layer, batch regularized BN and an activation function ReLU;
s212, extracting features from the reference image and the image to be detected through a twin neural network, and outputting variable positioning information of different layers by combining convolution so as to capture rough information and detailed information in the image, wherein the twin neural network is based on the pre-training model;
s213, integrating the extracted features by using a full convolutional neural network FCN, adding and classifying by using the same normalized exponential function softmax to obtain a change region;
s22, registering and aligning the image to be detected and the reference image:
extracting SIFT feature points of the image to be detected and the reference image;
finding matched characteristic point pairs by carrying out similarity measurement;
obtaining a space coordinate transformation parameter through the matched characteristic point pairs;
image registration is carried out by the coordinate transformation parameters, so that pixel point coordinates of two images can be in one-to-one correspondence, and registration alignment is realized;
s23, carrying out change detection on the image to be detected after registration alignment operation is completed by using a scene model, and outputting coordinate position information of a change area;
s3, extracting the image of the target image according to the change area to obtain a target area image;
s4, classifying the target area image based on a classification model, and judging whether the target area image is a rust area or not;
s5, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to the appointed user side.
In order to solve the technical problems, the invention adopts another technical scheme that:
the monitoring system based on the neural network model for the rust area expansion comprises a monitoring terminal and at least one monitoring device, wherein the monitoring terminal comprises a processor, a memory and a computer program stored in the memory, and when the processor executes the computer program, the processor receives an image to be detected acquired by the monitoring device and realizes the steps in the monitoring method based on the neural network model for the rust area expansion.
The monitoring system for rust area expansion combined with change area identification comprises a monitoring terminal and at least one monitoring device, wherein the monitoring terminal comprises a processor, a memory and a computer program stored in the memory, and when the processor executes the computer program, the processor receives an image to be detected acquired by the monitoring device and realizes the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
the step S2 includes the steps of:
s21, scene knowledge is collected in advance to carry out model training, and a scene model is obtained based on a change detection network ChangeNet;
the change detection network in the step S21 is specifically a change detection network based on a twin neural network and a full convolution neural network;
the scene model implementation steps include:
s211, acquiring a residual network as a pre-training model by a migration learning method, wherein the components of residual blocks of the residual network comprise a convolution layer, batch regularized BN and an activation function ReLU;
s212, extracting features from the reference image and the image to be detected through a twin neural network, and outputting variable positioning information of different layers by combining convolution so as to capture rough information and detailed information in the image, wherein the twin neural network is based on the pre-training model;
s213, integrating the extracted features by using a full convolutional neural network FCN, adding and classifying by using the same normalized exponential function softmax to obtain a change region;
s22, registering and aligning the image to be detected and the reference image:
extracting SIFT feature points of the image to be detected and the reference image;
finding matched characteristic point pairs by carrying out similarity measurement;
obtaining a space coordinate transformation parameter through the matched characteristic point pairs;
image registration is carried out by the coordinate transformation parameters, so that pixel point coordinates of two images can be in one-to-one correspondence, and registration alignment is realized;
s23, carrying out change detection on the image to be detected after registration alignment operation is completed by using a scene model, and outputting coordinate position information of a change area;
s3, extracting the image of the target image according to the change area to obtain a target area image;
s4, classifying the target area image based on a classification model, and judging whether the target area image is a rust area or not;
s5, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to the appointed user side.
The invention has the beneficial effects that: according to the monitoring method and system for rust area expansion based on the neural network model, the image to be detected and the reference image are acquired, a pre-trained neural network model is used to obtain the position and category of the change area, and it is judged whether rust area expansion has occurred, thereby monitoring the expansion trend of rust areas in key locations through a high-definition camera. The recognition accuracy is high, monitoring can proceed uninterruptedly without shift changes, human negligence is avoided, and the cost is low.
Drawings
FIG. 1 is a flow chart of a method for monitoring rust area expansion based on a neural network model according to an embodiment of the invention;
FIG. 2 is a block diagram of a monitoring system for rust area expansion based on a neural network model according to an embodiment of the present invention;
description of the reference numerals:
1. a monitoring system for rust area expansion based on a neural network model; 2. monitoring a terminal; 3. a processor; 4. a memory; 5. and monitoring the equipment.
Detailed Description
In order to describe the technical contents, the achieved objects and effects of the present invention in detail, the following description will be made with reference to the embodiments in conjunction with the accompanying drawings.
Referring to fig. 1, a method for monitoring rust area expansion based on a neural network model includes the steps of:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
s3, extracting the image of the target image according to the change area to obtain a target area image;
and S4, classifying the target area image based on a classification model, and judging whether the target area image is a rusted area or not.
From the above description, the beneficial effects of the invention are as follows: the position and category of the change area are obtained by means of a pre-trained neural network model, and it is judged whether the change area is a rust area, thereby realizing detection of rust area expansion. The expansion trend of rust areas in key locations is monitored through a high-definition camera, with high recognition accuracy, uninterrupted monitoring, no need for shift changes, avoidance of human negligence, and low cost.
Further, the step S2 includes the steps of:
s21, acquiring a large amount of scene knowledge in advance to perform model training, and acquiring a scene model based on a change Net of the change detection network;
s22, registering and aligning the image to be detected and the reference image;
s23, carrying out change detection on the image to be detected after registration and alignment operation is completed by using a scene model, and outputting coordinate position information of a change area.
From the above description, it can be seen that the scene model, which is based on the change detection network, is used to detect the change region of the image, and the image to be detected and the reference image are registered before detection, so that their change region can be determined more accurately.
Further, the change detection network in step S21 is specifically a change detection network based on a twin neural network and a full convolutional neural network;
the scene model implementation steps include:
s211, acquiring a residual network as a pre-training model by a migration learning method, wherein the components of a residual block of the residual network comprise a convolution layer, skin regularized BN and an activation function ReLU;
s212, extracting features from the reference image and the image to be detected through a twin neural network, and outputting variable positioning information with different degrees by combining convolution so as to capture rough information and detailed information in the image, wherein the twin neural network is based on the pre-training model;
s213, integrating the extracted features by using a full convolutional neural network FCN, adding and classifying by using the same normalized exponential function softmax to obtain a change region.
As can be seen from the above description, if two separate neural networks are used to extract features of the same attribute, the extracted features are unlikely to lie in one distribution domain. The invention adopts a change detection network based on a twin (Siamese) neural network and a fully convolutional network, so that features of the two input pictures are extracted in the same distribution domain, and the similarity of the two input pictures can be judged better.
Further, the step S23 further includes the steps of:
s231, saving scene data with image change;
the scene data are pairs of the image to be detected and the reference image between which a change has been detected;
step S2 further comprises the steps of:
s24, updating scene knowledge based on the saved scene data to obtain a scene model with higher precision.
From the above description, it can be seen that, at each use, the scene data with image changes is saved to optimize and update the scene model, iterating toward a scene model with higher precision.
Further, the step S24 includes the steps of:
s241, establishing a scene data accumulation module;
s242, the scene change detection module judges whether the stored scene data is changed, if the scene is changed, S243 and S244 are executed, otherwise, the next scene detection is waited, and S242 is executed;
s243, a task model tuning module uses the new scene data to tune the task model, so that the accuracy of the task model in a new scene is improved;
s244, when the scene data in the scene data accumulation module reaches a preset increment threshold, parameter tuning is performed by utilizing a change detection network, so that the model extracts scene knowledge in different scenes to obtain a scene model with higher precision and stronger generalization performance, and continuous scene adaptation is performed on continuously changed scenes.
From the above description, it can be seen that, after the scene data has accumulated to the preset increment threshold, the scene model is optimized and updated, and parameter tuning is performed using the change detection network, thereby improving the accuracy and generalization of the scene model across different scenes.
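The accumulate-then-tune loop of S241-S244 can be sketched as follows. The class and callback names (`SceneDataAccumulator`, `retrain`) are illustrative assumptions rather than names from the invention; in practice the callback would invoke the ChangeNet parameter-tuning routine:

```python
class SceneDataAccumulator:
    """S241: scene data accumulation module. Saved changed-scene pairs are
    accumulated, and once the preset increment threshold is reached (S244),
    a retraining callback performs parameter tuning on the scene model."""

    def __init__(self, increment_threshold, retrain):
        self.increment_threshold = increment_threshold
        self.retrain = retrain     # callback standing in for ChangeNet tuning
        self.scene_data = []       # accumulated (reference, detected) pairs

    def add(self, reference_image, detected_image):
        """S231/S242: save a scene pair in which a change was detected."""
        self.scene_data.append((reference_image, detected_image))
        if len(self.scene_data) >= self.increment_threshold:
            self.retrain(self.scene_data)   # S244: tune on the new scene data
            self.scene_data = []            # begin a new accumulation cycle

calls = []
acc = SceneDataAccumulator(increment_threshold=3,
                           retrain=lambda data: calls.append(len(data)))
for i in range(7):                          # seven changed scenes arrive
    acc.add(f"ref{i}", f"det{i}")
```

With an increment threshold of 3, the seven saved scene pairs above trigger tuning twice, leaving one pair in the new accumulation cycle.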
Further, step S4 includes the steps of:
s41, collecting a large number of target area images in advance for model training, extracting feature vectors by utilizing a front convolution layer of a classification network, and carrying out model training by utilizing a classification model;
s42, performing saliency detection on the target area image to be detected, and dividing a foreground area;
s43, extracting feature vectors from a front convolution layer of the segmented foreground region input classification network;
s44, classifying the extracted feature vectors by using a trained classification model.
From the above description, the classification model is trained in advance; in use, the foreground region is segmented through saliency detection and then input into the classification network for classification.
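As a minimal illustration of steps S41-S44, assuming the feature vectors have already been extracted by the front convolution layers, the classification model can be stood in for by a simple nearest-centroid classifier in numpy; the invention does not specify this particular classifier, so the sketch is only a placeholder for the trained classification model:

```python
import numpy as np

def train_centroids(features, labels):
    """S41 (stand-in): fit one centroid per class from training feature vectors."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(centroids, feature):
    """S44 (stand-in): assign a feature vector to the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

# Toy feature vectors: "rust" features cluster near (1, 1), "other" near (0, 0).
feats = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
         np.array([0.0, 0.1]), np.array([0.1, 0.0])]
labels = ["rust", "rust", "other", "other"]
cents = train_centroids(feats, labels)
pred = classify(cents, np.array([0.95, 1.05]))    # lands in the "rust" cluster
```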
Further, step S42 includes the steps of:
s421, generating a saliency map, and generating a pixel-level saliency value based on the color value histogram distribution of the input image, wherein the saliency value of each pixel point is a measure between the contrast of the pixel point and the contrast of all the remaining image pixels, and the measure formula is as follows:
wherein D (I k ,I i ) Representing the color distance of two pixel points, I represents the pixel point set, I k And I i Representing the kth pixel point and the ith pixel point in the pixel point set;
s422, carrying out weight region combination based on Gaussian kernel function weight generation to obtain an improved significance calculation formula:
wherein sigma s For a preset value, controlling the influence of the space weight, D s (r k ,r i ) Representing the Euclidean distance between the centers of two regions, r k And r i Representing two regions, w () represents the weight of a region, D r () Representing a color distance between two regions;
s423, performing binarization segmentation on the image according to the calculated saliency map and the set threshold value to obtain the foreground region.
As is clear from the above description, the foreground region is segmented by performing the saliency calculation through the above steps.
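A simplified grayscale version of the pixel-level global-contrast saliency of S421 and the binarization of S423 can be sketched in numpy, using absolute intensity difference as the color distance; the actual invention works on color histograms, so this is an illustrative reduction:

```python
import numpy as np

def pixel_saliency(gray):
    """S421 (simplified): the saliency of a pixel is the sum of absolute
    intensity distances to every other pixel; computing it per gray level
    via the 256-bin histogram costs O(N + 256^2) instead of O(N^2)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    dist = np.abs(levels[:, None] - levels[None, :])  # D(I_k, I_i) per level pair
    level_sal = dist @ hist                           # saliency of each gray level
    return level_sal[gray]

def segment_foreground(gray, thresh_ratio=0.5):
    """S423: binarize the saliency map at a ratio of its maximum value."""
    sal = pixel_saliency(gray)
    return sal >= thresh_ratio * sal.max()

img = np.zeros((4, 4), dtype=np.int64)
img[1, 2] = 255                  # a single bright "rust-like" spot
mask = segment_foreground(img)   # True only at the salient spot
```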
Further, the method further comprises the steps of:
s5, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to the appointed user side.
From the above description, by setting a threshold, the expansion trend of rust is judged from the preset threshold and the changing size of the rust area, so as to determine whether an alarm needs to be issued.
Further, step S5 includes the steps of:
s51, acquiring the actual area of the image to be detected through laser ranging and camera parameters;
s52, calculating the pixel ratio of the number of pixels in the change area to the image to be detected;
s53, calculating the actual area of the rust spot area according to the pixel ratio and the actual area of the image to be detected;
s54, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to a designated user side.
From the above description, the actual area of the change area, i.e. the rust area, is determined from the actual area of the image to be detected and the pixel ratio of the change area, so as to determine whether an alarm is required.
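The area computation of S52-S54 is straightforward arithmetic; the function names below are illustrative, and the actual image area would come from laser ranging and camera parameters as in S51:

```python
def rust_area(actual_image_area_m2, changed_pixels, image_width, image_height):
    """S52-S53: actual rust area = image area * (changed pixels / total pixels)."""
    pixel_ratio = changed_pixels / (image_width * image_height)
    return actual_image_area_m2 * pixel_ratio

def should_alarm(actual_image_area_m2, changed_pixels, width, height, threshold_m2):
    """S54: an alarm is pushed when the rust area exceeds the set threshold."""
    return rust_area(actual_image_area_m2, changed_pixels, width, height) > threshold_m2

# A 1920x1080 frame covering 2.0 m^2 with 51840 changed pixels -> 0.05 m^2 of rust.
area = rust_area(2.0, 51840, 1920, 1080)
```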
Referring to fig. 2, a monitoring system for enlarging a rust area based on a neural network model includes a monitoring terminal and at least one monitoring device, wherein the monitoring terminal includes a processor, a memory and a computer program stored in the memory, and when the processor executes the computer program, the processor receives an image to be detected collected by the monitoring device, and realizes the steps in the monitoring method for enlarging the rust area based on the neural network model.
The method and the system for monitoring the rust area expansion based on the neural network model are suitable for automatic rust area expansion monitoring.
Referring to fig. 1, a first embodiment of the present invention is as follows:
a method for monitoring rust area expansion based on a neural network model comprises the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
the step S2 includes the steps of:
s21, acquiring a large amount of scene knowledge in advance to perform model training, and acquiring a scene model based on a change Net of the change detection network;
the change detection network in the step S21 is specifically a change detection network based on a twin neural network and a full convolution neural network;
the scene model implementation steps include:
s211, obtaining a residual network as a pre-training model through a migration learning method, wherein the composition of a residual block of the residual network comprises a convolution layer, skin regularization BN and an activation function ReLU.
In this embodiment, resNet50 is utilized as a pre-training model.
S212, extracting features from the reference image and the image to be detected through a twin neural network based on the pre-training model, and combining convolutions to output change localization information at different levels, so as to capture coarse information and detailed information in the images.
In this embodiment, the two branches CNN1 and CNN2 of the twin neural network extract features from the reference image and the image to be detected respectively, and convolutions are combined to output change localization information at different levels, capturing coarse and detailed information in the images.
When features of the same attribute need to be extracted from two pictures, using two separate neural networks for feature extraction is likely to yield features that do not lie in one distribution domain; in that case, extracting features with a single network and then comparing them should be considered. The twin neural network therefore extracts features of the two input pictures in the same distribution domain, at which point the similarity of the two input pictures can be judged.
Two images are input to the network, and features are extracted from each using the neural network. The network contains a ResNet structure, i.e. a base network composed of several modules, each comprising convolution layers and pooling layers that extract image features at different scales. After the convolution and pooling layers, each module has a fully convolutional layer with a 1x1 kernel, whose main function is to adjust the number of channels so that the feature channel counts at different scales (in different modules) are consistent. A deconvolution layer follows the fully convolutional layer, so that the feature dimensions finally agree with the size of the input image. The modules of the base network contain features at different scales of the image: earlier layers extract detail information while later layers extract representative global information, which is why the process is described herein as combining convolutions to output change localization information of different layers, capturing coarse and fine information in the image.
S213, integrating the extracted features with a full convolutional neural network FCN, summing them, and classifying with the normalized exponential function softmax to obtain the change region.
On the base network the image yields features at different scales with different dimensions. After passing through the full convolution layer with a 1x1 kernel and the deconvolution layer, these become features of identical dimensions, which must then be normalized, so they are processed by a softmax function. The raw output of a neural network is not a probability value; it is essentially a vector obtained by complex weighting and nonlinear processing of the input. Softmax is therefore used to normalize, for each pixel pair (image to be detected versus reference image), the probability that the pixel has changed.
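The per-pixel softmax normalization can be sketched as follows (the two-channel "unchanged / changed" logits here are hypothetical values, not output of the actual network):

```python
import numpy as np

def pixel_softmax(logits):
    """Normalize per-pixel logits of shape (C, H, W) into probabilities.
    Subtracting the per-pixel max keeps the exponentials numerically stable."""
    shifted = logits - logits.max(axis=0, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=0, keepdims=True)

# Hypothetical network output: channel 0 = "unchanged", channel 1 = "changed".
logits = np.array([[[2.0, 0.1]],
                   [[0.5, 3.0]]])        # shape (2, 1, 2): 2 classes, 1x2 image
probs = pixel_softmax(logits)
change_mask = probs[1] > 0.5             # per-pixel change decision
```

Each pixel's two probabilities sum to one, and thresholding the "changed" channel yields the binary change region.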
S22, registering and aligning the image to be detected and the reference image, so that the pixel points of the two images correspond one to one as closely as possible.
Because the camera may shake or be displaced during monitoring, image registration is needed to improve the monitoring effect. The registration proceeds as follows:
1. Extract SIFT feature points from the image to be detected and the reference image;
2. Find matched feature point pairs (at least three) through similarity measurement;
3. Obtain the spatial coordinate transformation parameters from the matched feature point pairs;
4. Carry out image registration with the coordinate transformation parameters, so that the pixel coordinates of the two images correspond one to one.
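Step 3 can be sketched as a least-squares fit of an affine transform to the matched point pairs, which is one reason at least three pairs are required. The SIFT extraction and matching of steps 1–2 are assumed to come from a library such as OpenCV; the point pairs below are hypothetical:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src -> dst.
    src_pts, dst_pts: arrays of shape (N, 2) with N >= 3 matched pairs."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])       # homogeneous coords (N, 3)
    # Solve A @ M ~= dst for the 3x2 parameter matrix M.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M

def warp_points(pts, M):
    """Apply the fitted transform to pixel coordinates."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Hypothetical matched pairs: the camera drifted by (+2, +1) pixels.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([2.0, 1.0])
M = fit_affine(src, dst)
aligned = warp_points(src, M)   # src coordinates mapped into dst's frame
```

With the parameters M recovered, every pixel coordinate of one image can be mapped into the other's frame, giving the one-to-one correspondence described in step 4.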
S23, carrying out change detection with the scene model on the image to be detected once the registration and alignment operation is completed, and outputting the coordinate position information of the change area.
The step S23 further includes the steps of:
s231, saving scene data with image change;
The scene data are those pairs of the image to be detected and the reference image between which a change was found.
S24, updating scene knowledge based on the stored scene data to obtain a scene model with higher precision;
the step S24 includes the steps of:
s241, establishing a scene data accumulation module;
s242, the scene change detection module judges whether the saved scene data contains a change; if the scene has changed, S243 and S244 are executed, otherwise the system waits for the next scene detection and executes S242 again;
s243, a task model tuning module tunes the task model with the new scene data, improving the accuracy of the task model in the new scene;
s244, when the scene data in the scene data accumulation module reaches a preset increment threshold, parameter tuning is performed with the change detection network so that the model extracts scene knowledge from the different scenes, yielding a scene model with higher accuracy and stronger generalization and achieving continuous adaptation to continuously changing scenes.
Model training has two key factors: the samples and the hyper-parameters. The optimization scheme is as follows:
1. First design a hyper-parameter table that sets the range of each model parameter, such as the learning rate, the number of iterations, and the batch size;
2. Each time data with a new scene change is detected, add it to the scene data accumulation module;
3. When the scene change data in the scene data accumulation module reaches a certain increment, combine the new data and the original data for training, train multiple times according to the hyper-parameter table, and select the model with the highest validation-set accuracy for the update.
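The tuning loop above can be sketched as a grid search over the hyper-parameter table. The table values and the `train_and_validate` stub below are hypothetical placeholders for the real change-network training and validation scoring:

```python
import itertools

# 1. Hyper-parameter table: the allowed values of each parameter.
PARAM_TABLE = {
    "learning_rate": [1e-3, 1e-4],
    "iterations": [100, 200],
    "batch_size": [8, 16],
}

def train_and_validate(params, data):
    """Stand-in for training on the combined old+new data and scoring on
    the validation set; a real system would train the change network here."""
    # Hypothetical score surface: favors more iterations and batch size 16.
    return params["iterations"] / 200 - abs(params["batch_size"] - 16) * 0.01

def tune(data):
    """Train once per row of the table; keep the best validation score."""
    best_params, best_score = None, float("-inf")
    keys = list(PARAM_TABLE)
    for values in itertools.product(*(PARAM_TABLE[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_validate(params, data)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = tune(data=None)   # the winning row becomes the updated model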
And S3, extracting the image of the target image according to the change area to obtain a target area image.
S4, determining whether the scene change category of the target image is a target category according to the target area image.
In this embodiment, the target area image is classified based on a classification model, and whether the target area image is a rusted area is determined.
In this embodiment, the changed target areas are divided into two categories: changes belonging to a rust area, and all other changes. Features of the change area are extracted and input into a pre-trained SVM classifier to judge whether the change area belongs to a rust area.
Step S4 includes the steps of:
s41, collecting a large number of target area images in advance for model training, extracting feature vectors with the front convolution layers of the classification network, and training the classification model on them.
In this embodiment, the feature vectors are extracted with the front convolution layers of the classification network MobileNet-v3, and model training is performed with an SVM classifier.
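At inference time, a linear SVM's decision rule on the extracted feature vectors reduces to a signed dot product. The weights, bias, and 3-dimensional feature vectors below are hypothetical; in practice the parameters would come from training an SVM on the MobileNet-v3 features:

```python
import numpy as np

def svm_predict(features, w, b):
    """Linear SVM decision: sign(w . x + b) per feature vector.
    Returns 1 for 'rust area' and 0 for 'other change'."""
    scores = features @ w + b
    return (scores > 0).astype(int)

# Hypothetical trained parameters and extracted feature vectors.
w = np.array([1.0, -0.5, 0.25])
b = -0.1
feats = np.array([
    [1.0, 0.2, 0.4],   # positive score -> classified as rust
    [0.0, 1.0, 0.0],   # negative score -> classified as other change
])
labels = svm_predict(feats, w, b)
```

A kernel SVM would replace the dot product with kernel evaluations against the support vectors, but the two-class decision structure is the same.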
S42, performing saliency detection on the target area image to be detected, and dividing a foreground area;
step S42 includes the steps of:
s421, generating a saliency map: a pixel-level saliency value is generated based on the color-value histogram distribution of the input image, where the saliency value of each pixel point is a measure of the contrast between that pixel and all the remaining image pixels. The measure formula is:

S(I_k) = Σ_{I_i ∈ I} D(I_k, I_i)

wherein D(I_k, I_i) represents the color distance of two pixel points, I represents the pixel point set, and I_k and I_i represent the k-th pixel point and the i-th pixel point in the pixel point set.
It can be seen from the formula that pixels with the same value obtain the same saliency, and that a pixel whose color contrasts with a larger number of other pixels obtains a larger saliency value.
The color distance is calculated as follows. The color distance refers to the distance between the colors of two pixel points:

D(C1, C2) = sqrt((C1_R − C2_R)² + (C1_G − C2_G)² + (C1_B − C2_B)²)

wherein C1 and C2 represent color 1 and color 2, C1_R represents the R channel of color 1, and R, G, B denote the three color channels.
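A direct (unoptimized) numpy sketch of the pixel-level saliency and the RGB color distance under the formulas above might look like this; the tiny image is hypothetical, and real implementations accelerate the double sum with the color histogram:

```python
import numpy as np

def color_distance(c1, c2):
    """Euclidean distance between two RGB colors."""
    return float(np.sqrt(np.sum((c1.astype(float) - c2.astype(float)) ** 2)))

def pixel_saliency(img):
    """S(I_k) = sum over all pixels I_i of D(I_k, I_i).
    img: (H, W, 3) array; returns an (H, W) saliency map."""
    pixels = img.reshape(-1, 3).astype(float)
    # Pairwise color distances between every pair of pixels.
    d = np.sqrt(((pixels[:, None, :] - pixels[None, :, :]) ** 2).sum(-1))
    return d.sum(axis=1).reshape(img.shape[:2])

# Tiny hypothetical image: three identical dark pixels, one bright outlier.
img = np.array([[[0, 0, 0], [0, 0, 0]],
                [[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
sal = pixel_saliency(img)
```

As the text notes, the three identical dark pixels receive identical saliency, while the outlier, which contrasts with all of them, receives the largest value.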
S422, carrying out weighted region merging based on Gaussian kernel function weights to obtain the improved saliency calculation formula:

S(r_k) = Σ_{r_i ≠ r_k} exp(−D_s(r_k, r_i) / σ_s²) · w(r_i) · D_r(r_k, r_i)

wherein σ_s is a preset value controlling the influence of the spatial weight, D_s(r_k, r_i) represents the Euclidean distance between the centers of the two regions, r_k and r_i represent two regions, w(r_i) represents the weight of region r_i, and D_r(r_k, r_i) represents the color distance between the two regions;
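The region-level formula can be sketched as follows; the region centers, mean colors, and weights are hypothetical, and the segmentation of the image into regions is assumed to have been done beforehand:

```python
import numpy as np

def region_saliency(centers, colors, weights, sigma_s):
    """S(r_k) = sum_{i != k} exp(-Ds(r_k, r_i)/sigma_s^2) * w(r_i) * Dr(r_k, r_i)
    centers: (N, 2) region centers; colors: (N, 3) mean region colors;
    weights: (N,) region weights."""
    n = len(centers)
    sal = np.zeros(n)
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            ds = np.linalg.norm(centers[k] - centers[i])   # spatial distance
            dr = np.linalg.norm(colors[k] - colors[i])     # color distance
            sal[k] += np.exp(-ds / sigma_s ** 2) * weights[i] * dr
    return sal

# Two hypothetical regions: centers 5 units apart, strongly contrasting colors.
centers = np.array([[0.0, 0.0], [3.0, 4.0]])
colors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
weights = np.array([4.0, 1.0])
sal = region_saliency(centers, colors, weights, sigma_s=2.0)
```

The Gaussian factor makes nearby regions contribute more than distant ones, and the region weight lets a large contrasting neighbor raise a region's saliency more than a small one, which is what the spatial-weight term in the formula encodes.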
s423, performing binarization segmentation on the image according to the calculated saliency map and a set threshold value to obtain the foreground region;
s43, inputting the segmented foreground region into a front convolution layer of a classification network to extract feature vectors.
In this embodiment, the segmented foreground region is input into a MobileNet-v3 pre-convolution layer to extract feature vectors.
S44, classifying the extracted feature vectors by using a trained classification model SVM.
S5, judging whether the area of the rust spot area exceeds a set threshold value, if so, pushing alarm information of rust spot trend change to a designated user side;
step S5 includes the steps of:
s51, acquiring the actual area of the image to be detected through laser ranging and camera parameters;
s52, calculating the pixel ratio of the number of pixels in the change area to the image to be detected;
s53, calculating the actual area of the rust spot area according to the pixel ratio and the actual area of the image to be detected;
s54, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to a designated user side.
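Steps S51 to S54 reduce to simple proportional arithmetic; a sketch with hypothetical numbers (frame coverage, resolution, and threshold are illustrative, not values from the embodiment):

```python
def rust_area(image_area_m2, change_pixels, total_pixels, threshold_m2):
    """Scale the changed-pixel count to physical area and test the threshold.
    image_area_m2: real-world area covered by the frame, obtained from laser
    ranging and the camera parameters (S51). Returns (area in m^2, alarm)."""
    pixel_ratio = change_pixels / total_pixels   # S52: pixel ratio
    area = pixel_ratio * image_area_m2           # S53: actual rust area
    return area, area > threshold_m2             # S54: threshold judgment

# Hypothetical: the frame covers 2.0 m^2 at 1920x1080, with 103,680 changed px.
area, alarm = rust_area(2.0, 103_680, 1920 * 1080, threshold_m2=0.05)
```

When the alarm flag is set, the system would push the rust-trend alarm to the designated user side as described in S54.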
Referring to fig. 2, a second embodiment of the present invention is as follows:
The monitoring system 1 for rust area expansion based on a neural network model comprises a monitoring terminal 2 and at least one monitoring device 5. The monitoring terminal comprises a processor 3, a memory 4 and a computer program stored in the memory 4; when the processor 3 executes the computer program, it receives the image to be detected collected by the monitoring device 5 and implements the steps of the above monitoring method for rust area expansion based on a neural network model.
In summary, in the method and system for monitoring rust area expansion based on a neural network model, the image to be detected and the reference image are obtained, a pre-trained neural network model yields the positions and categories of the change areas, and whether the rust area has expanded is judged. The rust expansion trend of an important area is thus monitored through a high-definition camera: the recognition accuracy is high, monitoring can run continuously without shift changes, human negligence is avoided, and the cost is low.
In this method, the change areas between the image to be detected and the reference image are detected with the trained model; scene knowledge can be updated in real time, and the model structure is continuously adjusted with scene data. Meanwhile, the target area is processed in blocks, and attention weight information is used to optimize the classification effect. In actual use, complex environmental factors often cause target recognition to fail; in such cases, the invention continuously updates scene knowledge from the actual scene and improves the detection accuracy of the model. The method also judges the category of the target area: the target area is segmented based on saliency, foreground areas such as rust spots are extracted, and each target image is classified to obtain the predicted category of each sample image. The saliency weight of an image block indicates the influence of that block on determining the category of the sample image.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent changes made by the specification and drawings of the present invention, or direct or indirect application in the relevant art, are included in the scope of the present invention.

Claims (10)

1. The method for monitoring the rust area expansion by combining with the change area identification is characterized by comprising the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
the step S2 includes the steps of:
s21, scene knowledge is collected in advance to carry out model training, and a scene model is obtained based on a change detection network ChangeNet;
the change detection network in the step S21 is specifically a change detection network based on a twin neural network and a full convolution neural network;
the scene model implementation steps include:
s211, acquiring a residual network as a pre-training model by a migration learning method, wherein the components of residual blocks of the residual network comprise a convolution layer, batch regularized BN and an activation function ReLU;
s212, extracting features from the reference image and the image to be detected through a twin neural network, and outputting variable positioning information of different layers by combining convolution so as to capture rough information and detailed information in the image, wherein the twin neural network is based on the pre-training model;
s213, integrating the extracted features by using a full convolutional neural network FCN, adding and classifying by using the same normalized exponential function softmax to obtain a change region;
s22, registering and aligning the image to be detected and the reference image:
extracting SIFT feature points of the image to be detected and the reference image;
finding matched characteristic point pairs by carrying out similarity measurement;
obtaining a space coordinate transformation parameter through the matched characteristic point pairs;
image registration is carried out by the coordinate transformation parameters, so that pixel point coordinates of two images can be in one-to-one correspondence, and registration alignment is realized;
s23, carrying out change detection on the image to be detected after registration alignment operation is completed by using a scene model, and outputting coordinate position information of a change area;
s3, extracting the image of the target image according to the change area to obtain a target area image;
s4, classifying the target area image based on a classification model, and judging whether the target area image is a rust area or not;
s5, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to the appointed user side.
2. The method for monitoring the enlargement of the area of rust in combination with the identification of change area according to claim 1, wherein step S23 further comprises the steps of:
s231, saving scene data with image change;
the scene data are the image to be detected and the reference image which have a change in comparison with each other;
step S2 further comprises the steps of:
s24, updating scene knowledge based on the saved scene data to obtain a scene model with higher precision.
3. A method of monitoring the enlargement of the area of rust in combination with the identification of areas of change as set forth in claim 2 wherein S24 includes the steps of:
s241, establishing a scene data accumulation module;
s242, the scene change detection module judges whether the stored scene data is changed, if the scene is changed, S243 and S244 are executed, otherwise, the next scene detection is waited, and S242 is executed;
s243, a task model tuning module uses the new scene data to tune the task model, so that the accuracy of the task model in a new scene is improved;
s244, when the scene data in the scene data accumulation module reaches a preset increment threshold, parameter tuning is performed by utilizing a change detection network, so that the model extracts scene knowledge in different scenes to obtain a scene model with higher precision and stronger generalization performance, and continuous scene adaptation is performed on continuously changed scenes.
4. The method for monitoring the expansion of the rust area in combination with the change area identification according to claim 3, wherein a super parameter table is preset, and the super parameter table limits the range of each parameter of the scene model;
the step S244 specifically includes:
when the scene change data in the scene data accumulation module reaches a certain increment, combining new data and original data for training, training for multiple times according to the super-parameter table, and selecting a model with highest verification set precision for updating.
5. The method for monitoring the area enlargement of rust spots combined with the change area identification according to claim 1, wherein the structure of the twin neural network is specifically as follows:
the twin neural network comprises a base network of a plurality of modules, each module comprises a convolution layer and a pooling layer, and image features under different scales are extracted;
after the convolution layer and the pooling layer, each module is provided with a full convolution layer with a convolution kernel size of 1x1, and the full convolution layer is used for adjusting the channel number so as to ensure that the characteristic channel numbers under different modules are consistent;
the full convolution layer is followed by a deconvolution layer such that the feature dimension is ultimately consistent with the input image size.
6. The monitoring system for rust area expansion combined with change area identification is characterized by comprising a monitoring terminal and at least one monitoring device, wherein the monitoring terminal comprises a processor, a memory and a computer program stored in the memory, and when the processor executes the computer program, the processor receives an image to be detected collected by the monitoring device and realizes the following steps:
s1, acquiring an image to be detected and a reference image, wherein the reference image is an image of an initial state of a shooting area of the image to be detected;
s2, analyzing the image to be detected and the reference image based on a neural network model, determining that the image to be detected with the change is a target image, and marking a change area on the target image;
the step S2 includes the steps of:
s21, scene knowledge is collected in advance to carry out model training, and a scene model is obtained based on a change detection network ChangeNet;
the change detection network in the step S21 is specifically a change detection network based on a twin neural network and a full convolution neural network;
the scene model implementation steps include:
s211, acquiring a residual network as a pre-training model by a migration learning method, wherein the components of residual blocks of the residual network comprise a convolution layer, batch regularized BN and an activation function ReLU;
s212, extracting features from the reference image and the image to be detected through a twin neural network, and outputting variable positioning information of different layers by combining convolution so as to capture rough information and detailed information in the image, wherein the twin neural network is based on the pre-training model;
s213, integrating the extracted features by using a full convolutional neural network FCN, adding and classifying by using the same normalized exponential function softmax to obtain a change region;
s22, registering and aligning the image to be detected and the reference image:
extracting SIFT feature points of the image to be detected and the reference image;
finding matched characteristic point pairs by carrying out similarity measurement;
obtaining a space coordinate transformation parameter through the matched characteristic point pairs;
image registration is carried out by the coordinate transformation parameters, so that pixel point coordinates of two images can be in one-to-one correspondence, and registration alignment is realized;
s23, carrying out change detection on the image to be detected after registration alignment operation is completed by using a scene model, and outputting coordinate position information of a change area;
s3, extracting the image of the target image according to the change area to obtain a target area image;
s4, classifying the target area image based on a classification model, and judging whether the target area image is a rust area or not;
s5, judging whether the area of the rust area exceeds a set threshold value, and if so, pushing alarm information of rust trend change to the appointed user side.
7. The monitoring system for rust area expansion combined with change area identification according to claim 6, wherein step S23 further comprises the steps of:
s231, saving scene data with image change;
the scene data are the image to be detected and the reference image which have a change in comparison with each other;
step S2 further comprises the steps of:
s24, updating scene knowledge based on the saved scene data to obtain a scene model with higher precision.
8. The monitoring system for rust area expansion combined with change area identification as claimed in claim 7, wherein S24 comprises the steps of:
s241, establishing a scene data accumulation module;
s242, the scene change detection module judges whether the stored scene data is changed, if the scene is changed, S243 and S244 are executed, otherwise, the next scene detection is waited, and S242 is executed;
s243, a task model tuning module uses the new scene data to tune the task model, so that the accuracy of the task model in a new scene is improved;
s244, when the scene data in the scene data accumulation module reaches a preset increment threshold, parameter tuning is performed by utilizing a change detection network, so that the model extracts scene knowledge in different scenes to obtain a scene model with higher precision and stronger generalization performance, and continuous scene adaptation is performed on continuously changed scenes.
9. The monitoring system for rust area expansion combined with change area identification of claim 8, wherein a super parameter table is preset, and the super parameter table limits the range of each parameter of the scene model;
the step S244 specifically includes:
when the scene change data in the scene data accumulation module reaches a certain increment, combining new data and original data for training, training for multiple times according to the super-parameter table, and selecting a model with highest verification set precision for updating.
10. The system for monitoring the area enlargement of rust associated with changing area identification of claim 6 wherein the structure of the twin neural network is specifically:
the twin neural network comprises a base network of a plurality of modules, each module comprises a convolution layer and a pooling layer, and image features under different scales are extracted;
after the convolution layer and the pooling layer, each module is provided with a full convolution layer with a convolution kernel size of 1x1, and the full convolution layer is used for adjusting the channel number so as to ensure that the characteristic channel numbers under different modules are consistent;
the full convolution layer is followed by a deconvolution layer such that the feature dimension is ultimately consistent with the input image size.
CN202311248212.7A 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining change area identification Pending CN117456441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311248212.7A CN117456441A (en) 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining change area identification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202311013671.7A CN116740652B (en) 2023-08-14 2023-08-14 Method and system for monitoring rust area expansion based on neural network model
CN202311248212.7A CN117456441A (en) 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining change area identification

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202311013671.7A Division CN116740652B (en) 2023-08-14 2023-08-14 Method and system for monitoring rust area expansion based on neural network model

Publications (1)

Publication Number Publication Date
CN117456441A true CN117456441A (en) 2024-01-26

Family

ID=87911762

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202311013671.7A Active CN116740652B (en) 2023-08-14 2023-08-14 Method and system for monitoring rust area expansion based on neural network model
CN202311248212.7A Pending CN117456441A (en) 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining change area identification
CN202311247849.4A Pending CN117315578A (en) 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining classification network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202311013671.7A Active CN116740652B (en) 2023-08-14 2023-08-14 Method and system for monitoring rust area expansion based on neural network model

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311247849.4A Pending CN117315578A (en) 2023-08-14 2023-08-14 Monitoring method and system for rust area expansion by combining classification network

Country Status (1)

Country Link
CN (3) CN116740652B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579790B (en) * 2024-01-16 2024-03-22 金钱猫科技股份有限公司 Construction site monitoring method and terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833220B (en) * 2017-11-28 2021-06-11 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN112766045B (en) * 2020-12-28 2023-11-24 平安科技(深圳)有限公司 Scene change detection method, system, electronic device and storage medium
CN116363495A (en) * 2021-12-22 2023-06-30 顺丰科技有限公司 Scene change detection method, device, computer equipment and storage medium
CN114549414A (en) * 2022-01-19 2022-05-27 深圳市比一比网络科技有限公司 Abnormal change detection method and system for track data
CN115100175B (en) * 2022-07-12 2024-07-09 南京云创大数据科技股份有限公司 Rail transit detection method based on small sample target detection

Also Published As

Publication number Publication date
CN116740652B (en) 2023-12-15
CN117315578A (en) 2023-12-29
CN116740652A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN110210463B (en) Precise ROI-fast R-CNN-based radar target image detection method
CN107506703B (en) Pedestrian re-identification method based on unsupervised local metric learning and reordering
CN106683119B (en) Moving vehicle detection method based on aerial video image
Jiao et al. A configurable method for multi-style license plate recognition
CN107633226B (en) Human body motion tracking feature processing method
CN106373146B (en) A kind of method for tracking target based on fuzzy learning
CN111709416A (en) License plate positioning method, device and system and storage medium
CN112257601A (en) Fine-grained vehicle identification method based on data enhancement network of weak supervised learning
Zhang et al. Road recognition from remote sensing imagery using incremental learning
CN108537286B (en) Complex target accurate identification method based on key area detection
CN111860439A (en) Unmanned aerial vehicle inspection image defect detection method, system and equipment
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN111311540A (en) Vehicle damage assessment method and device, computer equipment and storage medium
CN113052170B (en) Small target license plate recognition method under unconstrained scene
CN111046789A (en) Pedestrian re-identification method
CN116740652B (en) Method and system for monitoring rust area expansion based on neural network model
CN103353941B (en) Natural marker registration method based on viewpoint classification
CN113111727A (en) Method for detecting rotating target in remote sensing scene based on feature alignment
CN108932518A (en) A kind of feature extraction of shoes watermark image and search method of view-based access control model bag of words
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
CN106682604B (en) Blurred image detection method based on deep learning
CN116704490B (en) License plate recognition method, license plate recognition device and computer equipment
CN112418262A (en) Vehicle re-identification method, client and system
CN116935073A (en) Visual image positioning method based on coarse and fine feature screening
CN107679528A (en) A kind of pedestrian detection method based on AdaBoost SVM Ensemble Learning Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination