CN113762084A - Building night scene light abnormity detection method based on RetinaXNet - Google Patents
Building night scene light abnormity detection method based on RetinaXNet Download PDFInfo
- Publication number
- CN113762084A CN113762084A CN202110909371.1A CN202110909371A CN113762084A CN 113762084 A CN113762084 A CN 113762084A CN 202110909371 A CN202110909371 A CN 202110909371A CN 113762084 A CN113762084 A CN 113762084A
- Authority
- CN
- China
- Prior art keywords
- image
- formula
- retinaxnet
- follows
- night scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 53
- 238000012545 processing Methods 0.000 claims abstract description 24
- 238000000034 method Methods 0.000 claims abstract description 19
- 230000002159 abnormal effect Effects 0.000 claims abstract description 10
- 238000004364 calculation method Methods 0.000 claims description 34
- 238000012549 training Methods 0.000 claims description 19
- 230000005856 abnormality Effects 0.000 claims description 10
- 230000004913 activation Effects 0.000 claims description 9
- 238000012360 testing method Methods 0.000 claims description 9
- 230000004927 fusion Effects 0.000 claims description 6
- 238000013507 mapping Methods 0.000 claims description 6
- 150000001875 compounds Chemical class 0.000 claims description 3
- 230000001960 triggered effect Effects 0.000 claims description 3
- 238000005457 optimization Methods 0.000 claims description 2
- 238000005728 strengthening Methods 0.000 claims description 2
- 210000003128 head Anatomy 0.000 claims 4
- 210000001525 retina Anatomy 0.000 claims 2
- 238000013480 data collection Methods 0.000 abstract 1
- 230000010354 integration Effects 0.000 abstract 1
- 230000006870 function Effects 0.000 description 11
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 238000007689 inspection Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 230000007547 defect Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000032683 aging Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000012217 deletion Methods 0.000 description 1
- 230000037430 deletion Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000017525 heat dissipation Effects 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000001771 impaired effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001737 promoting effect Effects 0.000 description 1
- 238000011179 visual inspection Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a building night scene light abnormity detection method based on RetinaXNet, wherein data collection adopts equalization processing, texture information of an image is reserved, and the complexity of the image is reduced. The input module of the RetinaXNet network reduces the video frame into 224 x 224 images, the main module adopts an improved residual error structure to extract the outline information of the images, the detection head module adopts an XNet network to strengthen the integration of the information to carry out classification and regression, and the output module restores the images to the original size again according to the reduction proportion. The RetinaXNet network provided by the invention can be used for detecting the position of a fault lamp in an image and classifying faults, realizes automatic detection of abnormity, improves the detection accuracy, reduces the condition of false detection, and provides a reliable method for detecting abnormal light of night scenes of buildings.
Description
Technical Field
The invention relates to the field of image processing and abnormity detection, in particular to a building night scene light abnormity detection method based on RetinaXNet.
Background
Along with the application of modern city science and technology and the high-speed development of economic strength, city lighting engineering plays a remarkable role in improving city environment, building livable cities, improving the overall functions of the cities, pulling internal requirements, promoting the development of city economy, improving the images of corresponding enterprises and the like. However, the arrangement of the night scene light of the building is exposed outdoors all the year round, and the night scene light of the building has frequent faults due to the problems of lamp aging, installation environment, heat dissipation and the like. The existing detection means mainly take manual inspection visual inspection as a main part, and the manual inspection has the defects of high cost, low real-time property, strong subjectivity and the like. With the development of artificial intelligence technology, the detection method based on deep learning can replace the traditional artificial-based method in some image-related fields, and the early-training network is adopted to detect the abnormal light of the night scene of the building, so that the detection accuracy is improved, the artificial subjectivity is reduced, and the detection automation is realized.
Disclosure of Invention
Aiming at the defects of high cost, low real-time performance, strong subjectivity and the like of the existing manual inspection, the building night scene light abnormity detection method based on RetinaXNet is provided, and the automation of night scene light abnormity detection is realized through a camera and a network model, and the detection accuracy is improved.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows: a building night scene light abnormity detection method based on RetinaXNet comprises the following steps:
1) constructing an initial night scene lamp image set C and sending the initial night scene lamp image set C to a GPU (graphic processing Unit) computing server for storage;
2) processing the image set C to obtain a data set, and dividing the data set into a training set E and a test set T;
3) constructing a RetinaXNet network model; the RetinaXNet network model comprises an input module, a trunk module and a detection head module;
4) optimizing the weight of the RetinaXNet network model by using a UFL function;
5) training a RetinaXNet network model;
6) and (3) night scene lamp abnormity detection, namely acquiring a frame to be detected through a camera, sending the frame to be detected into a RetinaXNet network, mapping an output result of the network back to an original image, and judging whether the night scene lamp is abnormal or not.
Further, the step 1) comprises the following steps:
1.1) acquiring video data V of night scene light by using a camera, wherein the camera is fixedly arranged at a place where night scene light detection is needed in advance;
1.2) extracting one frame of image from the video data V at intervals of delta t, and constructing an initial night scene light image set C which is recorded as C ═ I1,I2...In},IiThe number of the ith frame image is n, and the number of the night scene light images is n;
1.3) sending the initial night scene light image set C to a GPU computing server for storage.
Further, the step 2) includes the following steps:
2.1) for each frame image in the image set C, calculating the pixel occurrence probability p with the pixel value less than ji(j) The calculation formula is as follows:
pi(j) representing the probability of occurrence of gray scale greater than 0 and less than j in the ith frame image, ntNumber of pixels having a gray level smaller than j, nIThe total number of pixels of each frame image;
2.2) calculating a histogram result G (i) of each frame image in the set C, wherein the calculation formula is as follows:
g (i) is the gray level histogram processing result of the ith frame, wherein i is more than or equal to 0 and less than 256, pi(j) Representing the occurrence probability that the pixel is more than 0 and less than j in the ith frame image;
2.3) calculating a result h (v) of pixel equalization, equalizing the set of images C, where C ═ I1,I2...InAnd recording the processed image set as C', C ═ I1',I2'...In' }, the calculation formula is as follows:
wherein v is the pixel value of a single image I in the image set C, H (v) is the calculation result of v equalization, G (v) is the histogram processing result of the current v, G (v)minAs the minimum value of histogram processing, GmaxThe maximum value of histogram processing is shown, L is the gray level number, round represents the rounding of the pixel value result, and all pixels are calculated to obtain a single image I ', and the set is marked as C';
2.4) calculate the average pixel value a for the images in the set of images C', where C ═ { I ═ I1',I2'...In' }, the calculation formula is as follows:
where M is the length pixel value of the image, N is the width pixel value of the image, It'(r, C) are the coordinates of the image pixels in the image set C', and t is the number of the selected image;
2.5) carrying out missing and filling processing on the images in the image set C', wherein the missing and filling values are g (i, j), and obtaining a data set C ", and the calculation formula is as follows:
wherein g (I, j) is a missing or filled value, I ' (I, j) is a pixel value of the image I ' in the image set C ' with the coordinate of (I, j), and Th is a set threshold;
2.6) divide the data set C' into a training set E and a test set T in a ratio of m: n.
Further, the step 3) of constructing the RetinaXNet network model includes the following steps:
3.1) uniformly reducing the images in the training set E into images of r: l size by utilizing an input module, wherein r is the reduced length pixel valueL is the reduced width pixel value, and the transformed set is denoted as X ═ X1,x2...x3};
3.2) extracting the contour characteristics of each frame in the set X by using a trunk module through a residual error structure, wherein the residual error structure formula is as follows:
F(x)=f(x)+f(f(x))+f(f(f(x)))
wherein f (x) δ (W x) + c
In the formula, each frame image x is used as the input of a convolution layer, W is a parameter required to be learned by convolution, delta is an activation function, and F (x) is the output result of a residual error structure;
3.3) Structure F for setting input parameters for Classification fusion in the detection head ModuleCLSThe formula is as follows:
FCLS=δ[WCLS*F(x)]+c
wherein F (x) is the output result of the residual structure, WCLSAs training parameters, δ is the activation function, and c is a constant term;
3.4) Structure for setting nonlinear regression connection parameters in the detection head ModuleThe formula is as follows:
in the formula, FCLSAs input part for parameter fusion, W1,W2For training parameters, δ is an activation function, c is a constant term, and LCR connection is nonlinear connection and has the function of strengthening the connection between classification and regression;
in the formula, FregAre the original regression parameters of the network,are the enhanced regression parameters;
3.6) Structure for setting nonlinear classification connection parameters in the detection head ModuleThe formula is as follows:
in the formula, W3In order to train the parameters of the device,in order to have the regression parameters strengthened on,classifying the structure of the connection parameter for non-linearity;
in the formula, FclsIs the original classification parameter of the network,are enhanced classification parameters.
Further, the weight of the RetinaXNet network model is optimized by using the UFL function in step 4), and the optimization formula is as follows:
wherein y is a real label and takes the value of 0 or 1,is the dynamic adjustment factor parameter of the UFL, gamma is the rate of adjusting the sample weights, and alpha is the weight parameter.
Further, the step 5) of training the RetinaXNet network model includes the following steps:
5.1) calculating the accuracy rate accurve, wherein the calculation formula is as follows:
in the formula, accuracycacy is accuracy, TP represents that network output in the test set T is a positive sample, and the reference standard is also the number of positive samples; TN represents the network output as negative sample, and the reference standard is also the number of negative samples; FP represents that the network output is a positive sample, but the reference standard is the number of negative samples; FN represents the number of positive samples of which the network output is negative samples and the reference standard is positive samples;
5.2) calculating the recall rate recall, wherein the calculation formula is as follows:
in the formula, recall is recall rate. The TP represents the number of positive samples judged to be positive samples, namely the positive samples; FN represents the number of samples that are judged to be negative, but in fact positive;
5.3) calculating F1The value, the calculation formula is as follows:
in the formula, F1The method is a calculation result of the balance of accuracy and recall, the comprehensive accuracy and the recall rate;
5.4) judgment of F1If the value is less than t, turning to the step 5.1) for retraining, otherwise, turning to the step 6).
Further, the step 6) of determining whether the night scene light is abnormal includes the following steps:
6.1) mapping the network output result back to the original image, wherein the coordinate, length and height formula mapped to the original image is as follows:
in the formula (I), the compound is shown in the specification,x, y, W and H are the coordinates of the upper left corner of the detection frame and the length and height of the detection frame respectively, and xN、yN、WN、HNThe coordinates and the length and the height of the original image are mapped;
6.2) anomaly detection, wherein the detection formula is as follows:
in the formula, when warner is 0, no abnormality is found, and when warner is 1, an abnormality is found, an alarm is triggered.
Compared with the traditional method for detecting the abnormal light of the night scene of the building, the method has the advantages that the cost can be saved, the subjectivity of manual judgment is reduced, the detection efficiency is improved, and the good detection effect can be achieved under the conditions of illumination change, angle change, impaired definition and the like.
Drawings
Fig. 1 is a flowchart of a building night scene light abnormality detection method according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments, wherein the technical solutions and design principles of the present invention are described in detail below, but the present invention is not limited thereto. Any obvious improvement, replacement or modification by a person skilled in the art can be made without departing from the spirit of the invention.
The invention relates to a building night scene lamplight abnormity detection method based on RetinaXNet, and used equipment comprises an Internet of things camera, a GPU (graphic processing unit) calculation server and an alarm.
The building night scene light abnormity detection method based on RetinaXNet is shown in figure 1, and comprises the following steps:
1) an initial night scene light image set C is constructed and sent to a GPU computing server for storage, and the method comprises the following steps:
1.1) acquiring video data V of night scene light by using a camera, wherein the camera is fixedly arranged in advance at a place where night scene light detection is needed;
1.2) extracting one frame of image from the video data V at intervals of delta t, and constructing an initial night scene light image set C which is recorded as C ═ I1,I2...In},IiThe number of the ith frame image is n, and the number of the night scene light images is n;
1.3) sending the initial night scene light image set C to a GPU computing server for storage;
2) processing the image set C to obtain an image set C ', performing deletion and filling processing on images in the image set C' to obtain a data set C ', and dividing the C' into a training set E and a test set T; as a preferred embodiment of the invention, the method comprises the following steps:
2.1) for each frame image in the image set C, calculating the pixel occurrence probability p with the pixel value less than ji(j) The calculation formula is as follows:
pi(j) representing the probability of occurrence of gray scale greater than 0 and less than j in the ith frame image, ntNumber of pixels having a gray level smaller than j, nIThe total number of pixels of each frame image;
2.2) calculating a histogram result G (i) of each frame image in the set C, wherein the calculation formula is as follows:
g (i) is the gray level histogram processing result of the ith frame, wherein i is more than or equal to 0 and less than 256, pi(j) Indicating the occurrence probability of pixels larger than 0 and smaller than j in the ith frame image.
2.3) calculating a result h (v) of pixel equalization, equalizing the set of images C, where C ═ I1,I2...InAnd recording the processed image set as C', C ═ I1',I2'...In' }, the calculation formula is as follows:
wherein v is the pixel value of a single image I in the image set C, H (v) is the calculation result of v equalization, G (v) is the histogram processing result of the current v, G (v)minAs the minimum value of histogram processing, GmaxAnd (3) taking the maximum value of histogram processing, wherein L is the gray level number, round represents the rounding of the pixel value result, and a single image I 'is obtained after all pixels are calculated and is recorded as C' in a set.
2.4) calculate the average pixel value a for the images in the set of images C', where C ═ { I ═ I1',I2'...In' }, the formula is as follows:
where M is the length pixel value of the image, N is the width pixel value of the image, It'(r, C) are the coordinates of the image pixels in the image set C', and t is the number of the selected image;
2.5) carrying out missing and filling processing on the images in the image set C', wherein the missing and filling values are g (i, j), and obtaining a data set C ", and the formula is as follows:
wherein g (I, j) is a missing or filled value, I ' (I, j) is a pixel value of the image I ' in the image set C ' whose coordinate is (I, j), Th is a set threshold, and in the embodiment of the present invention, Th is 180;
2.6) the data set C "is divided into a training set E and a test set T in a ratio m: n, where m: n is 9: 1;
3) constructing a RetinaXNet network model; the RetinaXNet network model comprises an input module, a trunk module and a detection head module; as a preferred embodiment of the invention, the method comprises the following steps:
and 3.1) uniformly reducing the images in the training set E into images of r: l size by using an input module, wherein r is a reduced length pixel value, l is a reduced width pixel value, and a converted set is recorded as X ═ X1,x2...x3}; in a particular embodiment of the invention, r 224, l 224;
3.2) extracting the contour characteristics of each frame in the set X by using a trunk module through a residual error structure, wherein the residual error structure formula is as follows:
F(x)=f(x)+f(f(x))+f(f(f(x)))
wherein f (x) δ (W x) + c
In the formula, each frame image x is used as the input of the convolution layer, W is the parameter to be learned for convolution, δ is the activation function, and f (x) is the output result of the residual error structure. The method uses n repeated calculations, and in a specific embodiment of the invention, n is set to 50.
3.3) Structure F for setting input parameters for Classification fusion in the detection head ModuleCLSThe formula is as follows:
FCLS=δ[WCLS*F(x)]+c
wherein F (x) is the output result of the residual structure, WCLSTo train the parameters, δ is the activation function and c is a constant term.
3.4) Structure for setting nonlinear regression connection parameters in the detection head ModuleThe formula is as follows:
in the formula, FCLSAs input part for parameter fusion, W1,W2To train the parameters, δ is the activation function, c is a constant term, and the LCR linkage is a nonlinear linkage that acts to strengthen the link between classification and regression.
in the formula, FregAre the original regression parameters of the network,are the regression parameters that are strengthened.
3.6) Structure for setting nonlinear classification connection parameters in the detection head ModuleThe formula is as follows:
in the formula, W3In order to train the parameters of the device,in order to have the regression parameters strengthened on,the structure of the connection parameters is classified for non-linearity.
in the formula, FclsIs the original classification parameter of the network,are enhanced classification parameters.
4) Optimizing the weight of the RetinaXNet network model by using a UFL function; the formula is as follows:
wherein y is a real label and takes the value of 0 or 1,the dynamic adjustment factor parameter of UFL, γ is the rate of adjusting the sample weight, α is the weight parameter, in the embodiment of the present invention, γ is 2, and γ is 0.25;
5) the training network model, as a preferred embodiment of the present invention, comprises the following steps:
5.1) calculating the accuracy rate accurve, wherein the calculation formula is as follows:
in the formula, accuracycacy is accuracy, TP represents that network output in the test set T is a positive sample, and the reference standard is also the number of positive samples; TN represents the network output as negative sample, and the reference standard is also the number of negative samples; FP represents that the network output is a positive sample, but the reference standard is the number of negative samples; FN represents the number of positive samples for which the network output is negative, but the reference criterion is positive.
5.2) calculating the recall rate recall, wherein the calculation formula is as follows:
in the formula, recall is recall rate. The TP represents the number of positive samples judged to be positive samples, namely the positive samples; FN represents the number of samples that are judged to be negative, but in fact positive;
5.3) calculating F1The value, the calculation formula is as follows:
in the formula, F1The method is a calculation result of the balance of accuracycacy and recall, and the comprehensive accuracy and recall rate of the accuracycacy and recall.
5.4) judgment of F1If the value is less than t, turning to step 5.1) for retraining, otherwise turning to step 6), in a specific embodiment of the present invention, t is 0.6;
6) the method comprises the following steps of acquiring a frame to be detected through a camera, sending the frame to be detected to a network for detection, mapping an output result of the network back to an original image, and judging whether a fault lamp occurs, wherein the method comprises the following steps:
6.1) mapping the network output result back to the original image, wherein the coordinate, length and height formula mapped to the original image is as follows:
in the formula (I), the compound is shown in the specification,x, y, W and H are the coordinates of the upper left corner of the detection frame and the length and height of the detection frame respectively, and xN、yN、WN、HNTo the coordinates and length and height of the original.
6.2) anomaly detection, wherein the detection formula is as follows:
in the formula, when warner is 0, no abnormality is found, and when warner is 1, an abnormality is found, an alarm is triggered.
Claims (7)
1. A building night scene light abnormity detection method based on RetinaXNet is characterized by comprising the following steps:
1) constructing an initial night scene lamp image set C and sending the initial night scene lamp image set C to a GPU (graphic processing Unit) computing server for storage;
2) processing the image set C to obtain a data set, and dividing the data set into a training set E and a test set T; 3) constructing a RetinaXNet network model; the RetinaXNet network model comprises an input module, a trunk module and a detection head module;
4) optimizing the weight of the RetinaXNet network model by using a UFL function;
5) training a RetinaXNet network model;
6) and (3) night scene lamp abnormity detection, namely acquiring a frame to be detected through a camera, sending the frame to be detected into a RetinaXNet network, mapping an output result of the network back to an original image, and judging whether the night scene lamp is abnormal or not.
2. The method for detecting the light abnormality of the night scene of the building based on RetinaXNet as claimed in claim 1, wherein the step 1) comprises the steps of:
1.1) acquiring video data V of night scene light by using a camera, wherein the camera is fixedly arranged at a place where night scene light detection is needed in advance;
1.2) extracting one frame of image from the video data V at intervals of delta t, and constructing an initial night scene light image set C which is recorded as C ═ I1,I2...In},IiThe number of the ith frame image is n, and the number of the night scene light images is n;
1.3) sending the initial night scene light image set C to a GPU computing server for storage.
3. The method for detecting the light abnormality of the night scene of the building based on RetinaXNet as claimed in claim 1, wherein the step 2) comprises the steps of:
2.1) for each frame image in the image set C, calculating the pixel occurrence probability p with the pixel value less than ji(j) The calculation formula is as follows:
pi(j) representing the probability of occurrence of gray scale greater than 0 and less than j in the ith frame image, ntNumber of pixels having a gray level smaller than j, nIThe total number of pixels of each frame image;
2.2) calculating a histogram result G (i) of each frame image in the set C, wherein the calculation formula is as follows:
g (i) is the gray level histogram processing result of the ith frame, wherein i is more than or equal to 0 and less than 256, pi(j) Representing the occurrence probability that the pixel is more than 0 and less than j in the ith frame image;
2.3) calculating a result h (v) of pixel equalization, equalizing the set of images C, where C ═ I1,I2...InAnd recording the processed image set as C', C ═ I1',I2'...In' }, the calculation formula is as follows:
wherein v is the pixel value of a single image I in the image set C, H (v) is the calculation result of v equalization, G (v) is the histogram processing result of the current v, G (v)minAs the minimum value of histogram processing, GmaxThe maximum value of histogram processing is shown, L is the gray level number, round represents the rounding of the pixel value result, and all pixels are calculated to obtain a single image I ', and the set is marked as C';
2.4) calculate the average pixel value a for the images in the set of images C', where C ═ { I ═ I1',I2'...In' }, the calculation formula is as follows:
where M is the length pixel value of the image, N is the width pixel value of the image, It'(r, C) are the coordinates of the image pixels in the image set C', and t is the number of the selected image;
2.5) carrying out missing and filling processing on the images in the image set C', wherein the missing and filling values are g (i, j), and obtaining a data set C ", and the calculation formula is as follows:
wherein g (I, j) is a missing or filled value, I ' (I, j) is a pixel value of the image I ' in the image set C ' with the coordinate of (I, j), and Th is a set threshold;
2.6) divide the data set C' into a training set E and a test set T in a ratio of m: n.
4. The method for detecting the abnormal light of the night scene of the building based on RetinaXNet as claimed in claim 1, wherein the step 3) of constructing the RetinaXNet network model comprises the following steps:
and 3.1) uniformly reducing the images in the training set E into images of r: l size by using an input module, wherein r is a reduced length pixel value, l is a reduced width pixel value, and a converted set is recorded as X ═ X1,x2...x3};
3.2) extracting the contour characteristics of each frame in the set X by using a trunk module through a residual error structure, wherein the residual error structure formula is as follows:
F(x)=f(x)+f(f(x))+f(f(f(x)))
wherein f (x) δ (W x) + c
In the formula, each frame image x is used as the input of a convolution layer, W is a parameter required to be learned by convolution, delta is an activation function, and F (x) is the output result of a residual error structure;
3.3) Structure F for setting input parameters for Classification fusion in the detection head ModuleCLSThe formula is as follows:
FCLS=δ[WCLS*F(x)]+c
wherein F (x) is the output result of the residual structure, WCLSAs training parameters, δ is the activation function, and c is a constant term;
3.4) Structure for setting nonlinear regression connection parameters in the detection head ModuleThe formula is as follows:
in the formula, FCLSAs input part for parameter fusion, W1,W2For training parameters, δ is an activation function, c is a constant term, and LCR connection is nonlinear connection and has the function of strengthening the connection between classification and regression;
in the formula, FregAre the original regression parameters of the network,are the enhanced regression parameters;
3.6) Structure for setting nonlinear classification connection parameters in the detection head ModuleThe formula is as follows:
in the formula, W3In order to train the parameters of the device,in order to have the regression parameters strengthened on,classifying the structure of the connection parameter for non-linearity;
5. The method for detecting abnormal light in night scene of building based on Retina XNet as claimed in claim 1, wherein in the step 4), the UFL function is used to optimize the weight of the Retina XNet network model, and the optimization formula is as follows:
6. The method for detecting abnormal light in night scenes of buildings based on RetinaXNet as claimed in claim 1, wherein the step 5) of training the RetinaXNet network model comprises the steps of:
5.1) calculating the accuracy rate accurve, wherein the calculation formula is as follows:
in the formula, accuracycacy is accuracy, TP represents that network output in the test set T is a positive sample, and the reference standard is also the number of positive samples; TN represents the network output as negative sample, and the reference standard is also the number of negative samples; FP represents that the network output is a positive sample, but the reference standard is the number of negative samples; FN represents the number of positive samples of which the network output is negative samples and the reference standard is positive samples;
5.2) calculating the recall rate recall, wherein the calculation formula is as follows:
in the formula, recall is recall ratio TP and represents the number of positive samples judged to be positive samples, namely positive samples; FN represents the number of samples that are judged to be negative, but in fact positive;
5.3) calculating F1The value, the calculation formula is as follows:
in the formula, F1The method is a calculation result of the balance of accuracy and recall, the comprehensive accuracy and the recall rate;
5.4) judgment of F1If the value is less than t, turning to the step 5.1) for retraining, otherwise, turning to the step 6).
7. The method for detecting light abnormality of night scene of building based on RetinaXNet as claimed in claim 1, wherein said step 6) of determining whether the night scene light is abnormal includes the steps of:
6.1) mapping the network output result back to the original image, wherein the coordinate, length and height formula mapped to the original image is as follows:
in the formula (I), the compound is shown in the specification,x, y, W and H are the coordinates of the upper left corner of the detection frame and the length and height of the detection frame respectively, and xN、yN、WN、HNThe coordinates and the length and the height of the original image are mapped;
6.2) anomaly detection, wherein the detection formula is as follows:
in the formula, when warner is 0, no abnormality is found, and when warner is 1, an abnormality is found, an alarm is triggered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110909371.1A CN113762084A (en) | 2021-08-09 | 2021-08-09 | Building night scene light abnormity detection method based on RetinaXNet |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110909371.1A CN113762084A (en) | 2021-08-09 | 2021-08-09 | Building night scene light abnormity detection method based on RetinaXNet |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113762084A true CN113762084A (en) | 2021-12-07 |
Family
ID=78788759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110909371.1A Pending CN113762084A (en) | 2021-08-09 | 2021-08-09 | Building night scene light abnormity detection method based on RetinaXNet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113762084A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294456A (en) * | 2022-08-23 | 2022-11-04 | 山东巍然智能科技有限公司 | Building lightening project detection method, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709511A (en) * | 2016-12-08 | 2017-05-24 | 华中师范大学 | Urban rail transit panoramic monitoring video fault detection method based on depth learning |
WO2019169895A1 (en) * | 2018-03-09 | 2019-09-12 | 华南理工大学 | Fast side-face interference resistant face detection method |
CN112200019A (en) * | 2020-09-22 | 2021-01-08 | 江苏大学 | Rapid building night scene lighting light fault detection method |
-
2021
- 2021-08-09 CN CN202110909371.1A patent/CN113762084A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709511A (en) * | 2016-12-08 | 2017-05-24 | 华中师范大学 | Urban rail transit panoramic monitoring video fault detection method based on depth learning |
WO2019169895A1 (en) * | 2018-03-09 | 2019-09-12 | 华南理工大学 | Fast side-face interference resistant face detection method |
CN112200019A (en) * | 2020-09-22 | 2021-01-08 | 江苏大学 | Rapid building night scene lighting light fault detection method |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294456A (en) * | 2022-08-23 | 2022-11-04 | 山东巍然智能科技有限公司 | Building lightening project detection method, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059694B (en) | Intelligent identification method for character data in complex scene of power industry | |
CN111339882B (en) | Power transmission line hidden danger detection method based on example segmentation | |
CN111583198A (en) | Insulator picture defect detection method combining FasterR-CNN + ResNet101+ FPN | |
CN108615226B (en) | Image defogging method based on generation type countermeasure network | |
CN109934805B (en) | Water pollution detection method based on low-illumination image and neural network | |
CN109712127B (en) | Power transmission line fault detection method for machine inspection video stream | |
CN111784633A (en) | Insulator defect automatic detection algorithm for power inspection video | |
CN114743119B (en) | High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle | |
CN112365468A (en) | AA-gate-Unet-based offshore wind power tower coating defect detection method | |
CN106096603A (en) | A kind of dynamic flame detection method merging multiple features and device | |
CN111709888A (en) | Aerial image defogging method based on improved generation countermeasure network | |
CN115063648A (en) | Insulator defect detection model construction method and system | |
CN111540203B (en) | Method for adjusting green light passing time based on fast-RCNN | |
CN112365467A (en) | Foggy image visibility estimation method based on single image depth estimation | |
CN117451716A (en) | Industrial product surface defect detection method | |
CN115631407A (en) | Underwater transparent biological detection based on event camera and color frame image fusion | |
CN113762084A (en) | Building night scene light abnormity detection method based on RetinaXNet | |
CN115393747A (en) | Photovoltaic fault detection method based on deep learning | |
CN111507249A (en) | Transformer substation nest identification method based on target detection | |
CN113781388A (en) | Image enhancement-based power transmission line channel hidden danger image identification method and device | |
CN118038021A (en) | Transformer substation operation site foreign matter intrusion detection method based on improvement yolov4 | |
CN116994161A (en) | Insulator defect detection method based on improved YOLOv5 | |
CN107992799A (en) | Towards the preprocess method of Smoke Detection application | |
CN116612438A (en) | Steam boiler combustion state real-time monitoring system based on thermal imaging | |
CN112232307B (en) | Method for detecting wearing of safety helmet in night vision environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20240418 Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000 Applicant after: Shenzhen Wanzhida Technology Transfer Center Co.,Ltd. Country or region after: China Address before: 212013 No. 301, Xuefu Road, Zhenjiang, Jiangsu Applicant before: JIANGSU University Country or region before: China |