CN111640117A - Method for searching position of leakage source of building - Google Patents

Method for searching position of leakage source of building

Info

Publication number
CN111640117A
Authority
CN
China
Prior art keywords
building
leakage source
leakage
discriminator
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010497448.4A
Other languages
Chinese (zh)
Other versions
CN111640117B (en)
Inventor
冯先勇
邹倩颖
聂绍贵
李岩
刘俸宇
韩思宇
孙治秋
吴霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bazhong Zhengda Waterproof And Heat Insulation Engineering Co ltd
Sichuan Zhengda New Material Technology Co ltd
Original Assignee
Bazhong Zhengda Waterproof And Heat Insulation Engineering Co ltd
Sichuan Zhengda New Material Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bazhong Zhengda Waterproof And Heat Insulation Engineering Co ltd and Sichuan Zhengda New Material Technology Co ltd
Priority to CN202010497448.4A
Publication of CN111640117A
Application granted
Publication of CN111640117B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/08 Construction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method for searching the position of a building leakage source. In this method, on one hand, the thermal imaging picture is processed: the thermal image is segmented with the FCN-DARG segmentation algorithm to find the lowest temperature point and thereby a possible leakage source. On the other hand, a possible leakage source is searched in the building original image with a dual-discriminator generative adversarial network. Finally, the two results are judged comprehensively to find the real leakage source. Compared with judging the leakage source position from the thermal image alone, the method for searching the position of the building leakage source can greatly improve the accuracy of leakage position searching.

Description

Method for searching position of leakage source of building
Technical Field
The invention relates to the technical field of building leakage detection, in particular to a method for searching a position of a building leakage source.
Background
With the rapid development of the country, high-rise buildings are becoming more and more common, and quality assurance is just as important as construction efficiency. If the waterproof design of a building is unreasonable or the waterproof material is used improperly, water leakage easily occurs after long-term corrosion and weathering. Using infrared thermal imaging to detect building leaks and locate the leakage source is a widely used method. In the theoretical research and case analysis of nondestructive leak detection technology by Zhang Baogang et al., it is noted that infrared thermographs with different characteristics can be obtained from the temperature and radiance of an object surface, and the position of an internal defect can be clearly seen in the infrared thermograph. Wu Hanbin et al. mention another approach: the thermograph is processed with digital image processing methods such as multi-threshold segmentation, morphological filtering and connected-component searching, the water seepage area is obtained, and the seepage type and seepage position are judged. The application of infrared thermal image detection technology in civil engineering by Cheng Ge also mentions that infrared thermal imaging has been applied to panoramic analysis of large buildings, moisture leakage detection, roof leakage location and pipeline leak detection. However, in the above methods the leakage source is determined simply from the thermograph itself or from a simple segmentation of the thermograph. Such methods are susceptible to problems such as complicated environments, which reduce the accuracy of leakage source finding.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for searching the position of a building leakage source.
The method for finding the position of the leakage source of the building can comprise the following steps:
segmenting a building thermal imaging graph by an FCN-DARG segmentation algorithm, and finding a lowest temperature point so as to obtain a first possible leakage source position;
acquiring a second possible leakage source position from the building original image of the building through a dual-discriminator generative adversarial network;
and comprehensively judging the first possible leakage source and the second possible leakage source through Gaussian distribution to obtain the accurate leakage source position.
According to a preferred embodiment of the present invention, the segmenting the building thermal imaging map by the FCN-DARG segmentation algorithm to find the lowest temperature point, thereby obtaining the first possible leakage source location, includes the following steps:
s110: acquiring the characteristics of an original infrared image of a building by using the FCN, and performing semantic prediction classification from a pixel level by convolution calculation from different stages of a convolution network to form an FCN coarse segmentation result;
s120: taking the minimum rectangular frame of the target area obtained from the FCN rough segmentation result to obtain the position of the target area;
s140: performing secondary segmentation on the original infrared image by using the minimum rectangular frame and using a self-adaptive region growing algorithm to form a secondary segmentation result;
s150: and fusing the FCN rough segmentation result and the secondary segmentation result to obtain a lowest temperature point, so as to obtain a first possible leakage source position.
According to a preferred embodiment of the present invention, the adaptive region growing algorithm comprises the steps of:
S141: according to the FCN rough segmentation result, taking the minimum enclosing rectangular frame of the target segmentation result, positioning according to the rectangular frame, and extracting the image at the position of the rectangular frame on the original infrared image;
S142: taking the centroid of the rectangular frame as the initial seed point of the region growing, defining the 8-neighbourhood pixel points of the seed point as the initial growing region S_0, calculating the pixel mean value m_0 and the dynamic difference D_0 of S_0, and then setting the gray value of all pixels in S_0 to m_0;
S143: iterating step S142, each time calculating D_n and m_n according to formula (1) to determine the threshold range Ω_n of newly growable pixels:
Ω_n = [m_{n-1} − θD_{n-1}, m_{n-1} + θD_{n-1}]  (1)
In formula (1), θ is an adjustment factor;
D_n is the dynamic difference, defined in formula (2):
D_n = sqrt((1/n)·Σ_{i=1}^{n}(x_i − m_n)²)  (2)
wherein x_1, x_2, …, x_n are the pixel gray values newly added at each iteration, and m_n is the average gray value of all pixel points in the grown region after the n-th iteration;
S144: growth stops when the region S_n after the n-th growth can no longer expand or a predetermined threshold is reached.
According to a preferred embodiment of the present invention, the FCN rough segmentation result and the secondary segmentation result are fused to obtain a lowest temperature point, so as to obtain the first possible leakage source location, as follows:
let the FCN segmentation result area be S_FCN and the segmentation result area obtained by dynamic adaptive region growing be S_DARG; in order to unify the gray values and facilitate image superposition, the pixels of both results are set to 1, the two segmentation results are superposed, and the value of the fused image I(x, y) is finally determined according to formula (3):
I(x, y) = 1, if (x, y) ∈ S_FCN ∪ S_DARG; I(x, y) = 0, otherwise.  (3)
according to a preferred embodiment of the present invention, the obtaining of the second possible leakage source location from the building original image of the building through the dual-discriminator generative adversarial network includes the following steps:
step S210: transmitting the building original image into a generator, and generating a picture of a leakage source by the generator;
step S220: the picture of the leakage source is transmitted into a first discriminator, and the first discriminator judges the picture of the leakage source to obtain a first judgment result;
step S230: the picture of the leakage source is transmitted into a second discriminator, and the second discriminator judges the picture of the leakage source to obtain a second judgment result; wherein the first discriminator and the second discriminator parameters are not shared;
step S240: and integrally judging the first judgment result and the second judgment result to obtain the second possible leakage source position.
According to a preferred embodiment of the present invention, a generator training step is further included before the step S210;
a first discriminator training step is further included before the step S220;
a second discriminator training step is also included before the step S230.
According to a preferred embodiment of the present invention, the first discriminator training step includes:
S251: collecting n leakage building original images and the corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S252: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S253: obtaining n generated data {x̃_1, x̃_2, …, x̃_n} from the generator, wherein x̃_i = G(c_i, z_i);
S254: obtaining n random leakage source pictures {x̂_1, x̂_2, …, x̂_n} from a database;
S255: substituting the data collected in steps S251-S254 into formula (14) and formula (15), and adjusting the discriminator parameter θ_d to maximize the objective;
Ṽ = (1/n)·Σ_{i=1}^{n}[log D_1(c_i, x_i) + log(1 − D_1(c_i, x̃_i)) + log(1 − D_1(c_i, x̂_i))]  (14)
θ_d ← θ_d + η·∇Ṽ(θ_d)  (15)
in formula (14) and formula (15), D_1(c_i, x_i) indicates that the leakage building original image is paired with its corresponding leakage source; D_1(c_i, x̃_i) indicates that the leakage building original image is paired with a leakage source generated by the generator; and D_1(c_i, x̂_i) indicates that the leakage building original image is paired with a non-corresponding random leakage source picture.
According to a preferred embodiment of the present invention, the second discriminator training step includes:
S261: collecting n leakage building original images and corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S262: taking one sample (c_m, x_m) from the sample set collected in step S261;
S263: transmitting c_m into the CNN for calculation to obtain a result O_m;
S264: determining the difference between the output value O_m and the true target value x_m;
S265: adjusting the weights by using the BP algorithm, according to formula (16), formula (17), formula (18) and formula (19):
δ_i = v_i(1 − v_i)(x_{m,i} − v_i)  (16)
δ_k = α_k(1 − α_k)·Σ_i w_ki·δ_i  (17)
w_ki ← w_ki + μ·δ_k·x_ki  (18)
w_ji ← w_ji + μ·δ_j·x_ji  (19)
wherein δ_i is the error of each node in the neural network, v_i is the output of the output layer, α_i is the hidden layer output, w_ki is the connection weight from the input layer to the hidden layer, μ is the learning rate constant, w_ji is the weight from node i to node j, and x_ji is the value passed from node i to node j.
According to a preferred embodiment of the present invention, the generator training step comprises:
S271: collecting n leakage building original images and corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S272: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S273: obtaining n generated data {x̃_1, x̃_2, …, x̃_n} from the generator, wherein x̃_i = G(c_i, z_i);
S274: obtaining n random leakage source pictures {x̂_1, x̂_2, …, x̂_n} from a database;
S275: substituting the sample data collected in steps S271-S274 into formula (20) and formula (21), and adjusting the generator parameter θ_g to maximize the objective;
Ṽ = (1/n)·Σ_{i=1}^{n}[log D_1(c_i, x̃_i) + log D_2(c_i, x̃_i)]  (20)
θ_g ← θ_g + η·∇Ṽ(θ_g)  (21)
wherein D_1(c_i, x̃_i) denotes the score obtained by passing the data generated by the generator into the first discriminator; D_2(c_i, x̃_i) denotes the score obtained by passing the data generated by the generator into the second discriminator; θ_g denotes the generator parameters; and η·∇Ṽ(θ_g) in formula (21) denotes the gradient multiplied by the learning rate.
According to a preferred embodiment of the present invention, the comprehensive determination of the first possible leakage source and the second possible leakage source by the gaussian distribution includes:
arranging the leakage source frame positions and scores obtained by the FCN-DARG segmentation algorithm and the dual-discriminator generative adversarial network into a normal distribution, and taking the leakage source position picture information with a score above a specified value as the output result;
the loss calculation is performed on the output result according to equation (23):
L_reg(t_i, t′_i) = R(t_i − t′_i)  (23)
wherein t_i and t′_i represent the predicted value and the ground truth of the different boxes;
L_reg represents the regression loss value of t_i and t′_i;
R represents the smooth L1 function.
Compared with the prior art, the method for searching the position of the leakage source of the building, provided by the embodiment of the invention, has the following beneficial effects:
the method for searching the position of the building leakage source not only judges through the thermograph, but also searches the position of the building leakage source by introducing the generative countermeasure network. The leakage source is respectively searched through the double discriminators, then the two results are integrated to obtain a more accurate leakage source, and if the results obtained by the two discriminators are greatly different, the result of thermal image segmentation is introduced to carry out comprehensive judgment. Compared with the method of judging the leakage position by only using the thermal image, the method can greatly improve the accuracy of leakage position searching.
Additional features of the invention will be set forth in part in the description which follows. Additional features of some aspects of the invention will become apparent to those of ordinary skill in the art upon examination of the following description and accompanying drawings or may be learned by the manufacture or operation of the embodiments. The features of the present disclosure may be realized and attained by practice or use of various methods, instrumentalities and combinations of the specific embodiments described below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention without limiting the invention. Like reference symbols in the various drawings indicate like elements. Wherein,
FIG. 1 is a basic flow diagram illustrating a method of finding a location of a building leak source according to some embodiments of the invention;
FIG. 2 is a basic flow diagram of an FCN-DARG segmentation algorithm in a method of finding a location of a building leakage source according to some embodiments of the present invention;
FIG. 3 is a schematic diagram of the process of FCN convolution and deconvolution upsampling in the method of finding a location of a building leakage source according to some embodiments of the present invention;
FIG. 4 is a schematic diagram of a prior art generative countermeasure network;
FIG. 5 is a schematic diagram of a dual-arbiter generated countermeasure network in a method of finding a location of a building leak source according to some embodiments of the present invention;
fig. 6 is a schematic diagram of the basic structure of CNN in the method for finding the location of the leakage source of the building according to some embodiments of the present invention;
FIG. 7 is a schematic diagram of a three-layer perceptron structure of a BP algorithm in a method of finding a location of a building leakage source according to some embodiments of the present invention;
fig. 8 is a flow chart of a process for multiple possible leak source locations using a gaussian distribution in a method of finding a location of a building leak source according to some embodiments of the invention;
fig. 9 is a schematic diagram of a method for finding the location of the leak source frames of a building with the normal distribution of all leak source frame locations to the score according to some embodiments of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that if the terms "first", "second", etc. are used in the description and claims of the present invention and in the accompanying drawings, they are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances in order to facilitate the description of the embodiments of the invention herein. Furthermore, if the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a method for searching the position of a leakage source of a building. In this method, on one hand, the thermal imaging picture is processed: the thermal image is segmented with the FCN-DARG segmentation algorithm to find the lowest temperature point, so that a possible leakage source is found. On the other hand, a possible leakage source in the building original image is searched with a dual-discriminator generative adversarial network. Finally, the two results are judged comprehensively so as to find the real leakage source.
Compared with the method for judging the leakage source position by only using the thermal image, the method for searching the leakage source position of the building can greatly improve the accuracy of searching the leakage position.
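As an orientation aid, the overall two-branch flow can be sketched in a few lines of Python. The Candidate structure and the fcn_darg_segment / dual_d_gan_detect callables below are hypothetical placeholders standing in for the two branches described in this disclosure; the sketch is an illustrative outline under those assumptions, not the patented implementation.

```python
# Illustrative outline of the two-branch pipeline; all names are assumed placeholders.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]          # (x, y, w, h) of a candidate leakage-source frame

@dataclass
class Candidate:
    box: Box
    score: float                         # confidence assigned by the producing branch

def find_leak_source(thermal_img: np.ndarray,
                     visible_img: np.ndarray,
                     fcn_darg_segment: Callable[[np.ndarray], Candidate],
                     dual_d_gan_detect: Callable[[np.ndarray], List[Candidate]],
                     min_score: float = 0.5) -> List[Candidate]:
    """Branch 1: coldest-point candidate from the segmented thermal image.
    Branch 2: candidates from the dual-discriminator GAN on the visible-light image.
    The two candidate sets are then judged together and low-scoring frames are discarded."""
    candidates = [fcn_darg_segment(thermal_img)] + dual_d_gan_detect(visible_img)
    return [c for c in candidates if c.score >= min_score]
```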
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in fig. 1, the method for finding the location of the leakage source of the building includes:
segmenting a building thermal imaging graph by an FCN-DARG segmentation algorithm, and finding a lowest temperature point so as to obtain a first possible leakage source position;
acquiring a second possible leakage source position from the building original image of the building through a dual-discriminator generative adversarial network;
and comprehensively judging the first possible leakage source and the second possible leakage source through Gaussian distribution to obtain the accurate leakage source position.
From the laws of thermodynamics it is known that no substance can reach absolute zero through a finite number of steps, and every substance above absolute zero continuously emits radiant energy (thermal radiation) outward in the form of electromagnetic waves. This radiant energy typically has a wavelength between 0.76 μm and 1000 μm, longer than red visible light and shorter than microwaves, and is usually referred to as infrared light or infrared radiation. For the same substance, the radiance and the infrared radiation characteristics differ at different temperatures, and the temperature is closely related to the infrared radiation characteristics; therefore the temperature of an object can be measured by measuring, with a thermal imaging instrument, the infrared radiation the object emits, and the result is then displayed as a visible image on the instrument screen. This is the basic principle of infrared thermal imaging.
Infrared thermal imaging technology is widely applied in many fields. For example, as early as 1975 a Canadian forest research center used a helicopter carrying an AGA750 portable thermal imager and discovered hidden fires 15 times during forest fire seasons, greatly reducing the probability of forest fires. Infrared thermal imaging is likewise used for fault diagnosis and other purposes.
Detecting building leakage with infrared thermal imaging is another application of this principle: the surface temperature of the building is judged from the magnitude of the radiant energy of the building surface, and whether leakage exists is judged accordingly. Uneven settlement of the house causing wall cracking, erosion by rain, snow and mold, expansion and contraction of the wall due to weather, flaws during construction and other factors may all cause leakage. Once leakage occurs on a building surface, the surface temperature of the water-soaked portion is often lower than that of the dry portion, so scanning with a thermal imaging camera allows the leakage region and leakage area of the building to be judged preliminarily.
In this embodiment, the segmenting the building thermal imaging map by the FCN-DARG segmentation algorithm to find the lowest temperature point, so as to obtain the first possible leakage source location, includes the following steps:
s110: acquiring the characteristics of an original infrared image of a building by using the FCN, and performing semantic prediction classification from a pixel level by convolution calculation from different stages of a convolution network to form an FCN coarse segmentation result;
s120: taking the minimum rectangular frame of the target area obtained from the FCN rough segmentation result to obtain the position of the target area;
s140: performing secondary segmentation on the original infrared image by using the minimum rectangular frame and using a self-adaptive region growing algorithm to form a secondary segmentation result;
s150: and fusing the FCN rough segmentation result and the secondary segmentation result to obtain a lowest temperature point, so as to obtain a first possible leakage source position.
Specifically, to address the situation that infrared image segmentation is not ideal under a complex background, an infrared image segmentation algorithm that fuses an FCN with dynamic adaptive region growing (FCN-DARG) is used. The basic flow of the FCN-DARG segmentation algorithm is shown in fig. 2.
The algorithm is divided into two modules, a rough segmentation module and a fine segmentation module. The rough segmentation module mainly uses the FCN to obtain the original image features and performs pixel-level semantic prediction and classification by convolution calculation at different stages of the convolutional network to form a rough segmentation result. The fine segmentation module then takes the minimum rectangular frame of the target area obtained by the FCN segmentation to obtain the position of the target area, performs secondary segmentation on the original image within the rectangular frame using an adaptive region growing algorithm, and finally fuses the two results to obtain the final result.
The FCN rough segmentation process comprises the following specific steps:
in order to obtain the basic outline of the target area and eliminate the influence of a possible complex background environment, FCN rough segmentation needs to be adopted for the infrared thermograph. The FCN rough segmentation adopts an algorithm of an FCN network structure, the FCN structure is divided into 8 layers in total, all layers are convolution layers, and the convolution layers (conv) and the pooling layers (pool) are alternately connected. After convolution, the image becomes smaller and smaller, for example, an image is reduced by 2, 4, 8, 16 and 32 times from the first layer to the fifth layer after 5 layers of convolution. In order to restore the original image resolution, the output feature map needs to be up-sampled, but after 5 times of convolution, the resolution of the image is reduced by 32 times, and in this case, the result of up-sampling by 32 times is FCN-32s, but the segmentation accuracy is rapidly reduced. In order to compensate for the loss of image precision, a multi-level fusion mode is adopted for up-sampling: firstly, performing 2 times of upsampling on a feature map output after the 7 th layer of convolution, and then fusing the upsampling with a feature map output by the 4 th layer of pooling layer to form FCN-16 s; and then performing 2 times of upsampling on the feature map which is just fused, fusing the feature map with the feature map output by the 3 rd layer pooling layer, and performing 8 times of upsampling to obtain FCN-8 s. FCN-8s split the best compared to FCN-32s and FCN-16 s. Fig. 3 shows the process of FCN convolution and deconvolution up-sampling.
The region growing algorithm was first proposed by Levine et al. Its idea is simple and easy to implement, and the segmentation result preserves target details to the greatest extent, so it is particularly suitable for infrared images, whose targets have prominent brightness and are mostly connected regions. However, the initial seed points of the traditional region growing algorithm need to be selected or designated manually, which cannot meet the requirement of automatic segmentation. On this basis, a dynamic adaptive region growing algorithm is proposed. The region growing method preserves image details but easily causes over-segmentation or under-segmentation. This has little influence on images with a simple background, but for a complex background image the target is difficult to describe accurately when over- or under-segmentation occurs, so the target cannot be identified. If region growing segmentation is applied to the whole image under a complex background, the position of the target is difficult to locate, and over-segmentation is easily caused by factors such as the growing order and the choice of seed pixels, so that the target is submerged. Yet if the seed point of the region where the target is located is determined through manual interaction, the significance of automatic segmentation is lost.
With the rough segmentation result obtained by the FCN (fully convolutional network), the minimum rectangular frame of the segmented target can be determined, and the position of the target region can then easily be found on the original image. This is significant for the final result of the region growing method: on one hand it avoids the situation in which the target cannot be identified because of global growth; on the other hand, once the target region is determined, the best position for the seed pixel is necessarily the centroid of that region. On this basis, a finer segmentation result can be obtained by performing region growing segmentation on the original image.
In this embodiment, the adaptive region growing algorithm includes the following steps:
S141: according to the FCN rough segmentation result, taking the minimum enclosing rectangular frame of the target segmentation result, positioning according to the rectangular frame, and extracting the image at the position of the rectangular frame on the original infrared image;
S142: taking the centroid of the rectangular frame as the initial seed point of the region growing, defining the 8-neighbourhood pixel points of the seed point as the initial growing region S_0, calculating the pixel mean value m_0 and the dynamic difference D_0 of S_0, and then setting the gray value of all pixels in S_0 to m_0;
S143: iterating step S142, each time calculating D_n and m_n according to formula (1) to determine the threshold range Ω_n of newly growable pixels:
Ω_n = [m_{n-1} − θD_{n-1}, m_{n-1} + θD_{n-1}]  (1)
In formula (1), θ is an adjustment factor; the larger the value of θ, the more fully the region grows, but over-segmentation easily occurs; conversely, the smaller the value of θ, the more easily under-segmentation occurs.
D_n is the dynamic difference, defined in formula (2):
D_n = sqrt((1/n)·Σ_{i=1}^{n}(x_i − m_n)²)  (2)
wherein x_1, x_2, …, x_n are the pixel gray values newly added at each iteration, and m_n is the average gray value of all pixel points in the grown region after the n-th iteration. After each iteration the gray values of the grown region change, D_n is adjusted dynamically, and the threshold range Ω_n of growable pixels therefore also changes dynamically. The algorithm thus adapts automatically and can effectively relieve under-segmentation or over-segmentation.
S144: growth stops when the region S_n after the n-th growth can no longer expand or a predetermined threshold is reached.
Since the region growing method can preserve the image details to the maximum extent (by adjusting the value of θ), the coarse segmentation result obtained by the FCN can be corrected by using the second segmentation result.
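A minimal numpy sketch of the dynamic adaptive region growing in steps S141 to S144 is given below. The 8-neighbourhood bookkeeping and the use of the standard deviation of the grown region as the dynamic difference D_n are assumptions made for illustration only.

```python
# Sketch of dynamic adaptive region growing from a seed point (assumed details).
import numpy as np

def adaptive_region_grow(img: np.ndarray, seed: tuple, theta: float = 2.0,
                         max_iter: int = 500) -> np.ndarray:
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for dy, dx in neigh:                                  # initial 8-neighbourhood region S0
        y, x = seed[0] + dy, seed[1] + dx
        if 0 <= y < h and 0 <= x < w:
            grown[y, x] = True
    values = img[grown].astype(float)
    m, D = values.mean(), values.std() + 1e-6             # m0 and dynamic difference D0
    for _ in range(max_iter):
        lo, hi = m - theta * D, m + theta * D             # threshold range Omega_n (formula (1))
        frontier = []
        ys, xs = np.nonzero(grown)
        for y, x in zip(ys, xs):
            for dy, dx in neigh:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx] and lo <= img[ny, nx] <= hi:
                    frontier.append((ny, nx))
        if not frontier:
            break                                         # S_n can no longer expand (S144)
        for y, x in frontier:
            grown[y, x] = True
        values = img[grown].astype(float)
        m, D = values.mean(), values.std() + 1e-6         # update m_n and the dynamic difference
    return grown
```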
In this embodiment, the FCN rough segmentation result and the secondary segmentation result are fused to obtain a lowest temperature point, so as to obtain a first possible leakage source location, which is specifically as follows:
let the FCN segmentation result area be S_FCN and the segmentation result area obtained by dynamic adaptive region growing be S_DARG; in order to unify the gray values and facilitate image superposition, the pixels of both results are set to 1, the two segmentation results are superposed, and the value of the fused image I(x, y) is finally determined according to formula (3):
I(x, y) = 1, if (x, y) ∈ S_FCN ∪ S_DARG; I(x, y) = 0, otherwise.  (3)
As can be seen from formula (3), the fused image mainly follows the contour of the region growing segmentation, because for an infrared image, once the rectangular target region is determined, the contrast between the target and the background within that region is already obvious, which is equivalent to converting a complex background image into a simple background image, and the region growing segmentation effect is then far better than that of the FCN algorithm. However, since region growing preserves details, the fused result image may contain holes and needs further processing. Here morphological processing is used to fill the holes: a single closing operation (CO) with a 3 × 3 structuring element is performed to obtain the final segmentation image.
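The fusion and hole filling described above can be sketched as follows; treating the fusion of formula (3) as a union of the two binary masks and the use of OpenCV for the morphology are assumptions made for illustration.

```python
# Sketch of mask fusion plus a single 3x3 morphological closing (assumed union fusion).
import cv2
import numpy as np

def fuse_and_close(mask_fcn: np.ndarray, mask_darg: np.ndarray) -> np.ndarray:
    fused = ((mask_fcn > 0) | (mask_darg > 0)).astype(np.uint8)   # set both results to 1 and superpose
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)        # one closing operation fills small holes
```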
The basic idea of the generative adversarial network (GAN) is derived from the two-player zero-sum game. A GAN usually contains only one generator and one discriminator, as shown in fig. 4.
In a GAN, the generator generates new data based on the source data and submits it to the discriminator, and the discriminator judges the input data to decide which data are real and which are generated. The GAN keeps training according to these steps until the data generated by the generator can completely fool the discriminator. In order to win the game, the generator and the discriminator are trained continuously to improve their respective generating and judging abilities, finally reaching a Nash equilibrium.
The GAN training formula is shown in formula (4):
min_G max_D V(D, G) = E_{x~P_data}[log D(x)] + E_{z~P_z}[log(1 − D(G(z)))]  (4)
wherein: e is the expectation value, PdataFor true data distribution case, PzIndicating the distribution of the generated data. In training, we need the less the loss value (V (D, G)) the better for the generator, and the more the loss value the better for the discriminator.
The principle of the generative adversarial network is shown in fig. 4: source data are input into the generator G, the generator produces data G(z), then G(z) and the real data x are passed together into the discriminator D to obtain the result D(G(z)); the discriminator judges whether the data are real, and the generator and the discriminator are adjusted according to the judgment result until the discriminator cannot tell whether the input data are real or generated, at which point the generator and the discriminator reach an equilibrium state.
The GAN discriminator takes the data generated by the generator as input and outputs a value between 0 and 1 representing the similarity between the generated data and real data: the closer the output is to 0, the more likely the input was generated by the generator; the closer it is to 1, the more likely the input is real data.
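For reference, a generic (vanilla) GAN training step corresponding to formula (4) can be written as below. The generator G, discriminator D and the optimizers are placeholders; this is a minimal sketch, not the specific architecture of this patent.

```python
# Generic GAN training step for formula (4); G and D are any torch.nn.Module instances.
import torch

def gan_step(G, D, opt_G, opt_D, real, z, eps: float = 1e-8):
    # Discriminator: maximize log D(x) + log(1 - D(G(z)))  -> minimize the negative
    fake = G(z).detach()
    d_loss = -(torch.log(D(real) + eps).mean() + torch.log(1 - D(fake) + eps).mean())
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    # Generator: minimize log(1 - D(G(z))), i.e. try to make D output values close to 1
    g_loss = torch.log(1 - D(G(z)) + eps).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```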
In this embodiment, a second discriminator D_2 is added to the conventional GAN structure to form a dual-discriminator generative adversarial network; the two discriminators in the network are trained separately and do not share parameters.
The method for acquiring the second possible leakage source position from the building original image of the building through the dual-discriminator generative adversarial network comprises the following steps:
step S210: transmitting the building original image into a generator, and generating a picture of a leakage source by the generator;
step S220: the picture of the leakage source is transmitted into a first discriminator, and the first discriminator judges the picture of the leakage source to obtain a first judgment result;
step S230: the picture of the leakage source is transmitted into a second discriminator, and the second discriminator judges the picture of the leakage source to obtain a second judgment result; wherein the first discriminator and the second discriminator parameters are not shared;
step S240: and integrally judging the first judgment result and the second judgment result to obtain the second possible leakage source position.
Further, a generator training step is included before step S210; a first discriminator training step is further included before step S220; and a second discriminator training step is further included before step S230.
Specifically, the overall process applied here by the dual-discriminator generative adversarial network is: the building original image is passed into the generator, the generator generates a picture of the leakage source, the picture is then passed into the two discriminators respectively, the two discriminators each judge the leakage source to obtain two results, and the two results are then judged jointly to obtain the real leakage source.
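The inference flow just described can be sketched as follows. The averaging of the two discriminator scores and the module interfaces are assumptions made for illustration.

```python
# Sketch of the dual-discriminator inference flow: generate, score with D1 and D2, combine.
import torch

@torch.no_grad()
def propose_leak_source(G, D1, D2, building_img: torch.Tensor):
    proposal = G(building_img)            # generated leakage-source picture
    s1 = D1(building_img, proposal)       # first judgment result
    s2 = D2(building_img, proposal)       # second judgment result
    combined = 0.5 * (s1 + s2)            # joint judgment of the two results (assumed rule)
    return proposal, combined
```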
In order to make the generated leakage source picture more realistic, the generator and the discriminators need to be trained separately. The basic idea of optimizing a GAN is to optimize D and G iteratively: D is fixed while G is optimized and G is fixed while D is optimized, and the whole process continues until it converges.
For the training with multiple discriminators, the first discriminator D_1 and the second discriminator D_2 are two independent discriminators and are not shared. During training, G, D_1 and D_2 follow formula (5):
V(G, D_1, D_2) = α·E_{x~P_data}[log D_1(x)] + E_{z~P_z}[−D_1(G(z))] + E_{x~P_data}[−D_2(x)] + β·E_{z~P_z}[log D_2(G(z))]  (5)
In formula (5), the parameters α and β (0 < α, β ≤ 1) stabilize the learning process and control the influence of the KL divergence and the reverse KL divergence on the optimization. To make training more stable, the values of α and β can be tuned.
Given a fixed generator G, maximizing V(G, D_1, D_2) yields the optimal discriminators D_1* and D_2* of formula (6) and formula (7). The proof writes the expectations in formula (5) as integrals over the sample space, takes the derivative of the integrand with respect to D_1 and D_2 on each interval of x, and sets the derivatives to 0, which gives the closed forms of D_1* and D_2*.
Substituting D_1* and D_2* into formula (5) and training G then gives the optimal generator G*. For the minimax optimization problem of this generative adversarial network, each component of the Nash equilibrium point (G*, D_1*, D_2*) has the form of formula (9) and formula (10).
Substituting the equilibrium discriminators back into formula (5) yields formula (13), in which D_KL(P_data‖P_G) and D_KL(P_G‖P_data) are the KL divergence and the reverse KL divergence of the target; both terms are non-negative and become 0 only when the distribution P_G generated by the generator is completely equal to the data distribution P_data, in which case neither discriminator can judge whether a sample is real or generated.
Formula (13) also shows that increasing the value of the parameter α strengthens the optimization of the KL divergence, while increasing the value of β strengthens the optimization of the reverse KL divergence; the influence of the KL divergence and the reverse KL divergence can therefore be balanced by adjusting α and β, which improves the robustness of the algorithm.
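A compact loss sketch in the spirit of the weighted objective above is shown below. Since the exact terms of formula (5) are given as images in the original document, this D2GAN-style form (with D1 and D2 producing positive scores) is an assumption made purely for illustration.

```python
# D2GAN-style losses with weights alpha and beta (assumed form; D1_*/D2_* are positive scores).
import torch

def dual_discriminator_losses(D1_real, D1_fake, D2_real, D2_fake,
                              alpha: float = 0.5, beta: float = 0.5, eps: float = 1e-8):
    # The discriminators maximize the weighted objective, so their loss is its negative.
    d_loss = -(alpha * torch.log(D1_real + eps).mean() - D1_fake.mean()
               - D2_real.mean() + beta * torch.log(D2_fake + eps).mean())
    # The generator minimizes the terms that involve its own samples.
    g_loss = -D1_fake.mean() + beta * torch.log(D2_fake + eps).mean()
    return d_loss, g_loss
```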
In the present embodiment, the first discriminator D_1 in the dual-discriminator generative adversarial network is a classical discriminator.
The training step of the first discriminator D_1 comprises:
S251: collecting n leakage building original images and the corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S252: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S253: obtaining n generated data {x̃_1, x̃_2, …, x̃_n} from the generator, wherein x̃_i = G(c_i, z_i);
S254: obtaining n random leakage source pictures {x̂_1, x̂_2, …, x̂_n} from a database;
S255: substituting the data collected in steps S251-S254 into formula (14) and formula (15), and adjusting the discriminator parameter θ_d to maximize the objective;
Ṽ = (1/n)·Σ_{i=1}^{n}[log D_1(c_i, x_i) + log(1 − D_1(c_i, x̃_i)) + log(1 − D_1(c_i, x̂_i))]  (14)
θ_d ← θ_d + η·∇Ṽ(θ_d)  (15)
In formula (14) and formula (15), D_1(c_i, x_i) indicates that the leakage building original image is paired with its corresponding leakage source and should therefore obtain a higher score; D_1(c_i, x̃_i) indicates that the leakage building original image is paired with a leakage source generated by the generator and should therefore obtain a lower score; and D_1(c_i, x̂_i) indicates that the leakage building original image is paired with a non-corresponding random leakage source picture and should therefore obtain a lower score.
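One training step of the first discriminator in the spirit of S251 to S255 might look as follows. The binary cross-entropy formulation, the (image, leakage source) pairing interface and the assumption that D1 outputs probabilities are all illustrative choices, not the patent's exact training code.

```python
# Sketch of a matching-aware discriminator step: real pairs high, generated/mismatched pairs low.
import torch
import torch.nn.functional as F

def d1_step(D1, G, opt_D1, c, x_real, z, x_random):
    with torch.no_grad():
        x_fake = G(c, z)                                   # generator output for the same buildings
    s_real = D1(c, x_real)                                 # corresponding leakage source -> high score
    s_fake = D1(c, x_fake)                                 # generated leakage source -> low score
    s_rand = D1(c, x_random)                               # non-corresponding random source -> low score
    loss = (F.binary_cross_entropy(s_real, torch.ones_like(s_real)) +
            F.binary_cross_entropy(s_fake, torch.zeros_like(s_fake)) +
            F.binary_cross_entropy(s_rand, torch.zeros_like(s_rand)))
    opt_D1.zero_grad(); loss.backward(); opt_D1.step()
    return loss.item()
```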
In the present embodiment, the second discriminator is based on a CNN (convolutional neural network) algorithm and a BP algorithm.
A CNN is a feedforward neural network that contains convolution computations and has a deep structure; it is one of the representative deep learning algorithms and is commonly used for feature extraction.
The basic steps of a CNN are: input an image; perform convolution calculation through a convolution layer; perform feature extraction through a sampling (pooling) layer; then convolve again to extract features, and repeat this cycle. After multiple cycles, the feature data are finally classified by a fully connected layer, as shown in fig. 6.
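As an illustration of the convolution, pooling and fully connected pattern described above, a tiny CNN could be written as follows; the depth, channel counts and use of PyTorch are illustrative assumptions rather than the network used in this embodiment.

```python
# A tiny CNN: repeated convolution + pooling, then a fully connected classifier.
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # convolution + sampling layer
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # repeat to extract features
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))  # fully connected layer

    def forward(self, x):
        return self.classifier(self.features(x))
```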
As shown in fig. 7, the basic process of the BP algorithm is to continuously train through input data, and adjust and correct weights connected in an input layer, a hidden layer, and an output layer in the training process, so as to finally reach a minimum error value.
In this embodiment, the training of the second discriminator includes:
S261: collecting n leakage building original images and corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S262: taking one sample (c_m, x_m) from the sample set collected in step S261;
S263: transmitting c_m into the CNN for calculation to obtain a result O_m;
S264: determining the difference between the output value O_m and the true target value x_m;
S265: adjusting the weights by using the BP algorithm, according to formula (16), formula (17), formula (18) and formula (19):
δ_i = v_i(1 − v_i)(x_{m,i} − v_i)  (16)
δ_k = α_k(1 − α_k)·Σ_i w_ki·δ_i  (17)
w_ki ← w_ki + μ·δ_k·x_ki  (18)
w_ji ← w_ji + μ·δ_j·x_ji  (19)
wherein δ_i is the error of each node in the neural network, v_i is the output of the output layer, α_i is the hidden layer output, w_ki is the connection weight from the input layer to the hidden layer, μ is the learning rate constant, w_ji is the weight from node i to node j, and x_ji is the value passed from node i to node j.
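A numpy sketch of one backpropagation update for a three-layer perceptron, in the spirit of formulas (16) to (19), is given below; the sigmoid activation and the matrix shapes are assumptions made for illustration.

```python
# One BP update for a three-layer perceptron with sigmoid units (illustrative shapes).
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bp_step(x, target, W_kh, W_ho, mu: float = 0.1):
    """x: input vector, target: desired output, W_kh: input->hidden weights, W_ho: hidden->output weights."""
    a = sigmoid(W_kh @ x)                          # hidden layer output alpha
    v = sigmoid(W_ho @ a)                          # output layer output v
    delta_o = v * (1 - v) * (target - v)           # output-layer error, cf. formula (16)
    delta_h = a * (1 - a) * (W_ho.T @ delta_o)     # hidden-layer error, cf. formula (17)
    W_ho = W_ho + mu * np.outer(delta_o, a)        # hidden->output weight update, cf. formula (19)
    W_kh = W_kh + mu * np.outer(delta_h, x)        # input->hidden weight update, cf. formula (18)
    return W_kh, W_ho, v
```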
In this embodiment, the generator training step includes:
S271: collecting n leakage building original images and corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the leakage building original image and x is the corresponding leakage source;
S272: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S273: obtaining n generated data {x̃_1, x̃_2, …, x̃_n} from the generator, wherein x̃_i = G(c_i, z_i);
S274: obtaining n random leakage source pictures {x̂_1, x̂_2, …, x̂_n} from a database;
S275: substituting the sample data collected in steps S271-S274 into formula (20) and formula (21), and adjusting the generator parameter θ_g to maximize the objective;
Ṽ = (1/n)·Σ_{i=1}^{n}[log D_1(c_i, x̃_i) + log D_2(c_i, x̃_i)]  (20)
θ_g ← θ_g + η·∇Ṽ(θ_g)  (21)
wherein D_1(c_i, x̃_i) denotes the score obtained by passing the data generated by the generator into the first discriminator D_1; D_2(c_i, x̃_i) denotes the score obtained by passing the data generated by the generator into the second discriminator D_2; θ_g denotes the generator parameters; and η·∇Ṽ(θ_g) in formula (21) denotes the gradient multiplied by the learning rate.
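One generator update in the spirit of S271 to S275 can be sketched as follows: the generated leakage source is scored by both discriminators and the generator parameters are moved to raise both scores. The log-score objective and the interfaces are assumed concrete forms of formulas (20) and (21), not the patent's exact code.

```python
# Sketch of a generator step scored by both discriminators.
import torch

def g_step(G, D1, D2, opt_G, c, z, eps: float = 1e-8):
    x_fake = G(c, z)
    s1 = D1(c, x_fake)                       # score from the first discriminator
    s2 = D2(c, x_fake)                       # score from the second discriminator
    loss = -(torch.log(s1 + eps).mean() + torch.log(s2 + eps).mean())  # want both scores high
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()
```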
The normal distribution is one of the most important probability distributions. The concept of the normal distribution was first proposed by the mathematician and astronomer de Moivre in 1733; because of the later work of the German mathematician Gauss, it is also called the Gaussian distribution.
In this embodiment, the comprehensively determining the first possible leakage source and the second possible leakage source through gaussian distribution includes:
arranging the leakage source frame positions and scores obtained by the FCN-DARG segmentation algorithm and the dual-discriminator generative adversarial network into a normal distribution, and taking the leakage source position picture information with a score above a specified value as the output result;
and performing loss calculation on the output result.
Specifically, the processing of the leakage source results obtained from the two discriminators and of the thermographic segmentation result obtained by FCN-DARG is shown in fig. 8.
The normal distribution is given by formula (22), where μ is the mean, σ is the standard deviation, and f(x) is the normal distribution density function:
f(x) = (1/(σ·sqrt(2π)))·exp(−(x − μ)²/(2σ²))  (22)
Through image segmentation and two discriminators, we obtain a plurality of leakage source pictures, and the positions of all leakage source frames and the scores form a normal distribution relation, as shown in fig. 9.
According to the result of the normal distribution of the leakage source frame positions and scores, the picture information of the leakage source positions with a score above a specified value is taken as the output result.
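A possible concrete form of this screening step is sketched below; fitting μ and σ to the candidate scores and using a quantile cutoff as the "specified score" is an assumption made for illustration.

```python
# Sketch of normal-distribution screening of candidate leakage-source frames.
import numpy as np

def select_leak_frames(boxes, scores, keep_quantile: float = 0.8):
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std() + 1e-8
    density = np.exp(-(scores - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))  # formula (22)
    cutoff = np.quantile(scores, keep_quantile)                    # assumed "specified score"
    kept = [b for b, s in zip(boxes, scores) if s >= cutoff]
    return kept, density
```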
Finally, the calculated result is subjected to loss calculation according to equation (23).
L_reg(t_i, t′_i) = R(t_i − t′_i)  (23)
wherein t_i and t′_i represent the predicted value and the ground truth of the different boxes;
L_reg represents the regression loss value of t_i and t′_i;
R represents the smooth L1 function, where σ = 3:
smooth_L1(x) = 0.5·(σx)², if |x| < 1/σ²; |x| − 0.5/σ², otherwise.
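For reference, the smooth L1 function R with σ = 3 can be written in a few lines; a standard formulation is assumed here.

```python
# Smooth L1 regression loss used in formula (23), with the sigma parameter stated above.
import numpy as np

def smooth_l1(t, t_prime, sigma: float = 3.0):
    d = np.asarray(t, dtype=float) - np.asarray(t_prime, dtype=float)
    s2 = sigma ** 2
    return np.where(np.abs(d) < 1.0 / s2, 0.5 * s2 * d ** 2, np.abs(d) - 0.5 / s2).sum()
```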
parameters of the FCN-DARG and the dual-discriminator generation type countermeasure network are adjusted, so that the loss value is reduced, and the accuracy after comprehensive judgment is improved.
It should be noted that all of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except for mutually exclusive features and/or steps.
In addition, the above-described embodiments are exemplary, and those skilled in the art, having benefit of this disclosure, will appreciate numerous solutions that are within the scope of the disclosure and that fall within the scope of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and are not limiting upon the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A method for finding a location of a building leakage source, the method comprising:
segmenting a building thermal imaging graph by an FCN-DARG segmentation algorithm, and finding a lowest temperature point so as to obtain a first possible leakage source position;
acquiring a second possible leakage source position from the building original image of the building through a dual-discriminator generative adversarial network;
and comprehensively judging the first possible leakage source and the second possible leakage source through Gaussian distribution to obtain the accurate leakage source position.
2. The method for finding the position of the leakage source of the building according to claim 1, wherein the building thermal imaging map is segmented by the FCN-DARG segmentation algorithm to find the lowest temperature point, so as to obtain the first possible leakage source position, comprising the following steps:
s110: acquiring the characteristics of an original infrared image of a building by using the FCN, and performing semantic prediction classification from a pixel level by convolution calculation from different stages of a convolution network to form an FCN coarse segmentation result;
s120: taking the minimum rectangular frame of the target area obtained from the FCN rough segmentation result to obtain the position of the target area;
s140: performing secondary segmentation on the original infrared image by using the minimum rectangular frame and using a self-adaptive region growing algorithm to form a secondary segmentation result;
s150: and fusing the FCN rough segmentation result and the secondary segmentation result to obtain a lowest temperature point, so as to obtain a first possible leakage source position.
3. The method for finding the location of a building leakage source according to claim 2, wherein the adaptive region growing algorithm comprises the steps of:
S141: according to the FCN rough segmentation result, taking the minimum enclosing rectangular frame of the target segmentation result, positioning according to the rectangular frame, and extracting the image at the position of the rectangular frame on the original infrared image;
S142: taking the centroid of the rectangular frame as the initial seed point of the region growing, defining the 8-neighbourhood pixel points of the seed point as the initial growing region S_0, calculating the pixel mean value m_0 and the dynamic difference D_0 of S_0, and then setting the gray value of all pixels in S_0 to m_0;
S143: iterating step S142, each time calculating D_n and m_n according to formula (1) to determine the threshold range Ω_n of newly growable pixels:
Ω_n = [m_{n-1} − θD_{n-1}, m_{n-1} + θD_{n-1}]  (1)
In formula (1), θ is an adjustment factor;
D_n is the dynamic difference, defined in formula (2):
D_n = sqrt((1/n)·Σ_{i=1}^{n}(x_i − m_n)²)  (2)
wherein x_1, x_2, …, x_n are the pixel gray values newly added at each iteration, and m_n is the average gray value of all pixel points in the grown region after the n-th iteration;
S144: growth stops when the region S_n after the n-th growth can no longer expand or a predetermined threshold is reached.
4. The method for finding the location of a building leakage source according to claim 2,
fusing the FCN coarse segmentation result and the secondary segmentation result to obtain a lowest temperature point, so as to obtain a first possible leakage source position, which is specifically as follows:
let the FCN segmentation result area be S_FCN and the segmentation result area obtained by dynamic adaptive region growing be S_DARG; in order to unify the gray values and facilitate image superposition, the pixels of both results are set to 1, the two segmentation results are superposed, and the value of the fused image I(x, y) is finally determined according to formula (3):
I(x, y) = 1, if (x, y) ∈ S_FCN ∪ S_DARG; I(x, y) = 0, otherwise.  (3)
5. The method for finding the position of the leakage source of the building according to claim 1, wherein the obtaining of the second possible position of the leakage source from the building original image of the building through the dual-discriminator generative adversarial network comprises the following steps:
step S210: transmitting the building original image into a generator, and generating a picture of a leakage source by the generator;
step S220: the picture of the leakage source is transmitted into a first discriminator, and the first discriminator judges the picture of the leakage source to obtain a first judgment result;
step S230: the picture of the leakage source is transmitted into a second discriminator, and the second discriminator judges the picture of the leakage source to obtain a second judgment result; wherein the first discriminator and the second discriminator parameters are not shared;
step S240: and integrally judging the first judgment result and the second judgment result to obtain the second possible leakage source position.
6. The method for finding the location of a building leakage source according to claim 5,
a generator training step is also included before the step S210;
a first discriminant training step is further included before the step S220;
a second discriminator training step is also included before the step S230.
7. The method for finding the location of a building leakage source according to claim 6, wherein the first discriminator training step comprises:
S251: collecting n leakage building original images and their corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the original image of the leakage building and x is the corresponding leakage source;
S252: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S253: obtaining n generated data from a generator
[The notation for the generated data is rendered as an image in the original.]
S254: obtaining n random leakage source pictures from a database [notation rendered as an image in the original];
S255: substituting the data collected in steps S251 to S254 into formula (14) and formula (15), and adjusting the parameter θ_d to maximize the objective;
[Formulas (14) and (15), giving the first discriminator's objective and the update rule for θ_d, are rendered as images in the original.]
In formula (14) and formula (15), the three score terms correspond, respectively, to the first discriminator applied to the leakage building original image paired with its corresponding leakage source, to the original image paired with the leakage source generated by the generator, and to the original image paired with a non-corresponding, random leakage source picture (an illustrative sketch of this training step follows this claim).
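Because formulas (14) and (15) are images in the original, the sketch below assumes the standard conditional-GAN discriminator objective whose three terms match the descriptions above: the matched real pair is pushed towards 1, while the generated pair and the random non-corresponding pair are pushed towards 0. The noise input of S252 is omitted, and an optimizer over the first discriminator's parameters stands in for the explicit gradient-ascent rule of formula (15).

import torch

def train_first_discriminator_step(G, D1, buildings, leaks, random_leaks, opt_d1):
    """One ascent step on an assumed form of formulas (14)/(15):
    V = mean log D1(c, x) + mean log(1 - D1(c, x_gen)) + mean log(1 - D1(c, x_rand))."""
    eps = 1e-7
    with torch.no_grad():
        generated = G(buildings)                                        # S253: generated data
    real_term = torch.log(D1(buildings, leaks) + eps).mean()            # matched (c, x) pairs
    fake_term = torch.log(1 - D1(buildings, generated) + eps).mean()    # (c, generated leak) pairs
    mismatch_term = torch.log(1 - D1(buildings, random_leaks) + eps).mean()  # (c, random leak) pairs, S254
    value = real_term + fake_term + mismatch_term
    opt_d1.zero_grad()
    (-value).backward()      # maximizing V by descending on -V (S255: adjust theta_d)
    opt_d1.step()
    return value.item()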
8. The method for finding the location of a building leakage source according to claim 6, wherein the second discriminator training step comprises:
S261: collecting n leakage building original images and their corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the original image of the leakage building and x is the corresponding leakage source;
S262: taking one sample (c_m, x_m) from the sample set collected in step S261;
S263: transmitting c_m into the CNN for calculation to obtain a result o_m;
S264: determining the difference between the output value o_m and the true target value x_m;
S265: adjusting the weights by using the BP algorithm, according to formula (16), formula (17), formula (18) and formula (19):
δ_i = v_i(1 - v_i)(x_i^m - v_i)    (16)
[Formula (17), giving the error term of the hidden-layer nodes, is not legible in the original.]
[Formula (18) is rendered as an image in the original.]
w_ji ← w_ji + μδ_j x_ji    (19)
wherein δ_i is the error of each node in the neural network, v_i is the output of the output layer, a_i is the hidden layer output, w_ki is the connection weight from the input layer to the hidden layer, μ is the learning rate constant, w_ji is the weight from node i to node j, and x_ji is the value passed from node i to node j (an illustrative sketch of this update follows this claim).
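A NumPy sketch of the backpropagation update of claim 8, written in the claim's notation for a single hidden layer (the CNN of the claim is simplified away). Formula (16) is implemented as stated; the hidden-layer error and weight-increment forms, whose formulas (17) and (18) are not legible in the original, are assumed to be the standard ones, and all sizes in the usage lines are hypothetical.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bp_update(x, target, w_ki, w_ji, mu=0.1):
    """One BP step: a = hidden-layer output, v = output-layer output, mu = learning rate."""
    a = sigmoid(w_ki @ x)                              # hidden layer output a_i
    v = sigmoid(w_ji @ a)                              # output layer output v_i
    delta_out = v * (1 - v) * (target - v)             # formula (16): output-node error
    delta_hidden = a * (1 - a) * (w_ji.T @ delta_out)  # assumed standard hidden-node error (formula (17))
    w_ji += mu * np.outer(delta_out, a)                # w_ji <- w_ji + mu*delta_j*x_ji (formulas (18)/(19))
    w_ki += mu * np.outer(delta_hidden, x)             # same rule applied to the input-to-hidden weights
    return v

# usage sketch with hypothetical sizes: 8 inputs, 4 hidden nodes, 2 outputs
rng = np.random.default_rng(0)
w_ki = 0.1 * rng.standard_normal((4, 8))
w_ji = 0.1 * rng.standard_normal((2, 4))
out = bp_update(rng.standard_normal(8), np.array([0.0, 1.0]), w_ki, w_ji)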
9. The method of finding the location of a building leakage source according to claim 6, wherein the generator training step comprises:
S271: collecting n leakage building original images and their corresponding leakage sources, and establishing a sample set {(c_1, x_1), (c_2, x_2), …, (c_n, x_n)}, wherein c is the original image of the leakage building and x is the corresponding leakage source;
S272: obtaining n noise samples {z_1, z_2, …, z_n} from a distribution;
S273: obtaining n generated data from a generator
[The notation for the generated data is rendered as an image in the original.]
S274: obtaining n random leakage source pictures from a database [notation rendered as an image in the original];
S275: substituting the sample data collected in steps S271 to S274 into formula (20) and formula (21), and adjusting the generator parameter θ_g to maximize the objective;
[Formulas (20) and (21), giving the generator's objective and the update rule for θ_g, are rendered as images in the original.]
wherein the first term denotes the score obtained by transmitting the data generated by the generator into the first discriminator, the second term denotes the score obtained by transmitting the generated data into the second discriminator, θ_g denotes the parameters of the generator, and the remaining factor denotes the gradient multiplied by the learning rate (an illustrative sketch of this training step follows this claim).
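Formulas (20) and (21) are likewise images in the original, so the sketch below assumes the generator is scored by both discriminators on its generated pairs and updated by gradient ascent on the summed log-scores; the learning-rate-times-gradient step of formula (21) is delegated to the optimizer, and the noise input is again omitted.

import torch

def train_generator_step(G, D1, D2, buildings, opt_g):
    """One ascent step on an assumed form of formulas (20)/(21):
    V_g = mean log D1(c, G(c)) + mean log D2(c, G(c))."""
    eps = 1e-7
    generated = G(buildings)                                    # S273: generated data
    score1 = torch.log(D1(buildings, generated) + eps).mean()   # score from the first discriminator
    score2 = torch.log(D2(buildings, generated) + eps).mean()   # score from the second discriminator
    value = score1 + score2
    opt_g.zero_grad()
    (-value).backward()      # theta_g <- theta_g + eta * grad V_g, realized as descent on -V_g
    opt_g.step()             # only the generator's optimizer is stepped; the discriminators stay fixed
    return value.item()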
10. The method for finding the location of a building leakage source according to claim 1,
comprehensively judging the first possible leakage source and the second possible leakage source through Gaussian distribution, wherein the comprehensive judgment comprises the following steps:
fitting the leakage source frame positions and scores obtained by the FCN-DARG segmentation algorithm and by the dual-discriminator generative adversarial network to a normal distribution, and taking the leakage source position picture information whose score exceeds a specified value as the output result;
performing loss calculation on the output result according to formula (23):
L_reg(t_i, t'_i) = R(t_i - t'_i)    (23)
wherein t_i and t'_i denote the predicted frame and the corresponding ground-truth frame, respectively; L_reg denotes the regression loss between t_i and t'_i; and R denotes the smooth L1 function (an illustrative sketch follows this claim).
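A NumPy sketch of one reading of claim 10: candidate frame scores from the FCN-DARG branch and the dual-discriminator branch are standardised against a fitted normal distribution, frames above a specified cut-off are kept, and formula (23) is evaluated with the smooth L1 function. The boxes, scores, and cut-off below are made-up illustrative values, not data from the patent.

import numpy as np

def smooth_l1(t_pred, t_true):
    """Formula (23): L_reg(t, t') = R(t - t'), with R the smooth L1 function."""
    d = np.abs(np.asarray(t_pred, dtype=float) - np.asarray(t_true, dtype=float))
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

def select_candidates(boxes, scores, z_cut=0.0):
    """Keep frames whose score, standardised against a fitted normal distribution, exceeds z_cut."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std() + 1e-9
    keep = (scores - mu) / sigma > z_cut
    return [box for box, k in zip(boxes, keep) if k]

# usage sketch with hypothetical (x, y, w, h) frames and scores from the two branches
boxes = [(10, 12, 40, 30), (50, 60, 35, 28), (11, 13, 39, 31)]
scores = [0.91, 0.42, 0.88]
kept = select_candidates(boxes, scores)
loss = smooth_l1(kept[0], (11, 12, 40, 30))   # regression loss against a hypothetical ground-truth frame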
CN202010497448.4A 2020-06-03 2020-06-03 Method for searching leakage source position of building Active CN111640117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010497448.4A CN111640117B (en) 2020-06-03 2020-06-03 Method for searching leakage source position of building

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010497448.4A CN111640117B (en) 2020-06-03 2020-06-03 Method for searching leakage source position of building

Publications (2)

Publication Number Publication Date
CN111640117A true CN111640117A (en) 2020-09-08
CN111640117B CN111640117B (en) 2024-03-05

Family

ID=72332113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010497448.4A Active CN111640117B (en) 2020-06-03 2020-06-03 Method for searching leakage source position of building

Country Status (1)

Country Link
CN (1) CN111640117B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117949143A (en) * 2024-03-26 2024-04-30 四川名人居门窗有限公司 Door and window leakage detection and feedback system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105536079A (en) * 2010-03-31 2016-05-04 凯希特许有限公司 System and method for locating fluid leaks at a drape using sensing techniques
CN102636313A (en) * 2012-04-11 2012-08-15 浙江工业大学 Leakage source detecting device based on infrared thermal imaging processing
CN103939750A (en) * 2014-05-05 2014-07-23 重庆大学 Detecting identifying and positioning method for fire-fighting water pipe network leakage
WO2018122810A1 (en) * 2016-12-30 2018-07-05 同济大学 Method for detecting leakage of underground pipe rack based on dynamic infrared thermogram processing
CN106885653A (en) * 2017-01-09 2017-06-23 珠海安维特工程检测有限公司 building roof system leakage detection method
CN109493317A (en) * 2018-09-25 2019-03-19 哈尔滨理工大学 The more vertebra dividing methods of 3D based on concatenated convolutional neural network
CN110516539A (en) * 2019-07-17 2019-11-29 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
SHUAI ZHAO et al.: "Deep learning-based image instance segmentation for moisture marks of shield tunnel lining", Tunnelling and Underground Space Technology, vol. 95, 31 January 2020 (2020-01-31), pages 1-11 *
REN Zhimiao (任志淼): "Infrared image target segmentation method based on a fully convolutional neural network and dynamic adaptive region growing", Semiconductor Optoelectronics (《半导体光电》), vol. 40, no. 4, 31 August 2019 (2019-08-31), pages 564-570 *
TIAN Xuyuan (田旭园): "Research on building energy-efficiency detection methods based on infrared image processing", China Master's Theses Full-text Database (Engineering Science and Technology II) (《中国优秀硕士学位论文全文数据库(工程科技Ⅱ辑)》), no. 3, 15 March 2014 (2014-03-15), pages 038-71 *

Also Published As

Publication number Publication date
CN111640117B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN108596101B (en) A multi-target detection method for remote sensing images based on convolutional neural network
CN107230197B (en) Tropical cyclone objective strength determination method based on satellite cloud image and RVM
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN109615611A (en) A kind of insulator self-destruction defect inspection method based on inspection image
CN110969088A (en) Remote sensing image change detection method based on significance detection and depth twin neural network
CN108416378A (en) A kind of large scene SAR target identification methods based on deep neural network
CN114612937B (en) Pedestrian detection method based on single-mode enhancement by combining infrared light and visible light
CN111914924A (en) Rapid ship target detection method, storage medium and computing device
CN113688830B (en) Deep learning target detection method based on center point regression
CN111273378A (en) Typhoon center positioning method based on wind stress disturbance
CN113610905A (en) Deep learning remote sensing image registration method based on subimage matching and application
CN111723747A (en) Lightweight high-efficiency target detection method applied to embedded platform
CN112734683B (en) Multi-scale SAR and infrared image fusion method based on target enhancement
CN111595247B (en) Crude oil film absolute thickness inversion method based on self-expansion convolution neural network
CN111667461B (en) Abnormal target detection method for power transmission line
CN117274627A (en) Multi-temporal snow remote sensing image matching method and system based on image conversion
CN108734122A (en) A kind of EO-1 hyperion city water body detection method based on adaptive samples selection
CN111640117B (en) Method for searching leakage source position of building
CN115908276A (en) Bridge apparent damage binocular vision intelligent detection method and system integrating deep learning
CN113327271B (en) Decision-level target tracking method and system based on double-optical twin network and storage medium
CN107346549B (en) Multi-class change dynamic threshold detection method utilizing multiple features of remote sensing image
CN114913337A (en) Camouflage target frame detection method based on ternary cascade perception
CN117152601A (en) Underwater target detection method and system based on dynamic perception area routing
CN113191259B (en) Dynamic data expansion method for hyperspectral image classification and image classification method
CN116452965A (en) Underwater target detection and recognition method based on acousto-optic fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 610000 room 1, floor 2, unit 1, building 12, Hongji Yaju, No. 146, north section of Hongqi Avenue, Deyuan town (Jingrong town), Pidu District, Chengdu, Sichuan

Applicant after: SICHUAN ZHENGDA NEW MATERIAL TECHNOLOGY CO.,LTD.

Applicant after: Sichuan Zhengda future construction technology Co.,Ltd.

Address before: 610000 room 1, floor 2, unit 1, building 12, Hongji Yaju, No. 146, north section of Hongqi Avenue, Deyuan town (Jingrong town), Pidu District, Chengdu, Sichuan

Applicant before: SICHUAN ZHENGDA NEW MATERIAL TECHNOLOGY CO.,LTD.

Country or region before: China

Applicant before: BAZHONG ZHENGDA WATERPROOF AND HEAT INSULATION ENGINEERING CO.,LTD.

GR01 Patent grant