CN115853173A - Building curtain wall for construction and installation - Google Patents


Publication number
CN115853173A
CN115853173A (application CN202211625374.3A)
Authority
CN
China
Prior art keywords: time, frequency, training, graph, channel
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211625374.3A
Other languages: Chinese (zh)
Inventors: 吴国尧, 杨云, 姜鹤初, 俞溢栋, 黄国弟
Current Assignee: Individual (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Individual
Application filed by: Individual
Priority application: CN202211625374.3A
Publication: CN115853173A


Abstract

The application relates to the technical field of building curtain walls, and specifically discloses a building curtain wall for construction and installation. A sound detector deployed in the glass curtain wall collects a sound detection signal as input data; an artificial-intelligence detection technique based on deep learning then processes the sound detection signal to extract its multi-scale implicit feature information under different transform domains. In this process, channel attention and spatial attention are further used to strengthen the channel-wise and spatial implicit features of the glass curtain wall, which improves the accuracy of defect judgment. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.

Description

Building curtain wall for construction and installation
Technical Field
The application relates to the technical field of building curtain walls, in particular to a building curtain wall for construction and installation.
Background
The glass curtain wall is light, attractive, resistant to staining, energy-saving and environmentally friendly, and is therefore widely used as the exterior enclosure of high-rise buildings. At the same time, the safety hazards of glass curtain walls in service have drawn increasing attention. Debonding and falling of the glass on the curtain wall and breakage of the glass itself (including chipped corners and cracks) are the common defect forms; debonding and falling of the curtain-wall glass are often caused by the structural adhesive, which, under the influence of the external environment and other factors, discolors, bubbles, cracks and debonds, greatly shortening its service life and degrading its bonding performance. A building curtain wall with a defect self-inspection function is therefore desirable.
At present, deep learning and neural networks have been widely applied in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks has provided new solutions for the defect detection of glass curtain walls.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiments of the application provide a building curtain wall for construction and installation. A sound detector deployed in the glass curtain wall collects a sound detection signal as input data; an artificial-intelligence detection technique based on deep learning then processes the sound detection signal to extract its multi-scale implicit feature information under different transform domains. In this process, channel attention and spatial attention are further used to strengthen the channel-wise and spatial implicit features of the glass curtain wall, improving the accuracy of defect judgment. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.
According to one aspect of the present application, there is provided an architectural curtain wall for construction installation, comprising:
the sound detection unit is used for acquiring sound detection signals collected by a sound detector arranged in the glass curtain wall;
the time-frequency conversion unit is used for calculating a time-domain enhancement graph, a SIFT transformation time-frequency graph and an S transformation time-frequency graph of the sound detection signal;
the time-frequency diagram channel aggregation unit is used for aggregating the time-domain enhancement diagram, the SIFT transform time-frequency diagram and the S transform time-frequency diagram according to channel dimensions to obtain a multi-channel time-frequency diagram;
the time-frequency graph feature extraction unit is used for enabling the multi-channel time-frequency graph to pass through a convolutional neural network model comprising a plurality of mixed convolutional layers and a parallel weight distribution module to obtain a time-frequency feature graph; and the defect self-inspection result generating unit is used for enabling the time-frequency characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the glass curtain wall has defects or not.
Compared with the prior art, in the building curtain wall for construction and installation provided by the application, a sound detector arranged in the glass curtain wall collects a sound detection signal as input data; an artificial-intelligence detection technique based on deep learning then processes the sound detection signal to extract its multi-scale implicit feature information under different transform domains. In this process, channel attention and spatial attention strengthen the channel-wise and spatial implicit features of the glass curtain wall, which improves the accuracy of defect judgment. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates an application scenario of a building curtain wall for construction installation according to an embodiment of the application.
FIG. 2 illustrates a block diagram schematic view of an architectural curtain wall for construction installation according to an embodiment of the present application.
Fig. 3 illustrates a block diagram of a time-frequency graph feature extraction unit in a building curtain wall for construction installation according to an embodiment of the present application.
FIG. 4 illustrates a block diagram of a multi-scale convolutional encoding subunit in a building curtain wall for construction installation according to an embodiment of the present application.
FIG. 5 illustrates a block diagram of a multi-dimensional feature aggregation sub-unit in a building curtain wall for construction installation according to an embodiment of the application.
FIG. 6 illustrates a block diagram of a channel attention secondary subunit in a building curtain wall for construction installation according to an embodiment of the application.
FIG. 7 illustrates a block diagram of a spatial attention secondary subunit in an architectural curtain wall for construction installation according to an embodiment of the present application.
FIG. 8 illustrates a block diagram of a training module in an architectural curtain wall for construction installation according to an embodiment of the application.
FIG. 9 illustrates a flow chart of a self-inspection method for a construction installed building curtain wall according to an embodiment of the present application.
Fig. 10 illustrates a schematic diagram of a system architecture for a self-inspection method of a construction installed building curtain wall according to an embodiment of the present application.
Fig. 11 illustrates a flowchart of a training phase of training the convolutional neural network model and the classifier in the self-inspection method of the building curtain wall for construction installation according to the embodiment of the application.
Fig. 12 is a schematic diagram illustrating a system architecture of a training stage for training the convolutional neural network model and the classifier in the self-inspection method for the building curtain wall for construction installation according to the embodiment of the application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of scenes
The glass curtain wall is light, attractive, resistant to staining, energy-saving and environmentally friendly, and is therefore widely used as the exterior enclosure of high-rise buildings. At the same time, the safety hazards of glass curtain walls in service have drawn increasing attention. Debonding and falling of the glass on the curtain wall and breakage of the glass itself (including chipped corners and cracks) are the common defect forms; debonding and falling of the curtain-wall glass are often caused by the structural adhesive, which, under the influence of the external environment and other factors, discolors, bubbles, cracks and debonds, greatly shortening its service life and degrading its bonding performance. A building curtain wall with a defect self-inspection function is therefore desirable.
At present, deep learning and neural networks have been widely applied in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks has provided new solutions for the defect detection of glass curtain walls.
Accordingly, considering that when the glass curtain wall has a defect, the sound signal it produces when knocked changes, whether the glass curtain wall is defective can be judged from the sound detection signal collected by a sound sensor arranged in the glass curtain wall. Specifically, in the technical scheme of the application, an artificial-intelligence detection technique based on deep learning is adopted: a sound detector arranged in the glass curtain wall collects the sound detection signal as input data, from which multi-scale implicit feature information under different transform domains is extracted. In this process, channel attention is used to reflect the correlation and importance among feature channels, and spatial attention is used to reflect the weight of feature differences in the spatial dimension, suppressing or strengthening features at different spatial positions, so that the accuracy of defect judgment of the glass curtain wall is improved. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.
Specifically, in the technical scheme of this application, the sound detection signal is first collected by the sound detector deployed in the glass curtain wall. However, performance limitations of the sound detector introduce noise, and the sound detection signal itself is weak; both factors make defect judgment of the glass curtain wall technically difficult. Therefore, in the solution of the present application, different types of domain analysis need to be performed on the sound detection signal at the input end.
It should be appreciated that, since the sound detection signal exhibits different patterns in different domains, performing different types of domain analysis on it essentially maps the signal into different domains, which enhances the data at the input end and increases its diversity. Specifically, a time-domain enhancement map, a SIFT transform time-frequency map and an S transform time-frequency map of the sound detection signal are calculated. In particular, the time-domain enhancement map counters the problem of the signal being weakened by external environmental noise and the instrument's own noise during acquisition. The SIFT transform described here is a scale-invariant feature transform that applies windowing in the time dimension and takes a Fourier transform of short segments of the signal, so that each spectrum corresponds to a specific time period of the signal; it is highly robust. Furthermore, since the SIFT feature is a local image feature, it is invariant to rotation, scaling and brightness changes, and remains fairly stable under viewpoint changes, affine transformation and noise. The S transform provides a wide window in the low-frequency band and a narrow window in the high-frequency band, so the characteristics of the sound detection signal in each frequency band are preserved to the greatest extent, improving the accuracy of subsequent classification.
Then, the time-domain enhancement map, the SIFT transform time-frequency map and the S transform time-frequency map may be aggregated along the channel dimension to obtain a multi-channel time-frequency map carrying several different types of domain-analysis information. Further, a convolutional neural network model, which excels at implicit feature extraction, performs feature mining on the multi-channel time-frequency map. In particular, in the technical scheme of the application, a convolutional neural network model comprising multiple hybrid convolutional layers and a parallel weight assignment module extracts features from the multi-channel time-frequency map, so that the multi-scale perception of the hybrid convolutional layers is combined with the feature-enhancement capability of the parallel weight assignment module to improve feature expression, yielding the time-frequency feature map. Specifically, the multi-channel time-frequency map is first encoded by the multiple hybrid convolutional layers, and the feature map obtained from the encoding is then input to the parallel weight assignment module to obtain the time-frequency feature map.
Specifically, the convolutional neural network model uses multiple hybrid convolutional layers to process the multi-channel time-frequency map and extract its multi-scale implicit features, thereby obtaining a depth time-frequency feature map. Correspondingly, in a specific example of the present application, the hybrid convolutional layer (MCL) is designed with four parallel branches: one ordinary convolutional layer with a 3 × 3 kernel and three hole (dilated) convolutional layers with 3 × 3 kernels, each operating on the multi-channel time-frequency map. The expansion rates of the three hole-convolution branches are set to 2, 3 and 4 respectively; different expansion rates capture image information of different receptive fields, so feature maps of different scales are obtained, the receptive field is enlarged, and the information loss of down-sampling is avoided. The four branch feature maps are then fused, making the module's sampling denser and its features higher-level without adding extra parameters.
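The four-branch structure described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patented implementation: the "same" padding and the summation fusion of the four branch feature maps are assumptions the patent does not spell out, and single-channel maps with fixed kernels stand in for learned multi-channel filters.

```python
import numpy as np

def conv2d(x, kernel, dilation=1):
    """'Same'-padded 2-D convolution of a single-channel map x
    with a 3x3 kernel, optionally dilated."""
    k = kernel.shape[0]                      # kernel size (3)
    pad = dilation * (k - 1) // 2            # padding that keeps H x W
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + h, dj:dj + w]
    return out

def hybrid_conv_layer(x, kernels):
    """Four parallel branches: one ordinary 3x3 conv (dilation 1) and
    three 3x3 dilated convs with expansion rates 2, 3 and 4; branch
    outputs are fused (summation fusion is an assumption)."""
    rates = [1, 2, 3, 4]
    branches = [conv2d(x, k, d) for k, d in zip(kernels, rates)]
    return sum(branches)

# toy single-channel stand-in for the multi-channel time-frequency map
x = np.random.default_rng(0).standard_normal((32, 32))
kernels = [np.full((3, 3), 1 / 9.0) for _ in range(4)]
y = hybrid_conv_layer(x, kernels)
print(y.shape)  # (32, 32): spatial size preserved across all branches
```

Because every branch keeps the spatial size, the four feature maps can be fused element-wise without resampling, which is what lets the module widen the receptive field without losing resolution.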
It should be understood that debonding and falling of the glass on a curtain wall and breakage of the glass itself (including chipped corners and cracks) are common defect forms of the glass curtain wall, and debonding of the curtain-wall glass is often caused by the structural adhesive discoloring, bubbling, cracking or debonding under the influence of the external environment and other factors. Therefore, defect detection of the glass curtain wall should focus on the implicit feature information in the spatial positions and channel dimensions while ignoring useless interference features irrelevant to the detection. Accordingly, in the technical scheme of the application, to address the low target-detection precision caused by edge blurring in the depth time-frequency feature map, a parallel weight assignment module performs feature enhancement on that feature map. Specifically, the depth time-frequency feature map is input to the parallel weight assignment module of the convolutional neural network model to obtain the time-frequency feature map; this strengthens effective feature representations, suppresses useless feature information, and improves the accuracy of subsequent classification.
In particular, the parallel weight assignment module contains a spatial-attention branch and a channel-attention branch in parallel; that is, it enhances the depth time-frequency feature map with the spatial-attention branch and the channel-attention branch separately. The image features extracted by channel attention reflect the correlation and importance among feature channels, while the image features extracted by spatial attention reflect the weight of feature differences in the spatial dimension, suppressing or strengthening features at different spatial positions.
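As a rough illustration of such a parallel weight-assignment module, the numpy sketch below rescales a (C, H, W) feature map with a channel-attention branch and a spatial-attention branch and fuses the two by point-wise addition. The sigmoid gating, mean pooling and additive fusion are all assumptions made for illustration; the patent does not give the exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(f):
    """Channel branch: global average pooling per channel yields one
    weight per channel that rescales the (C, H, W) feature map."""
    w = sigmoid(f.mean(axis=(1, 2)))         # (C,)
    return f * w[:, None, None]

def spatial_attention(f):
    """Spatial branch: mean over channels yields an (H, W) weight map
    that suppresses or strengthens each spatial position."""
    m = sigmoid(f.mean(axis=0))              # (H, W)
    return f * m[None, :, :]

def parallel_weight_assignment(f):
    """Fuse the two parallel branches by point-wise addition (assumed)."""
    return channel_attention(f) + spatial_attention(f)

feat = np.random.default_rng(1).standard_normal((8, 16, 16))  # (C, H, W)
out = parallel_weight_assignment(feat)
print(out.shape)  # (8, 16, 16): same shape as the input feature map
```

Because the sigmoid weights lie in (0, 1), each branch can only attenuate features relative to the input, so the branches act as soft gates over channels and spatial positions respectively.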
Further, after the time-frequency feature map is obtained, it is processed by a classifier to obtain a classification result indicating whether the glass curtain wall has defects. That is, the time-frequency feature map serves as a classification feature map that the classifier processes to generate a classification result representing whether the glass curtain wall is defective. In this way, the glass curtain wall can perform defect self-inspection, and the accuracy of its defect detection is improved.
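The classifier is not specified beyond taking the unfolded feature map as a classification feature vector, so the following is a minimal hypothetical linear-softmax sketch; the two-class mapping, with class 1 read as "defect present", and the random toy weights are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_map, W, b):
    """Unfold the time-frequency feature map into a vector V and apply
    a linear layer + softmax; class 1 = 'defect present' (assumed)."""
    v = feature_map.reshape(-1)              # the classification feature vector V
    probs = softmax(W @ v + b)
    return int(np.argmax(probs)), probs

rng = np.random.default_rng(5)
fmap = rng.standard_normal((8, 4, 4))        # toy (C, H, W) feature map
W = rng.standard_normal((2, 8 * 4 * 4))      # two classes: no-defect / defect
b = np.zeros(2)
label, probs = classify(fmap, W, b)
print(label, probs.sum())
```

In a trained system W and b would be learned from labeled sound recordings of intact and defective curtain-wall glass; here they are random placeholders.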
Particularly, in the technical solution of the present application, the parallel weight assignment module reflects the correlation and importance among feature channels through a channel attention map F3, and reflects the weight of spatial-dimension feature differences through a spatial attention map F2, so as to suppress or strengthen features at different spatial positions and thereby enhance the feature expression of the time-frequency feature map. However, when the spatial attention map F2 and the channel attention map F3 are fused by point-wise addition, their correlation may be low, because they are attention mechanisms over different dimensions. As a result, during classification the weight matrix of the classifier bears a relatively heavy adaptation burden with respect to the time-frequency feature map obtained by fusing F2 and F3, which affects the training speed of the classifier and the accuracy of the classification result of the classification feature vector.
Based on this, in this case, in the technical solution of the present application, the training of the classifier is performed by using iterative scene-dependent optimization of the classifier, which specifically includes:
Figure BDA0004004123550000061
v is a classification feature vector obtained after the time-frequency feature map is unfolded, M 1 And M 2 Is the weight matrix of the classifier before and after each iteration update, | · | | survival 0 Representing the zero norm of the vector.
That is, a measure of the scene-point correlation between the weight-matrix parameters before and after each iteration update of the classifier is used as a correction factor to optimize the class-probability representation of the classification feature vector V. The distributional similarity of the classifier's classification scenes thus supports a correlation description of V, improving the adaptability between the classifier's weight-matrix parameters and V along the direction of V, and thereby improving the training speed of the classifier and the accuracy of the classification result. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.
Based on this, the present application provides a building curtain wall for construction installation, it includes: the sound detection unit is used for acquiring sound detection signals collected by a sound detector arranged in the glass curtain wall; the time-frequency conversion unit is used for calculating a time-domain enhancement graph, a SIFT transformation time-frequency graph and an S transformation time-frequency graph of the sound detection signal; the time-frequency graph channel aggregation unit is used for aggregating the time-domain enhancement graph, the SIFT transformation time-frequency graph and the S transformation time-frequency graph according to channel dimensions to obtain a multi-channel time-frequency graph; the time-frequency graph feature extraction unit is used for enabling the multi-channel time-frequency graph to pass through a convolutional neural network model comprising a plurality of mixed convolutional layers and a parallel weight distribution module to obtain a time-frequency feature graph; and the defect self-checking result generating unit is used for enabling the time-frequency characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the glass curtain wall has defects or not.
FIG. 1 illustrates an application scenario of a building curtain wall for construction installation according to an embodiment of the application. As shown in fig. 1, in this application scenario, a detection sound wave is emitted by a sound wave generator (e.g., G illustrated in fig. 1) disposed outside a glass curtain wall (e.g., C illustrated in fig. 1), and a sound detection signal is collected by a sound detector (e.g., D illustrated in fig. 1) disposed inside the glass curtain wall. Then, the collected sound detection signal is input into a server (for example, S illustrated in fig. 1) deployed with a self-checking algorithm of the building curtain wall for construction installation, wherein the server can process the sound detection signal by using the self-checking algorithm of the building curtain wall for construction installation to generate a classification result for indicating whether the glass curtain wall has defects.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 2 illustrates a block diagram schematic view of an architectural curtain wall for construction installation according to an embodiment of the present application. As shown in fig. 2, the building curtain wall 100 for construction installation according to the embodiment of the present application includes: a sound detection unit 110 for acquiring a sound detection signal collected by a sound detector disposed in the glass curtain wall; a time-frequency conversion unit 120, configured to calculate a time-domain enhancement map, a SIFT transform time-frequency map, and an S transform time-frequency map of the sound detection signal; a time-frequency diagram channel aggregation unit 130, configured to aggregate the time-domain enhancement diagram, the SIFT transform time-frequency diagram, and the S transform time-frequency diagram according to a channel dimension to obtain a multi-channel time-frequency diagram; a time-frequency graph feature extraction unit 140, configured to pass the multi-channel time-frequency graph through a convolutional neural network model including a plurality of hybrid convolutional layers and a parallel weight assignment module to obtain a time-frequency feature graph; and a defect self-inspection result generating unit 150, configured to pass the time-frequency feature graph through a classifier to obtain a classification result, the classification result being used for indicating whether the glass curtain wall has defects.
In the embodiment of the present application, the sound detection unit 110 is configured to obtain the sound detection signal collected by a sound detector disposed in the glass curtain wall. It should be understood that when the glass curtain wall is defective, the detection sound wave emitted by the sound-wave generator changes characteristically as it passes through the glass curtain wall; therefore, whether the glass curtain wall is defective can be judged from the sound detection signal collected by the sound sensor disposed in it. Specifically, in the technical scheme of the application, an artificial-intelligence detection technique based on deep learning is adopted: the sound detector arranged in the glass curtain wall collects the sound detection signal as input data, from which multi-scale implicit feature information under different transform domains is extracted. In this process, channel attention is used to reflect the correlation and importance among feature channels, and spatial attention is used to reflect the weight of feature differences in the spatial dimension, suppressing or strengthening features at different spatial positions, so that the accuracy of defect judgment is improved. In this way, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby preventing accidents and ensuring the safe use of the building curtain wall.
Specifically, in the technical scheme of this application, a sound-wave generator arranged outside the glass curtain wall emits a detection sound wave, and the sound detector arranged in the glass curtain wall collects the sound detection signal. However, performance limitations of the sound detector introduce noise, and the sound detection signal itself is weak; both factors make defect judgment of the glass curtain wall technically difficult. Therefore, in the solution of the present application, different types of domain analysis also need to be performed on the sound detection signal at the input end.
In this embodiment of the application, the time-frequency conversion unit 120 is configured to calculate the time-domain enhancement map, the SIFT transform time-frequency map, and the S transform time-frequency map of the sound detection signal. It should be appreciated that, since the sound detection signal exhibits different patterns in different domains, performing different types of domain analysis on it essentially maps the signal into different domains, which enhances the data at the input end and increases its diversity. Specifically, a time-domain enhancement map, a SIFT transform time-frequency map and an S transform time-frequency map of the sound detection signal are calculated. In particular, the time-domain enhancement map is a two-dimensional image constructed from the original signal by re-sampling the signal width and scaling the signal height according to the signal's maximum-minimum relationship, using pure black as the ground color with the enhanced signal drawn in white. This counters the problem of the signal being weakened by external environmental noise and the instrument's own noise during acquisition. The SIFT transform described here is a scale-invariant feature transform that applies windowing in the time dimension and takes a Fourier transform of short segments of the signal, so that each spectrum corresponds to a specific time period of the signal; it is highly robust. Furthermore, since the SIFT feature is a local image feature, it is invariant to rotation, scaling and brightness changes, and remains fairly stable under viewpoint changes, affine transformation and noise.
The S transformation can provide a wide window in a low frequency band and a narrow window in a high frequency band, so that the characteristics of the sound detection signal in each frequency band can be reserved to the maximum extent, and the accuracy of subsequent classification is improved.
In a specific embodiment of the present application, the time-frequency conversion unit 120 is further configured to: carrying out S transformation on the sound detection signal by the following formula to obtain an S transformation time-frequency diagram;
wherein the formula is:
$$S(f,\tau)=\int_{-\infty}^{+\infty} x(t)\,\frac{|f|}{\sqrt{2\pi}}\,e^{-\frac{(\tau-t)^{2}f^{2}}{2}}\,e^{-i2\pi ft}\,dt$$
wherein S (f, τ) represents the S-transform time-frequency diagram, τ is a time shift factor, x (t) represents the sound detection signal, f represents frequency, and t represents time.
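A direct (non-optimized) discrete evaluation of this S-transform formula might look as follows; the sampling rate and frequency grid are illustrative assumptions:

```python
import numpy as np

def s_transform(x, fs, freqs):
    """Discrete S (Stockwell) transform, evaluated directly from
    S(f, tau) = sum_t x(t) * |f|/sqrt(2*pi) * exp(-(tau-t)^2 * f^2 / 2)
                * exp(-i*2*pi*f*t) * dt.
    Rows of the result index frequency f, columns the time shift tau."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x)) / fs
    dt = 1.0 / fs
    S = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        # Gaussian window: wide at low f, narrow at high f (std = 1/f in time)
        gauss = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 * f ** 2)
        kernel = gauss * (abs(f) / np.sqrt(2 * np.pi)) \
            * np.exp(-2j * np.pi * f * t)[None, :]
        S[i] = kernel @ x * dt  # integrate over t for every tau
    return S
```

For a pure tone, the row whose frequency matches the tone carries the dominant magnitude, which is what makes the map useful as a classification input.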
In this embodiment of the present application, the time-frequency diagram channel aggregation unit 130 is configured to aggregate the time-domain enhancement diagram, the SIFT transform time-frequency diagram, and the S transform time-frequency diagram according to channel dimensions to obtain a multi-channel time-frequency diagram. That is, the time-domain enhancement features, SIFT transform time-frequency features, and S transform time-frequency features under different types of domains are losslessly fused to obtain a multi-channel time-frequency graph carrying several different types of domain-analysis information.
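This channel-dimension aggregation amounts to stacking the three single-channel maps into one array; a minimal sketch (channels-first layout and the map size are assumptions):

```python
import numpy as np

# Placeholder single-channel maps standing in for the three domain analyses.
time_domain_map = np.zeros((64, 128))
sift_map = np.ones((64, 128))
s_map = np.full((64, 128), 2.0)

# Lossless fusion along the channel dimension (axis 0): each source map
# becomes one channel of the multi-channel time-frequency graph.
multi_channel = np.stack([time_domain_map, sift_map, s_map], axis=0)
```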
In this embodiment of the application, the time-frequency graph feature extraction unit 140 is configured to obtain a time-frequency feature graph by passing the multi-channel time-frequency graph through a convolutional neural network model comprising a plurality of hybrid convolutional layers and a parallel weight assignment module. That is, a convolutional neural network model, which performs excellently in implicit feature extraction, is used for feature mining of the multi-channel time-frequency graph. In particular, in the technical scheme of the application, the model combines the multi-scale perception characteristics of the hybrid convolutional layers with the feature-extraction capability of the parallel weight assignment module to improve feature expression and thereby obtain the time-frequency feature graph. Specifically, the multi-channel time-frequency graph is encoded by the plurality of hybrid convolutional layers, and the resulting feature map is input to the parallel weight assignment module to obtain the time-frequency feature map.
In a specific embodiment of the present application, the time-frequency diagram feature extraction unit 140 includes: a multi-scale convolutional coding subunit 141 and a multi-dimensional feature aggregation subunit 142. Wherein the multi-scale convolutional coding subunit 141 is configured to input the multi-channel time-frequency map into a plurality of hybrid convolutional layers of the convolutional neural network model to output a depth time-frequency feature map from a last hybrid convolutional layer of the plurality of hybrid convolutional layers; and the multidimensional feature aggregation subunit 142 is configured to input the depth time-frequency feature map into a parallel weight assignment module of the convolutional neural network model to obtain the time-frequency feature map, where the parallel weight assignment module includes a spatial attention branch and a channel attention branch that are parallel to each other.
In a specific embodiment of the present application, the multi-scale convolutional encoding subunit 141 includes: a first scale convolution secondary subunit 1411, configured to perform convolution encoding on the multi-channel time-frequency graph by using a first convolution kernel with a first size to obtain a first scale feature map; a second scale convolution secondary subunit 1412, configured to perform convolution encoding on the multi-channel time-frequency graph by using a second convolution kernel with a first void rate to obtain a second scale feature map; a third scale convolution secondary subunit 1413, configured to perform convolution encoding on the multi-channel time-frequency graph by using a third convolution kernel with a second void rate to obtain a third scale feature map; a fourth scale convolution secondary subunit 1414, configured to perform convolution encoding on the multi-channel time-frequency graph by using a fourth convolution kernel with a third void rate to obtain a fourth scale feature map, where the first convolution kernel, the second convolution kernel, the third convolution kernel, and the fourth convolution kernel have the same size, and the second convolution kernel, the third convolution kernel, and the fourth convolution kernel have different void rates; and a multi-scale aggregation secondary subunit 1415, configured to aggregate the first scale feature map, the second scale feature map, the third scale feature map, and the fourth scale feature map along a channel dimension to obtain an aggregated feature map.
Specifically, the convolutional neural network model uses a plurality of hybrid convolutional layers to process the multi-channel time-frequency graph and extract its multi-scale implicit features, thereby obtaining a depth time-frequency feature graph. Correspondingly, in a specific example of the present application, the hybrid convolution layer (MCL) comprises four parallel branches: a common convolution layer with a convolution kernel size of 3×3 and three hole convolution layers with a convolution kernel size of 3×3, which operate on the multi-channel time-frequency graph respectively. The expansion rates of the three hole-convolution branches are set to 2, 3, and 4 respectively; setting different expansion rates yields image information from different receptive fields, so that feature maps of different scales are obtained, the receptive field is expanded, and the information loss of down-sampling is avoided. The four branch feature maps are then fused, so that the module samples more densely and captures high-level features without introducing additional parameters.
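Under simplifying assumptions (single-channel input, cross-correlation convolution, "same" padding), the four-branch hybrid convolution layer can be sketched as:

```python
import numpy as np

def dilated_conv3x3(x, w, dilation):
    """'Same'-padded 3x3 single-channel dilated convolution
    (cross-correlation); dilation=1 is the common convolution."""
    h, wdt = x.shape
    pad = dilation
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i * dilation:i * dilation + h,
                                j * dilation:j * dilation + wdt]
    return out

def hybrid_conv_layer(x, weights):
    """Four parallel 3x3 branches with expansion rates 1, 2, 3, 4
    (1 = common convolution), fused along the channel dimension."""
    branches = [dilated_conv3x3(x, w, d)
                for w, d in zip(weights, (1, 2, 3, 4))]
    return np.stack(branches, axis=0)  # (4, H, W) fused branch feature maps
```

Because all branches share the 3×3 kernel size and only the dilation differs, the fused output widens the receptive field without adding parameters beyond the four kernels.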
In a specific embodiment of the present application, the multi-dimensional feature aggregation subunit 142 includes: a channel attention secondary subunit 1421, a spatial attention secondary subunit 1422, and a multi-dimensional attention aggregation secondary subunit 1423. The channel attention secondary subunit 1421 is configured to input the depth time-frequency feature map into a channel attention branch of the parallel weight assignment module to obtain a channel attention feature map; the spatial attention secondary subunit 1422 is configured to input the depth time-frequency feature map into a spatial attention branch of the parallel weight assignment module to obtain a spatial attention feature map; and the multidimensional attention clustering secondary subunit 1423 is configured to add the channel attention feature map and the spatial attention feature map by location point to obtain the time-frequency feature map.
In a specific embodiment of the present application, the channel attention secondary subunit 1421 includes: a global mean pooling tertiary sub-unit 14211, a normalized tertiary sub-unit 14212, and a channel attention applying tertiary sub-unit 14213. The global mean pooling three-stage subunit 14211 is configured to perform global mean pooling along a channel dimension on the depth time-frequency feature map to obtain a channel feature vector; the normalized tertiary subunit 14212 is configured to pass the channel feature vector through a Softmax function to obtain a normalized channel feature vector; and the channel attention applying third-level subunit 14213 is configured to weight the feature matrix of the depth time-frequency feature map along the channel dimension by using the feature value of each position in the normalized channel feature vector as a weight to obtain a channel attention feature map.
In a specific embodiment of the present application, the spatial attention secondary subunit 1422 includes: a convolutional encoding three-level subunit 14221, a probabilistic three-level subunit 14222, and a spatial attention applying three-level subunit 14223. The convolutional encoding three-level subunit 14221 is configured to perform convolutional encoding on the depth time-frequency feature map by using convolutional layers of the spatial attention branch of the parallel weight assignment module to obtain a convolutional feature map; the probabilistic three-stage subunit 14222 is configured to pass the spatial attention map through a Softmax function to obtain a spatial attention score map; and the spatial attention applying tertiary subunit 14223 is configured to multiply the spatial attention score map and the depth time-frequency feature map by a position point to obtain a spatial attention feature map.
It should be understood that debonding and falling of the glass and breakage of the glass (including unfilled corners and cracks) are common defect forms of a glass curtain wall, and debonding and falling are often caused by discoloration, bubbling, cracking, and debonding of the structural adhesive under the influence of the external environment and other factors. Therefore, when defect detection is performed on the glass curtain wall, attention should be focused on the implicit characteristic information in the spatial positions and channel dimensions, while useless interference features irrelevant to defect detection are ignored. Accordingly, in the technical scheme of the application, aiming at the problem of low target-detection precision caused by edge blurring in the depth time-frequency feature map, a parallel weight assignment module is used to perform feature enhancement on the depth time-frequency feature map. Specifically, the depth time-frequency feature map is input into the parallel weight assignment module of the convolutional neural network model to obtain the time-frequency feature map, which enhances effective feature representations, suppresses useless feature information, and improves the accuracy of subsequent classification.
In particular, the parallel weight assignment module comprises a spatial attention branch and a channel attention branch in parallel. That is, the module uses the two branches to respectively enhance the features related to glass curtain wall defect detection in the depth time-frequency feature map: the feature vector extracted by channel attention reflects the correlation and importance among feature channels, while the image features extracted by spatial attention reflect the weights of feature differences in the spatial dimension, so as to suppress or strengthen features at different spatial positions.
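A minimal single-example sketch of such a parallel weight assignment module is given below. The spatial branch's convolution layer is simplified to a learned per-channel weighted sum (an assumption, since the text does not fix that layer's configuration):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def parallel_weight_assignment(F, conv_w):
    """F: depth time-frequency feature map of shape (C, H, W).
    Channel branch: global mean pooling per channel -> Softmax ->
    weight each channel's feature matrix. Spatial branch: collapse
    channels with conv_w -> Softmax over positions -> multiply by
    position point. The two outputs are added by position point."""
    C, H, W = F.shape
    # channel attention branch
    ch_w = softmax(F.mean(axis=(1, 2)))          # normalized channel feature vector
    ch_map = F * ch_w[:, None, None]             # channel attention feature map
    # spatial attention branch
    conv = np.tensordot(conv_w, F, axes=([0], [0]))  # (H, W) "convolved" map
    score = softmax(conv.ravel()).reshape(H, W)      # spatial attention score map
    sp_map = F * score[None, :, :]               # spatial attention feature map
    return ch_map + sp_map                       # position-wise addition
```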
In this embodiment of the application, the defect self-inspection result generating unit 150 is configured to pass the time-frequency characteristic diagram through a classifier to obtain a classification result, where the classification result is used to indicate whether a defect exists in the glass curtain wall. That is, the time-frequency feature map is used as a classification feature map and classified by the classifier to generate a classification result representing whether the glass curtain wall has a defect. In this way, the glass curtain wall can perform defect self-inspection, and the accuracy of defect detection can be improved.
In a specific embodiment of the present application, the defect self-inspection result generating unit is further configured to: processing the time-frequency characteristic graph by using the classifier according to the following formula to obtain a first classification result;
wherein the formula is: o = softmax { (W) c ,B c ) L Project F), where Project (F) represents the projection of the time-frequency feature map as a vector, W c As a weight matrix, B c Representing a bias vector.
That is, the classifier first uses a fully-connected layer to perform fully-connected encoding on the time-frequency feature map, so as to make full use of the information at each position and reduce the map to a one-dimensional classification feature vector. Then, the Softmax function values of the one-dimensional classification feature vector are computed, that is, the probability that the classification feature vector belongs to each classification label, which in this embodiment includes "the glass curtain wall has a defect" (first label) and "the glass curtain wall has no defect" (second label). Finally, the label corresponding to the larger probability value is taken as the classification result.
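The projection, fully-connected encoding, and Softmax steps can be sketched as follows; the weight matrix and bias values are illustrative placeholders:

```python
import numpy as np

def classify(F, Wc, Bc):
    """Project(F): flatten the time-frequency feature map to a vector;
    apply the fully-connected layer (Wc, Bc); Softmax over the two
    labels; return the label with the larger probability."""
    v = F.ravel()                    # Project(F): map -> one-dimensional vector
    logits = Wc @ v + Bc             # fully-connected encoding
    e = np.exp(logits - logits.max())
    probs = e / e.sum()              # Softmax probability per label
    return int(np.argmax(probs)), probs
```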
In a specific embodiment of the present application, the system further comprises a training module 200 for training the convolutional neural network model and the classifier;
wherein the training module 200 comprises: a training data obtaining unit 210, configured to obtain training data, where the training data includes a training sound detection signal and a true value of whether the glass curtain wall has a defect; a training time-frequency conversion unit 220, configured to calculate a training time-domain enhancement diagram, a training SIFT transform time-frequency diagram, and a training S transform time-frequency diagram of the training sound detection signal; a training time-frequency diagram channel aggregation unit 230, configured to aggregate the training time-domain enhancement diagram, the training SIFT transform time-frequency diagram, and the training S transform time-frequency diagram according to channel dimensions to obtain a training multi-channel time-frequency diagram; a training time-frequency graph feature extraction unit 240, configured to pass the training multi-channel time-frequency graph through the convolutional neural network model including the multiple hybrid convolutional layers and the parallel weight assignment module to obtain a training time-frequency feature graph; a classification loss unit 250, configured to pass the training time-frequency feature map through the classifier to obtain a classification loss function value; and a training unit 260, configured to train the convolutional neural network model and the classifier in a gradient-descent back-propagation manner based on the classification loss function value, where in each iteration of the training, a training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated based on the weight matrix of the classifier before and after each iteration update.
In this embodiment, the training data obtaining unit 210, the training time-frequency conversion unit 220, the training time-frequency graph channel aggregation unit 230, and the training time-frequency graph feature extraction unit 240 are configured to obtain training data, where the training data includes a training sound detection signal and a true value of whether the glass curtain wall has a defect; then, a training time-domain enhancement graph, a training SIFT transform time-frequency graph, and a training S transform time-frequency graph of the training sound detection signal are calculated. Then, the training time-domain enhancement graph, the training SIFT transform time-frequency graph, and the training S transform time-frequency graph are aggregated according to channel dimensions to obtain a training multi-channel time-frequency graph, and the training multi-channel time-frequency graph is passed through the convolutional neural network model comprising the plurality of hybrid convolutional layers and the parallel weight assignment module to obtain a training time-frequency characteristic graph.
More specifically, in an embodiment of the present application, the classification loss unit 250 is configured to pass the training time-frequency feature map through the classifier to obtain a classification loss function value. Namely, the training time-frequency characteristic diagram is passed through the classifier to obtain a classification result, the classification result is compared with a real value of whether the glass curtain wall has defects or not, and a cross entropy value is calculated to serve as the classification loss function value.
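The cross-entropy computation described here can be sketched as follows (the label encoding, 1 = defect and 0 = no defect, is an assumption):

```python
import numpy as np

def classification_loss(probs, true_label):
    """Cross entropy between the classifier's Softmax output and the
    true value of whether the glass curtain wall has a defect."""
    return float(-np.log(probs[true_label]))
```

The loss is small when the classifier assigns high probability to the true label, and large otherwise, which is what drives the gradient-descent training below.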
More specifically, in an embodiment of the present application, the training unit 260 is configured to train the convolutional neural network model and the classifier through back propagation of gradient descent based on the classification loss function value, wherein in each iteration of the training, the training time-frequency feature vector developed from the training time-frequency feature map is iterated based on the weight matrix of the classifier before and after each iteration update. It can be understood that the trained convolutional neural network model can extract features that are beneficial for judging whether the glass curtain wall has a defect, and the trained classifier judges more accurately whether a defect exists in the glass curtain wall.
Particularly, in the technical solution of the present application, the parallel weight assignment module reflects the correlation and importance among feature channels through a channel attention map F3, and reflects the weights of spatial-dimension feature differences through a spatial attention map F2, so as to suppress or strengthen features at different spatial positions and thereby enhance the feature expression of the time-frequency feature map. However, when F2 and F3 are fused by position-wise addition, since they are based on attention mechanisms in different dimensions, the correlation between them may not be high. Consequently, when classifying with the classifier, the adaptation burden of the classifier's weight matrix with respect to the fused time-frequency feature map may be relatively heavy, which affects the training speed of the classifier and the accuracy of the classification result. Based on this, the technical solution of the present application adopts iterative scene-dependent optimization of the classifier during training.
In a specific embodiment of the present application, in each iteration of the training, the training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated based on the weight matrix of the classifier before and after each iteration update according to the following formula:

[formula as shown in Figure BDA0004004123550000141]

wherein V represents the training time-frequency feature vector obtained by expanding the training time-frequency feature map, M₁ and M₂ respectively represent the weight matrix of the classifier before and after each iteration update, ‖·‖₀ represents the zero norm of a vector, ⊕ represents position-wise addition, ⊖ represents position-wise subtraction, ⊗ represents matrix multiplication, and exp(·) represents an exponential operation.
That is, the measure of scene-point correlation of the weight matrix parameters before and after the update during each iteration of the classifier is used as a correction factor to optimize the class probability representation of the classification feature vector V. In this way, the distribution similarity of the classifier's classification scene provides support for the correlation description of V, improving the adaptability between the weight matrix parameters of the classifier and the classification feature vector V in the direction of V, and thereby improving the training speed of the classifier and the accuracy of the classification result. As a result, the glass curtain wall in the building curtain wall can perform defect self-inspection and raise an early warning when a defect is detected, thereby avoiding accidents and ensuring the safety of the building curtain wall in use.
In summary, based on the building curtain wall for construction and installation in the embodiment of the application, sound detection signals are collected as input data through a sound detector arranged in the glass curtain wall, then, an artificial intelligence detection technology based on deep learning is adopted to process the sound detection signals so as to extract multi-scale implicit characteristic information of the sound detection signals in different transform domains, and in the process, the channel implicit characteristic and the spatial implicit characteristic of the glass curtain wall are strengthened by utilizing channel attention and spatial attention so as to be beneficial to improving the precision of judging the defects of the glass curtain wall.
Exemplary method
FIG. 9 illustrates a flow chart of a self-inspection method for a construction installed building curtain wall according to an embodiment of the present application. As shown in fig. 9, the self-inspection method for the construction-installed building curtain wall according to the embodiment of the application comprises the following steps: s110, acquiring a sound detection signal collected by a sound detector arranged in the glass curtain wall; s120, calculating a time domain enhancement graph, a SIFT transformation time-frequency graph and an S transformation time-frequency graph of the sound detection signal; s130, aggregating the time domain enhancement graph, the SIFT transform time-frequency graph and the S transform time-frequency graph according to channel dimensions to obtain a multi-channel time-frequency graph; s140, passing the multi-channel time-frequency graph through a convolutional neural network model comprising a plurality of mixed convolutional layers and a parallel weight distribution module to obtain a time-frequency characteristic graph; and S150, passing the time-frequency characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for representing whether the glass curtain wall has defects or not.
Fig. 10 illustrates a schematic diagram of the system architecture of the self-inspection method for a construction-installed building curtain wall according to an embodiment of the present application. In this system architecture, firstly, the sound detection signal collected by the sound detector deployed in the glass curtain wall is acquired; then, the sound detection signal is subjected to time-domain enhancement, SIFT transformation, and S transformation respectively to obtain a time-domain enhancement graph, a SIFT transform time-frequency graph, and an S transform time-frequency graph of the sound detection signal. Then, the time-domain enhancement graph, the SIFT transform time-frequency graph, and the S transform time-frequency graph are aggregated according to channel dimensions to obtain a multi-channel time-frequency graph, and the multi-channel time-frequency graph is passed through a convolutional neural network model comprising a plurality of hybrid convolutional layers and a parallel weight assignment module to obtain a time-frequency characteristic graph. Finally, the time-frequency characteristic diagram is passed through a classifier to obtain a classification result, wherein the classification result is used for representing whether the glass curtain wall has defects.
FIG. 11 illustrates a flow chart of the training phase for training the convolutional neural network model and the classifier in the self-inspection method for the construction-installed building curtain wall according to the embodiment of the application. As shown in fig. 11, in a specific embodiment of the present application, a training phase for training the convolutional neural network model and the classifier is further included, wherein the training phase comprises: S210, acquiring training data, wherein the training data comprises training sound detection signals and true values of whether the glass curtain wall has defects; S220, calculating a training time-domain enhancement graph, a training SIFT transform time-frequency graph, and a training S transform time-frequency graph of the training sound detection signal; S230, aggregating the training time-domain enhancement graph, the training SIFT transform time-frequency graph, and the training S transform time-frequency graph according to channel dimensions to obtain a training multi-channel time-frequency graph; S240, passing the training multi-channel time-frequency graph through the convolutional neural network model comprising the plurality of hybrid convolutional layers and the parallel weight assignment module to obtain a training time-frequency characteristic graph; S250, passing the training time-frequency characteristic graph through the classifier to obtain a classification loss function value; and S260, training the convolutional neural network model and the classifier by gradient-descent back propagation based on the classification loss function value, wherein in each iteration of the training, the training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated based on the weight matrix of the classifier before and after each iteration update.
Fig. 12 is a schematic diagram illustrating the system architecture of the training stage for training the convolutional neural network model and the classifier in the self-inspection method for the construction-installed building curtain wall according to the embodiment of the application. As shown in fig. 12, in this training stage, first, training data is obtained, where the training data includes a training sound detection signal. Then, time-domain enhancement, SIFT transformation, and S transformation are respectively performed on the training sound detection signal to obtain a training time-domain enhancement diagram, a training SIFT transform time-frequency diagram, and a training S transform time-frequency diagram. Then, the training time-domain enhancement graph, the training SIFT transform time-frequency graph, and the training S transform time-frequency graph are aggregated according to channel dimensions to obtain a training multi-channel time-frequency graph, and the training multi-channel time-frequency graph is passed through the convolutional neural network model comprising the plurality of hybrid convolutional layers and the parallel weight assignment module to obtain a training time-frequency characteristic graph. Finally, the training time-frequency characteristic graph is passed through the classifier to obtain a classification loss function value, and the convolutional neural network model and the classifier are trained through back propagation of gradient descent based on the classification loss function value.
In a specific embodiment of the present application, in each iteration of the training, the training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated based on the weight matrix of the classifier before and after each iteration update according to the following formula:

[formula as shown in Figure BDA0004004123550000161]

wherein V represents the training time-frequency feature vector obtained by expanding the training time-frequency feature map, M₁ and M₂ respectively represent the weight matrix of the classifier before and after each iteration update, ‖·‖₀ represents the zero norm of a vector, ⊕ represents position-wise addition, ⊖ represents position-wise subtraction, ⊗ represents matrix multiplication, and exp(·) represents an exponential operation.
Here, it can be understood by those skilled in the art that the detailed operations of the respective steps in the self-inspection method for construction-installed building curtain walls described above have been described in detail in the description of the building curtain walls for construction-installation with reference to fig. 1 to 8, and thus, a repetitive description thereof will be omitted.

Claims (10)

1. A building curtain wall for construction installation, comprising:
the sound detection unit is used for acquiring sound detection signals collected by a sound detector arranged in the glass curtain wall;
the time-frequency conversion unit is used for calculating a time-domain enhancement graph, a SIFT transformation time-frequency graph and an S transformation time-frequency graph of the sound detection signal;
the time-frequency diagram channel aggregation unit is used for aggregating the time-domain enhancement diagram, the SIFT transform time-frequency diagram and the S transform time-frequency diagram according to channel dimensions to obtain a multi-channel time-frequency diagram;
the time-frequency graph feature extraction unit is used for enabling the multi-channel time-frequency graph to pass through a convolutional neural network model comprising a plurality of mixed convolutional layers and a parallel weight distribution module to obtain a time-frequency feature graph; and the defect self-inspection result generating unit is used for enabling the time-frequency characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the glass curtain wall has defects or not.
2. The building curtain wall for construction installation as claimed in claim 1, wherein the time-frequency conversion unit is further configured to: carrying out S transformation on the sound detection signal according to the following formula to obtain an S transformation time-frequency diagram;
wherein the formula is:
$$S(f,\tau)=\int_{-\infty}^{+\infty} x(t)\,\frac{|f|}{\sqrt{2\pi}}\,e^{-\frac{(\tau-t)^{2}f^{2}}{2}}\,e^{-i2\pi ft}\,dt$$
wherein S (f, τ) represents the S-transform time-frequency diagram, τ is a time shift factor, x (t) represents the sound detection signal, f represents frequency, and t represents time.
3. The building curtain wall for construction and installation as claimed in claim 2, wherein the time-frequency diagram feature extraction unit comprises:
a multi-scale convolutional coding subunit, configured to input the multi-channel time-frequency map into a plurality of hybrid convolutional layers of the convolutional neural network model to output a depth time-frequency feature map from a last hybrid convolutional layer of the plurality of hybrid convolutional layers; and the multi-dimensional feature aggregation subunit is used for inputting the depth time-frequency feature map into a parallel weight distribution module of the convolutional neural network model to obtain the time-frequency feature map, wherein the parallel weight distribution module comprises a space attention branch and a channel attention branch which are parallel.
4. The architectural curtain wall for construction installation of claim 3, wherein the multi-scale convolutional encoding subunit comprises:
the first scale convolution secondary subunit is used for carrying out convolution coding on the multi-channel time-frequency graph by using a first convolution kernel with a first size to obtain a first scale feature graph;
the second scale convolution secondary subunit is used for carrying out convolution coding on the multi-channel time-frequency graph by using a second convolution kernel with a first hole rate to obtain a second scale feature graph;
the third scale convolution secondary subunit is used for carrying out convolution coding on the multi-channel time-frequency graph by using a third convolution kernel with a second hole rate to obtain a third scale feature graph;
a fourth scale convolution secondary subunit configured to perform convolution encoding on the multichannel time-frequency graph by using a fourth convolution kernel with a third hole rate to obtain a fourth scale feature graph, wherein the first convolution kernel, the second convolution kernel, the third convolution kernel, and the fourth convolution kernel have the same size, and the second convolution kernel, the third convolution kernel, and the fourth convolution kernel have different hole rates; and the multi-scale aggregation secondary subunit is used for aggregating the first scale feature map, the second scale feature map, the third scale feature map and the fourth scale feature map along the channel dimension to obtain an aggregation feature map.
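The four parallel branches (one plain kernel plus three same-sized kernels with increasing hole rates) can be illustrated in NumPy; the 3×3 kernel size and hole rates 1–4 are assumptions, not claim limitations:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded single-channel 2-D convolution with hole (dilation) rate."""
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1                # effective receptive-field size
    pad = eff // 2
    xp = np.pad(x, pad)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + eff:rate, j:j + eff:rate] * kernel)
    return out

x = np.random.rand(16, 16)                  # stand-in multi-channel time-frequency map
k = np.ones((3, 3)) / 9.0                   # same kernel size in every branch
# branch 1: ordinary convolution (rate 1); branches 2-4: increasing hole rates
scale_maps = [dilated_conv2d(x, k, r) for r in (1, 2, 3, 4)]
aggregated = np.stack(scale_maps, axis=0)   # aggregate along the channel dimension
```

Because the kernels share a size while the hole rates differ, each branch sees a different receptive field over the same map, which is the multi-scale effect the claim describes.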
5. The architectural curtain wall for construction installation of claim 4, wherein the multi-dimensional feature aggregation sub-unit comprises:
the channel attention secondary subunit is used for inputting the depth time-frequency feature map into a channel attention branch of the parallel weight distribution module to obtain a channel attention feature map;
the spatial attention secondary subunit is used for inputting the depth time-frequency feature map into a spatial attention branch of the parallel weight distribution module to obtain a spatial attention feature map; and the multi-dimensional attention aggregation secondary subunit is used for adding the channel attention feature map and the space attention feature map according to position points to obtain the time-frequency feature map.
6. The architectural curtain wall for construction installation of claim 5, wherein the channel attention secondary subunit comprises:
the global mean pooling three-level subunit is used for performing global mean pooling along channel dimensions on the depth time-frequency feature map to obtain a channel feature vector;
the normalization three-level subunit is used for enabling the channel characteristic vector to pass through a Softmax function so as to obtain a normalization channel characteristic vector; and the channel attention applying three-level subunit is used for weighting the feature matrix of the depth time-frequency feature map along the channel dimension by taking the feature value of each position in the normalized channel feature vector as a weight so as to obtain the channel attention feature map.
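A minimal NumPy sketch of this channel-attention branch (the feature-map shape is an assumption):

```python
import numpy as np

def channel_attention(feat):
    """feat: (C, H, W). Weight each channel by its softmax-normalised global mean."""
    v = feat.mean(axis=(1, 2))             # global mean pooling -> channel vector (C,)
    e = np.exp(v - v.max())
    w = e / e.sum()                        # Softmax -> normalised channel feature vector
    return feat * w[:, None, None]         # weight each channel's feature matrix

feat = np.random.rand(8, 4, 4)             # stand-in depth time-frequency feature map
cam = channel_attention(feat)              # channel attention feature map
```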
7. The architectural curtain wall for construction installation of claim 6, wherein the space attention secondary subunit comprises:
the convolution coding three-level subunit is used for carrying out convolution coding on the depth time-frequency characteristic graph by using a convolution layer of a space attention branch of the parallel weight distribution module so as to obtain a convolution characteristic graph;
a probabilistic three-level subunit, configured to pass the convolution feature map through a Softmax function to obtain a spatial attention score map; and the spatial attention applying three-level subunit is used for multiplying the spatial attention score map and the depth time-frequency feature map by position points to obtain the spatial attention feature map.
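A corresponding NumPy sketch of the spatial-attention branch; modelling the (unspecified) convolution layer as a 1×1 convolution that collapses channels is an assumption:

```python
import numpy as np

def spatial_attention(feat, w1x1):
    """feat: (C, H, W); w1x1: (C,) weights of an assumed 1x1 convolution layer."""
    conv = np.tensordot(w1x1, feat, axes=(0, 0))   # convolution feature map (H, W)
    e = np.exp(conv - conv.max())
    score = e / e.sum()                            # Softmax over spatial positions
    return feat * score[None, :, :]                # multiplication by position points

feat = np.random.rand(8, 4, 4)                     # stand-in depth time-frequency map
sam = spatial_attention(feat, np.random.rand(8))   # spatial attention feature map
```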
8. The building curtain wall for construction and installation as claimed in claim 7, wherein the defect self-inspection result generation unit is further configured to: process the time-frequency feature map using the classifier according to the following formula to obtain the classification result;
wherein the formula is: O = softmax{(W_c, B_c) | Project(F)}, where O represents the classification result, Project(F) represents the projection of the time-frequency feature map into a vector, W_c is a weight matrix, and B_c is a bias vector.
9. The architectural curtain wall for construction installation of claim 1, further comprising a training module for training the convolutional neural network model and the classifier;
wherein the training module comprises:
the training data acquisition unit is used for acquiring training data, wherein the training data comprises a training sound detection signal and a true value of whether the glass curtain wall has defects;
the training time-frequency conversion unit is used for calculating a training time-domain enhancement graph, a training short-time Fourier transform (STFT) time-frequency graph and a training S transform time-frequency graph of the training sound detection signal;
the training time-frequency graph channel aggregation unit is used for aggregating the training time-domain enhancement graph, the training short-time Fourier transform (STFT) time-frequency graph and the training S transform time-frequency graph according to channel dimensions to obtain a training multi-channel time-frequency graph;
the training time-frequency graph feature extraction unit is used for enabling the training multichannel time-frequency graph to pass through the convolutional neural network model comprising the multiple mixed convolutional layers and the parallel weight distribution module to obtain a training time-frequency feature graph;
the classification loss unit is used for passing the training time-frequency feature map through the classifier to obtain a classification loss function value; and the training unit is used for training the convolutional neural network model and the classifier by gradient descent back propagation based on the classification loss function value, wherein in each iteration of the training, the training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated based on the weight matrices of the classifier before and after each iteration update.
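Outside the claim language, the training unit's gradient-descent back propagation can be sketched for the final linear classifier alone; the synthetic data, learning rate, and epoch count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: projected time-frequency feature vectors plus
# synthetic "defect / no defect" truth values (stand-ins for real annotations)
X = rng.normal(size=(64, 16))              # 64 samples, 16-dim projected features
y = (X[:, 0] > 0).astype(int)
W = np.zeros((2, 16)); b = np.zeros(2)

for epoch in range(200):                   # full-batch gradient descent
    logits = X @ W.T + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)   # softmax classifier output
    grad = p - np.eye(2)[y]                # d(cross-entropy loss) / d(logits)
    W -= 0.1 * (grad.T @ X) / len(X)       # back-propagate to the weight matrix
    b -= 0.1 * grad.mean(axis=0)

acc = ((X @ W.T + b).argmax(axis=1) == y).mean()
```

The claim's extra step, re-iterating the feature vector from the pre- and post-update weight matrices, would slot in after each weight update; it is omitted here because claim 10's exact formula is given only as an image.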
10. The architectural curtain wall for construction installation of claim 9, wherein in each iteration of the training, the training time-frequency feature vector obtained by expanding the training time-frequency feature map is iterated, based on the weight matrices of the classifier before and after each iteration update, according to the following formula:
Figure FDA0004004123540000031
wherein V represents the training time-frequency feature vector obtained by expanding the training time-frequency feature map, M_1 and M_2 respectively represent the weight matrices of the classifier before and after each iteration update, ‖·‖_0 represents the zero norm of a vector, ⊕ represents addition by position, ⊖ represents subtraction by position, ⊗ represents matrix multiplication, and exp(·) represents the exponential operation.
CN202211625374.3A 2022-12-16 2022-12-16 Building curtain wall for construction and installation Pending CN115853173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211625374.3A CN115853173A (en) 2022-12-16 2022-12-16 Building curtain wall for construction and installation

Publications (1)

Publication Number Publication Date
CN115853173A true CN115853173A (en) 2023-03-28

Family

ID=85673775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211625374.3A Pending CN115853173A (en) 2022-12-16 2022-12-16 Building curtain wall for construction and installation

Country Status (1)

Country Link
CN (1) CN115853173A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116392930A (en) * 2023-04-24 2023-07-07 浙江浙能迈领环境科技有限公司 Ship tail gas desulfurization process and system thereof
CN116392930B (en) * 2023-04-24 2023-08-29 浙江浙能迈领环境科技有限公司 Ship tail gas desulfurization process and system thereof
CN116838114A (en) * 2023-07-06 2023-10-03 同创华建集团有限公司 Steel construction and curtain intelligent monitoring system based on data analysis
CN116838114B (en) * 2023-07-06 2024-01-23 同创华建集团有限公司 Steel construction and curtain intelligent monitoring system based on data analysis
CN116678506A (en) * 2023-08-02 2023-09-01 国检测试控股集团南京国材检测有限公司 Wireless transmission heat loss detection device
CN116678506B (en) * 2023-08-02 2023-10-10 国检测试控股集团南京国材检测有限公司 Wireless transmission heat loss detection device

Similar Documents

Publication Publication Date Title
CN110188705B (en) Remote traffic sign detection and identification method suitable for vehicle-mounted system
CN115853173A (en) Building curtain wall for construction and installation
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN112926457B (en) SAR image recognition method based on fusion frequency domain and space domain network model
CN113674140B (en) Physical countermeasure sample generation method and system
CN107844743A (en) A kind of image multi-subtitle automatic generation method based on multiple dimensioned layering residual error network
CN111340034B (en) Text detection and identification method and system for natural scene
CN113505792B (en) Multi-scale semantic segmentation method and model for unbalanced remote sensing image
CN115564766A (en) Method and system for preparing volute casing seat ring of water turbine
CN116167989A (en) Intelligent production method and system for aluminum cup
CN115601318A (en) Intelligent production method and system for fast-absorption low-reverse-osmosis paper diaper
CN112270285B (en) SAR image change detection method based on sparse representation and capsule network
CN113989631A (en) Infrared image target detection network compression method based on convolutional neural network
CN115222998A (en) Image classification method
CN114220145A (en) Face detection model generation method and device and fake face detection method and device
Nie et al. Fast ship contour extraction in SAR images
Liu et al. Target detection of hyperspectral image based on faster R-CNN with data set adjustment and parameter turning
CN117475236B (en) Data processing system and method for mineral resource exploration
CN112446267B (en) Setting method of face recognition network suitable for front end
Li et al. Single image defogging method based on improved generative adversarial network
Zhao et al. Lightweight Smoke Recognition Based on Deep Convolution and Self-Attention
Liu et al. Hyperspectral Image Classification Based on Convolutional Neural Network Embedded with Attention Mechanism and Shadow Enhancement by Dynamic Stochastic Resonance
Liu et al. Research on traffic sign detection algorithm in complex weather
Tong et al. Patch-based Semantically Enhanced Network for IR Dim and Small Targets Background Suppression
Cao et al. TFCD-Net: Target and False Alarm Collaborative Detection Network for Infrared Imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination