Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The invention provides a technical scheme, based on a feature pyramid network, for enhancing the top-level feature information of subway tunnel surface disease images. As shown in fig. 1, the scheme mainly comprises the following aspects.
As shown in the lower left dashed box of fig. 1, 4 original feature layers, C2, C3, C4 and C5, are extracted using, for example, the feature pyramid structure of the deep residual network ResNet-101 as the basic feature extraction framework. C5 is the top-level feature map of the entire pyramid feature extraction network and has the largest number of feature channels (e.g., 2048 channels).
As shown in the upper left dashed box of fig. 1 and in fig. 2, in order to reduce the serious loss of channel semantic information that the top-level semantic feature map suffers during the top-down interlayer feature fusion of a conventional feature pyramid, the invention proposes to jointly apply a feature channel weight calculation module and a sample-label ground-truth map similarity distance calculation module to the top-level feature map of the feature pyramid, so as to realize enhanced learning of the top-level semantic feature information.
As shown in the dashed box on the right side of fig. 1, the top-level semantic feature enhancement method performs enhancement learning on the top-level feature map, replaces the original top-level feature map with the learning result, and then carries out top-down interlayer feature fusion. The image-label ground-truth map is converted by down-sampling to the size of each interlayer feature map. A deep learning error loss function is established using the cross-entropy function; the error between the pixel predictions of each feature map and the image-label ground-truth map is calculated and then back-propagated to update the network parameters of each module in each layer of the network.
Through continuous iterative learning, a deep learning detection and identification model for tunnel surface diseases is finally obtained by training, and this model can then be applied to the detection and identification of new tunnel surface disease images.
Specifically, referring to fig. 3, the method for enhancing the top-level semantic features of subway tunnel surface disease images includes the following steps.
Step S110: extract a multilayer feature map of the image using the pyramid structure model.
As shown in the left part of fig. 4, in one embodiment, the deep residual network ResNet-101 is used as the basic feature extraction module. ResNet-101 is composed of two basic blocks, Conv Block and Identity Block, connected alternately in series, and has 101 layers. The structure of the two basic blocks is shown in fig. 5, wherein Conv2D, BatchNorm and ReLu represent convolution, batch normalization and the ReLu activation function, respectively. After feature extraction by the ResNet-101 backbone, 4 original feature maps C2, C3, C4 and C5 are generated, wherein C2 is the bottom-level feature map and C5 is the top-level feature map. Taking an input image of size 1024 × 1024 × 3 as an example, the sizes of C2, C3, C4 and C5 are 256 × 256 × 256, 128 × 128 × 512, 64 × 64 × 1024 and 32 × 32 × 2048, respectively.
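The feature map sizes above follow directly from the ResNet stage strides (4, 8, 16, 32) and channel counts (256, 512, 1024, 2048). As an illustration only (not part of the patented method), the shapes can be computed as:

```python
# Illustrative sketch: derive the C2-C5 shapes for an h x w x 3 input
# from the standard ResNet-101 stage strides and channel counts.
def pyramid_shapes(h, w):
    """Return (height, width, channels) for C2, C3, C4, C5."""
    strides = [4, 8, 16, 32]
    channels = [256, 512, 1024, 2048]
    return [(h // s, w // s, c) for s, c in zip(strides, channels)]

for name, shape in zip(["C2", "C3", "C4", "C5"], pyramid_shapes(1024, 1024)):
    print(name, shape)  # e.g. C2 (256, 256, 256) ... C5 (32, 32, 2048)
```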
Step S120: perform enhancement learning on the top-level features.
As shown in the "top-level Feature-enhanced learning" section in the lower left side of fig. 4, the top-level Feature map C5 extracted from the pyramid backbone network is calculated by a Feature enhancement module (FE-Block) and assigned with weights for the respective channel features of C5. The FE-Block module mainly comprises a channel self-attention mechanism and a sample mark truth diagram.
In one embodiment, enhancement learning of the top-level features includes:
Step S121: enhance the top-level semantic feature map using a channel self-attention mechanism.
For the top-level feature map extracted by the pyramid backbone network, a channel self-attention calculation is used to learn the importance of the content of each channel; the specific structure is shown in fig. 6.
Specifically, the input feature (i.e., the input top-level feature map) is globally pooled, the relationship among the channels is then learned through a fully connected operation to obtain a different weight for each channel of the top-level feature map, and finally the channel weights are multiplied with the original input top-level feature map to obtain the channel-enhanced output. This process can be expressed as:

W = fc[gp(F)] (1)

wherein F is the top-level input feature, gp(·) is the global pooling layer, fc(·) is the fully connected layer, and W is the channel weight.
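Step S121 can be sketched in NumPy as follows. This is a minimal illustration of channel self-attention in the style of equation (1), not the patented implementation: the fully connected weights here are random stand-ins for parameters that would be learned by back-propagation, and the sigmoid gate is an assumed choice of activation.

```python
import numpy as np

def channel_attention(F, fc_w, fc_b):
    """F: feature map of shape (C, H, W). Returns the channel-weighted map W * F."""
    pooled = F.mean(axis=(1, 2))        # gp(F): global average pooling -> (C,)
    logits = fc_w @ pooled + fc_b       # fc(.): fully connected layer
    w = 1.0 / (1.0 + np.exp(-logits))   # sigmoid gate: one weight per channel
    return w[:, None, None] * F         # re-weight each channel of F

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
F = rng.standard_normal((C, H, W))
out = channel_attention(F, rng.standard_normal((C, C)), np.zeros(C))
print(out.shape)  # same shape as the input: (8, 4, 4)
```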
Step S122: enhance the top-level semantic feature map using the sample-label ground-truth map.
As shown in fig. 7, the similarity distance between the disease sample-label ground-truth map and each channel feature matrix of the top-level feature map is calculated, and the weights of all channels of the top-level feature map are determined from this distance. The calculation formula of the sample-label similarity weights is:

Lb = fs[γ(F, Fb)] (2)

wherein F is the top-level input feature, Fb is the down-sampled sample-label ground-truth map, γ(·,·) is a feature map Euclidean distance calculation function, and fs(·) is a feature map similarity weight coefficient normalization function, such as a cosine similarity function.
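A minimal NumPy sketch of equation (2) follows. The Euclidean distance per channel matches γ(·,·) above; the softmax-style normalization is an illustrative stand-in for fs(·), chosen so that smaller distances (channels more similar to the label map) receive larger weights.

```python
import numpy as np

def label_similarity_weights(F, Fb):
    """F: (C, H, W) top-level features; Fb: (H, W) down-sampled label map.
    Returns one normalized weight per channel."""
    d = np.sqrt(((F - Fb[None]) ** 2).sum(axis=(1, 2)))  # Euclidean distance per channel
    e = np.exp(-d + d.min())                             # smaller distance -> larger weight
    return e / e.sum()                                   # normalize so weights sum to 1

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 32, 32))
Fb = (rng.random((32, 32)) > 0.5).astype(float)          # binary ground-truth map
Lb = label_similarity_weights(F, Fb)
print(Lb.shape)  # one weight per channel: (8,)
```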
Step S123: generate the semantically enhanced feature map.
The calculated feature channel weight matrix and the target-label similarity weight matrix are multiplied together with the original top-level feature map C5 to generate the semantically enhanced feature map S5, and S5 replaces the original C5 in the subsequent top-down feature fusion process.
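Step S123 reduces to an element-wise composition of the two per-channel weight vectors with C5. In this sketch the weight vectors are random placeholders standing in for the outputs of steps S121 and S122:

```python
import numpy as np

def enhance_top_level(C5, W, Lb):
    """C5: (C, H, W) original top-level map; W, Lb: (C,) per-channel weights
    from the channel self-attention and label-similarity modules. Returns S5."""
    return (W * Lb)[:, None, None] * C5   # joint per-channel re-weighting

rng = np.random.default_rng(2)
C5 = rng.standard_normal((2048, 32, 32))  # top-level map for a 1024x1024 input
W = rng.random(2048)                      # placeholder for step S121 output
Lb = rng.random(2048)                     # placeholder for step S122 output
S5 = enhance_top_level(C5, W, Lb)
print(S5.shape)  # S5 keeps the shape of C5: (2048, 32, 32)
```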
Step S130: perform top-down interlayer feature fusion.
After the C2-C4 feature maps extracted by the pyramid backbone network and the top-level feature map S5 enhanced by top-level semantic features are obtained, the number of channels of all features is reduced to 256 (P5 being generated by the channel dimensionality reduction of S5), and then top-down feature fusion is performed. The specific process is shown in the "feature fusion" part of fig. 4 and comprises the following steps:
Step S131: first, P5 is upsampled so that its size matches that of the channel-reduced C4 feature map (64 × 64 × 256), and the two are added to generate P4;

Step S132: P4 is upsampled to the size of the channel-reduced C3 feature map (128 × 128 × 256), and the two are added to generate P3;

Step S133: P3 is upsampled to the size of the channel-reduced C2 feature map (256 × 256 × 256), and the two are added to generate P2.
Finally, P2, P3, P4 and P5 after interlayer feature fusion are used as the prediction target feature maps output by the whole deep learning network.
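Steps S131-S133 can be sketched as follows. This toy NumPy version uses a random projection in place of the learned 1×1 convolution for channel reduction, nearest-neighbour repetition for the ×2 upsampling, and shrunken channel counts for speed; it illustrates the shape bookkeeping of the fusion, not the trained network.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1x1(F, out_c):
    """Channel dimensionality reduction: a 1x1 convolution is a per-pixel
    linear map over channels (weights here are random stand-ins)."""
    w = rng.standard_normal((out_c, F.shape[0])) * 0.01
    return np.tensordot(w, F, axes=1)                 # (out_c, H, W)

def upsample2x(F):
    """Nearest-neighbour x2 upsampling."""
    return F.repeat(2, axis=1).repeat(2, axis=2)

# Toy pyramid with the same stride pattern as C2-C5 (channels shrunk for speed).
C2, C3, C4, S5 = (rng.standard_normal((c, s, s))
                  for c, s in [(32, 64), (64, 32), (128, 16), (256, 8)])
P5 = conv1x1(S5, 16)
P4 = upsample2x(P5) + conv1x1(C4, 16)   # step S131
P3 = upsample2x(P4) + conv1x1(C3, 16)   # step S132
P2 = upsample2x(P3) + conv1x1(C2, 16)   # step S133
print([p.shape for p in (P2, P3, P4, P5)])
```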
Step S140: perform training with the set error loss function as a constraint.
The predicted feature maps P2, P3 and P4 generated by the feature pyramid structure, together with the feature-enhanced, channel-reduced top-level feature map P5, are used as the reference for calculating the error loss function of the feature maps of the whole network. Each predicted feature map is enlarged to the size of the original input image, and the error loss corresponding to each predicted feature map is calculated by the cross-entropy function:

l(y, P) = -Σ[y·log(P) + (1 - y)·log(1 - P)] (3)

wherein y is the sample-labeled binary image of the input image, P is a predicted target feature map generated by the pyramid structure, and the sum runs over all pixels. From equation (3), the overall error loss function generated by the predicted feature maps for a training sample can be determined as:

L = l(y, P2) + l(y, P3) + l(y, P4) + l(y, P5) (4)

Equation (4) can be generally expressed as:

L = Σi=2..N l(y, Pi) (5)

wherein N is the number of predicted feature maps plus 1.
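The loss in equations (3)-(5) can be sketched as below. This assumes, for brevity, that the predicted maps are already at the label size (the enlargement step is omitted) and that predictions are probabilities in (0, 1); the clipping constant is a numerical-stability assumption, not part of the formula.

```python
import numpy as np

def cross_entropy(y, p, eps=1e-7):
    """Per-map cross-entropy loss, equation (3).
    y: binary label map; p: predicted probabilities, same shape."""
    p = np.clip(p, eps, 1 - eps)          # avoid log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

def total_loss(y, preds):
    """Total loss over the predicted maps P2..P5, equations (4)/(5)."""
    return sum(cross_entropy(y, p) for p in preds)

rng = np.random.default_rng(4)
y = (rng.random((64, 64)) > 0.5).astype(float)  # sample-labeled binary image
preds = [rng.random((64, 64)) for _ in range(4)]  # stand-ins for P2..P5
print(total_loss(y, preds))
```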
Based on the error loss function and through continuous iterative learning, the finally trained deep learning detection and identification model for tunnel surface diseases can be applied to the detection and identification of new tunnel surface disease images. The identified tunnel diseases include, but are not limited to, clearance intrusion due to deformation, cracks, water leakage, slab staggering, chipping, collapse, foundation mud pumping, settlement, bottom heave, voids behind the lining, and the like.
In summary, the present invention focuses on the reinforcement learning of the feature pyramid top feature map in the deep learning network, and compared with the prior art, the present invention has at least the following advantages:
1) The top-level feature map of a conventional feature pyramid network, having undergone convolution and pooling multiple times, is small in scale, large in channel count, and rich in semantic feature information. In feature fusion between pyramid layers, the top-level feature map must undergo channel dimensionality reduction, so a large amount of feature information useful for identifying disease objects is lost. The invention uses a feature channel weight calculation module to compute the importance of each channel of the top-level feature map, so that important feature channels are more easily retained and background clutter is suppressed.
2) The boundary of a water leakage disease exhibits obvious, regular leakage characteristics, and because this boundary is easily lost in existing deep learning disease detection methods, detection accuracy is reduced. The invention further calculates the similarity distance between the sample-label ground-truth map and the top-level semantic feature map, thereby strengthening the weights that the deep learning network model assigns to the key feature channels of the water leakage region and reducing the loss of feature information over the whole water leakage region during pyramid feature fusion.
It should be noted that, without departing from the spirit and scope of the present invention, those skilled in the art may make appropriate changes or modifications to the above-described embodiments. For example, besides using a deep residual network as the basic feature extraction module, other types of network models may also be used, and the present invention does not limit the number of layers from which feature maps are extracted, the size of convolution kernels, the dimension of feature maps, and the like.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.