CN117037143A - Prestress component performance data analysis method and system based on image processing - Google Patents


Info

Publication number
CN117037143A
CN117037143A (application CN202311256873.4A)
Authority
CN
China
Prior art keywords
image
component
data
training data
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311256873.4A
Other languages
Chinese (zh)
Other versions
CN117037143B (en)
Inventor
万俊飞
杨尚荣
段鑫朋
李云鹏
高洋
田淇
吴志翰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Special Economic Zone Construction Engineering Technology Group Shengteng Technology Co ltd
Original Assignee
Shenzhen Special Economic Zone Construction Engineering Technology Group Shengteng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Special Economic Zone Construction Engineering Technology Group Shengteng Technology Co ltd
Priority to CN202311256873.4A
Publication of CN117037143A
Application granted
Publication of CN117037143B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 20/64: Scenes; scene-specific elements; type of objects; three-dimensional objects
    • G06F 30/20: Computer-aided design [CAD]; design optimisation, verification or simulation
    • G06N 3/0455: Neural networks; auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/048: Neural networks; activation functions
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/54: Extraction of image or video features relating to texture
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/764: Recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Recognition using pattern recognition or machine learning; neural networks
    • G06F 2119/14: Force analysis or force optimisation, e.g. static or dynamic forces
    • Y02P 90/30: Climate change mitigation technologies in production; computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a method and system for analyzing the performance data of prestressed components based on image processing. Component three-dimensional image training data of each sample prestressed component are partitioned by image blocking into a plurality of component image block training data, and key image connected domain data are extracted. A component performance estimation network then extracts semantic and non-semantic performance vectors from the component image block training data, fuses the non-semantic performance vectors with the semantic performance vectors, and performs performance estimation on the component image block training data to obtain component performance estimation indexes. A component performance estimation error value is calculated from the difference between the component performance marking indexes and the component performance estimation indexes, and the component performance estimation network is optimized accordingly to improve prediction accuracy, so that component performance estimation tasks can be executed conveniently and both the accuracy and the efficiency of performance estimation can be improved.

Description

Prestress component performance data analysis method and system based on image processing
Technical Field
The embodiment of the application relates to the technical field of engineering, in particular to a prestress component performance data analysis method and system based on image processing.
Background
A prestressed member, i.e., a prestressed concrete structure, is artificially compressed before the structural member is subjected to an external force load, so that the resulting prestress state reduces or counteracts the tensile stress caused by the external load. In other words, the relatively high compressive strength of the concrete is used to make up for its deficiency in tensile strength, delaying cracking of the concrete in the tension zone. Because the prestress is achieved by tensioning the steel reinforcement, structures made of prestressed concrete are also called prestressed reinforced concrete structures.
In the engineering field, the performance estimation and monitoring of prestressed components are of great significance. Traditional component performance evaluation methods rely mainly on experimental testing and numerical simulation analysis, which generally require a great deal of time, resources, and cost, and are limited in accuracy and efficiency.
Disclosure of Invention
In order to overcome at least the above-mentioned shortcomings in the prior art, an object of the embodiments of the present application is to provide a method and a system for analyzing the performance data of prestressed components based on image processing.
According to an aspect of the embodiment of the present application, there is provided a method for analyzing performance data of a prestressed member based on image processing, including:
Image blocking is carried out on the component three-dimensional image training data of the prestressed component of each sample to generate a plurality of component image block training data, and key image connected domain data of each component image block training data are extracted;
executing semantic performance vector extraction based on corresponding key image connected domain data and a first gating unit on each component image block training data according to a component performance estimation network to generate semantic performance vectors of each component image block training data, wherein the first gating unit is configured to select feature inflow of the key image connected domain data in the semantic performance vector extraction, and the semantic performance vectors comprise at least one of crack feature vectors, prestressed tendon feature vectors, component size feature vectors and destruction mode feature vectors;
performing non-semantic performance vector extraction based on a second gating unit on key image connected domain data of each component image block training data according to the component performance estimation network, and generating a non-semantic performance vector of each component image block training data, wherein the second gating unit is configured to select feature inflow of the key image connected domain data in the non-semantic performance vector extraction, and the non-semantic performance vector comprises at least one of a strain distribution feature vector, a component geometric feature vector and a damaged area feature vector;
Converging the non-semantic performance vector of each component image block training data and the semantic performance vector of each component image block training data according to the component performance estimation network, and performing component performance estimation based on the corresponding convergence vector on each component image block training data to generate a component performance estimation index of each component image block training data;
calculating a component performance estimation error value according to distinguishing information between component performance marking indexes of the component image block training data and component performance estimation indexes of the component image block training data, and optimizing the component performance estimation network according to the component performance estimation error value.
In a possible implementation manner of the first aspect, the extracting key image connected domain data of each component image block training data includes:
acquiring target segmentation image data of the three-dimensional image training data of the component in each image segmentation dimension;
blending the target segmentation image data of the component three-dimensional image training data in a plurality of image segmentation dimensions to generate key image connected domain data of the component three-dimensional image training data;
And extracting the key image connected domain data of the component image block training data from the key image connected domain data of the component three-dimensional image training data.
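The patent does not fix a concrete algorithm for locating connected domains. As a hedged illustration, the sketch below uses a standard 4-connected flood-fill labeling over a binary mask; the function name `connected_domains` and the use of a 2-D mask (rather than the full 3-D volume) are assumptions made for brevity.

```python
from collections import deque

def connected_domains(mask):
    """Label 4-connected regions of nonzero pixels in a 2-D binary mask.

    Returns a list of pixel-coordinate lists, one per connected domain.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    domains = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill from this seed pixel.
                queue, region = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                domains.append(region)
    return domains

mask = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
print(len(connected_domains(mask)))  # two separate domains
```

In practice a library labeling routine over the 3-D volume would replace this loop; the sketch only shows the connectivity idea.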
In a possible implementation manner of the first aspect, the blending the target segmented image data of the component three-dimensional image training data in a plurality of image segmentation dimensions to generate key image connected domain data of the component three-dimensional image training data includes:
acquiring one or more key stress areas of the three-dimensional image training data of the component;
for each image segmentation dimension, converging the target segmentation image data of the three-dimensional image training data of the component in the image segmentation dimension corresponding to the one or more key stress areas to generate target segmentation image data of each key stress area;
the following steps are performed for each of the image segmentation dimensions:
performing regularization conversion on target segmented image data of the image segmentation dimension in each key stress region to generate regularized image features of the image segmentation dimension in each key stress region;
the following steps are performed for each key stress region:
Collecting regularized image features of the plurality of image segmentation dimensions in the key stress area to generate collected image features of the three-dimensional image training data of the component in the key stress area;
and forming the key image connected domain data of the three-dimensional image training data of the component by the convergent image features of the key stress areas.
In a possible implementation manner of the first aspect, the aggregating the target segmented image data of the three-dimensional image training data of the component in the image segmentation dimension corresponding to the one or more key stress regions, generating target segmented image data of each of the key stress regions, includes:
extracting target segmented image data at each key stress pixel cell region from the component three-dimensional image training data in the target segmented image data of the image segmentation dimension;
and converging target segmentation image data of all the key stress pixel unit areas belonging to the key stress areas aiming at the key stress areas to generate target segmentation image data corresponding to the key stress areas.
In a possible implementation manner of the first aspect, the regularizing the target segmented image data of the image segmentation dimension in each of the key stress regions to generate regularized image features of the image segmentation dimension in each of the key stress regions includes:
Obtaining salient image features in target segmentation image data of a plurality of key stress areas, wherein the salient image features are used for representing image features, wherein differences between the image features and other image features with the same attribute in the target segmentation image data are larger than set differences;
and regarding each key stress region, taking the multiplication characteristic of the target segmentation image data of the key stress region and the saliency image characteristic as the regularized image characteristic of the key stress region.
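As a minimal sketch of this regularization step, assuming both the region's target segmented image data and the salient image features are flattened to equal-length vectors, the "multiplication characteristic" can be read as an elementwise product:

```python
def regularize_region(region_data, salient_features):
    """Elementwise product of a key stress region's target segmented image
    data with the salient image features (both as flat, equal-length lists).
    Zeros in the salient features suppress non-salient positions.
    """
    assert len(region_data) == len(salient_features)
    return [a * b for a, b in zip(region_data, salient_features)]

features = regularize_region([0.2, 0.5, 1.0], [1.0, 0.0, 0.5])
print(features)  # [0.2, 0.0, 0.5]
```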
In a possible implementation manner of the first aspect, the aggregating the regularized image features of the plurality of image segmentation dimensions in the key stress region to generate an aggregate image feature of the component three-dimensional image training data in the key stress region includes:
extracting extremum image features of the regularized image features of the image segmentation dimensions in the key stress region to generate extremum regularized image features;
carrying out mean value calculation on regularized image features of the image segmentation dimensions in the key stress areas to generate regularized image average features;
and carrying out mean value calculation on the extremum regularized image features and the regularized image average features to generate the converged image features of the component three-dimensional image training data in the key stress region.
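The three sub-steps above (elementwise extremum across dimensions, elementwise mean across dimensions, then the mean of those two intermediate results) can be sketched as follows; treating each image segmentation dimension's regularized features as a flat vector is an assumption made for illustration:

```python
def aggregate_region(dim_features):
    """Fuse per-dimension regularized feature vectors for one key stress
    region (one vector per image segmentation dimension).

    Step 1: elementwise extremum (maximum) across dimensions.
    Step 2: elementwise mean across dimensions.
    Step 3: mean of the two intermediate vectors.
    """
    n = len(dim_features)
    extremum = [max(col) for col in zip(*dim_features)]
    average = [sum(col) / n for col in zip(*dim_features)]
    return [(e + a) / 2.0 for e, a in zip(extremum, average)]

print(aggregate_region([[1.0, 4.0], [3.0, 2.0]]))  # [2.5, 3.5]
```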
In a possible implementation manner of the first aspect, the performing, according to the component performance estimation network, semantic performance vector extraction based on corresponding key image connected domain data and a first gating unit on each component image block training data, to generate a semantic performance vector of each component image block training data includes:
the following steps are carried out on each component image block training data according to the component performance estimation network:
extracting semantic related feature vectors of the component image block training data to generate first visual detection features and first non-visual detection features of the component image block training data;
when a first gating node of the first gating unit is activated, performing first visual detection on the component image block training data to generate a second visual detection feature, and performing first non-visual detection on the component image block training data to generate a second non-visual detection feature;
deep learning feature extraction is carried out on the key image connected domain data, and a first image deep learning feature is generated;
performing region weight distribution on the first visual detection feature and the first image deep learning feature to generate a first region attention feature sequence, and performing region weight distribution on the first non-visual detection feature and the first image deep learning feature to generate a second region attention feature sequence;
Performing association configuration on the first region attention feature sequence and the second region attention feature sequence to generate a first target attention feature sequence, and converging the first target attention feature sequence, the second non-visual detection feature and the second visual detection feature to generate interaction features of the component image block training data;
performing relevance configuration on the first visual detection feature, the first non-visual detection feature and the interaction feature to generate a target detection feature of the component image block training data;
performing first self-coding on target detection characteristics of the component image block training data to generate a first self-coding vector;
performing peak feature downsampling on the first self-coded vector to generate peak feature downsampling data, and performing mean feature downsampling on the first self-coded vector to generate mean feature downsampling data;
and adding the peak characteristic downsampling data and the mean characteristic downsampling data, and activating the added characteristic downsampling data to generate the semantic performance vector of the component image block training data.
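A hedged reading of these final steps is max pooling (peak feature downsampling) and average pooling (mean feature downsampling) over the first self-coding vector, elementwise addition of the two pooled results, and a ReLU-style activation. The window size and the choice of ReLU are assumptions; the patent does not specify either.

```python
def semantic_vector(encoded, window=2):
    """Pool a 1-D self-coding vector with both max (peak) and mean windows,
    add the two pooled vectors elementwise, then apply a ReLU activation."""
    peaks = [max(encoded[i:i + window])
             for i in range(0, len(encoded), window)]
    means = [sum(encoded[i:i + window]) / len(encoded[i:i + window])
             for i in range(0, len(encoded), window)]
    return [max(0.0, p + m) for p, m in zip(peaks, means)]

print(semantic_vector([1.0, 3.0, -2.0, -4.0]))  # [5.0, 0.0]
```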
In a possible implementation manner of the first aspect, the second gating unit includes a second gating node and a third gating node, the performing, according to the component performance estimation network, non-semantic performance vector extraction based on the second gating unit on key image connected domain data of each component image block training data, generating a non-semantic performance vector of each component image block training data includes:
the following steps are carried out on the key image connected domain data of each component image block training data according to the component performance estimation network:
when the second gating node is in an activated state, carrying out combined feature detection on the key image connected domain data to generate a first combined feature detection result, wherein a strain distribution feature vector is calculated by analyzing pixel value differences or gray level changes in the key image connected domain data, a component geometric feature vector is determined by extracting area, perimeter, roundness, aspect ratio features of component image block training data and relative position information of connected domains, and a damaged area feature vector is extracted by analyzing pixel value distribution or texture features of a damaged area, wherein the texture features comprise area, shape, texture statistical information and boundary shape features of the damaged area;
And obtaining a gating parameter influence coefficient value of the third gating node, and weighting the gating parameter influence coefficient value and the first combined feature detection result to generate a non-semantic performance vector of the component image block training data.
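Assuming the gating parameter influence coefficient value is a scalar and "weighting" means scaling the first combined feature detection result by it (the patent leaves both open), the step reduces to:

```python
def gated_nonsemantic_vector(gate_coefficient, combined_features):
    """Scale the first combined feature detection result by the gating
    parameter influence coefficient of the third gating node."""
    return [gate_coefficient * f for f in combined_features]

print(gated_nonsemantic_vector(0.5, [2.0, 4.0, 6.0]))  # [1.0, 2.0, 3.0]
```

A coefficient near zero thus suppresses the non-semantic branch, which is consistent with the gating units being used to switch feature inflow on and off.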
In a possible implementation manner of the first aspect, the method further includes:
acquiring three-dimensional image data of a target component, and performing image blocking on the three-dimensional image data of the target component to generate a plurality of target component image block data;
when the target component image block data has key image connected domain data, forward propagating each target component image block data and the corresponding key image connected domain data through the optimized component performance estimation network to generate the component performance estimation index of each target component image block data;
in this case, the first gating unit and the second gating unit for the key image connected domain data in the optimized component performance estimation network are in an activated state;
when the target component image block data does not have key image connected domain data, forward propagating each target component image block data through the optimized component performance estimation network to generate the component performance estimation index of each target component image block data;
in this case, the first gating unit and the second gating unit for the key image connected domain data in the optimized component performance estimation network are in an inactive state.
According to one aspect of an embodiment of the present application, there is provided an image processing-based pre-stress member performance data analysis system including a processor and a machine-readable storage medium having stored therein machine-executable instructions loaded and executed by the processor to implement the image processing-based pre-stress member performance data analysis method in any one of the foregoing possible implementations.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations of the above aspects.
According to the technical solution provided by the embodiments of the present application, component three-dimensional image training data of each sample prestressed component are partitioned by image blocking into a plurality of component image block training data, and key image connected domain data are extracted. The semantic and non-semantic performance vectors of the component image block training data are then extracted by the component performance estimation network, the non-semantic performance vectors are fused with the semantic performance vectors, and the component performance estimation network performs performance estimation on the component image block training data to obtain component performance estimation indexes. A component performance estimation error value is calculated from the difference between the component performance marking indexes and the component performance estimation indexes, and the component performance estimation network is optimized to improve prediction accuracy, so that component performance estimation tasks can be executed conveniently. In this way, once key image connected domain data have been extracted from the component three-dimensional image training data, performance estimation that fuses semantic and non-semantic performance vectors improves both the accuracy and the efficiency of performance estimation, which has important application value for the performance estimation and monitoring of prestressed components.
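To make the training loop concrete, the sketch below pairs a mean squared error between marking indexes (labels) and estimation indexes (predictions) with gradient descent on a one-parameter linear estimator. The linear estimator, the learning rate, and the loss choice are all illustrative assumptions; the actual network and optimizer are not specified at this level of the patent.

```python
def estimation_error(marked, estimated):
    """Mean squared error between component performance marking indexes
    (labels) and component performance estimation indexes (predictions)."""
    n = len(marked)
    return sum((m - e) ** 2 for m, e in zip(marked, estimated)) / n

def optimize_step(weight, inputs, marked, lr=0.1):
    """One gradient-descent step for a scalar linear estimator y = w * x,
    standing in for the component performance estimation network."""
    n = len(inputs)
    grad = sum(2.0 * (weight * x - m) * x
               for x, m in zip(inputs, marked)) / n
    return weight - lr * grad

w = 0.0
for _ in range(200):
    w = optimize_step(w, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))  # the estimator recovers the underlying ratio 2.0
```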
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the accompanying drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a method for analyzing performance data of a prestressed component based on image processing according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a prestress component performance data analysis system based on image processing for implementing the prestress component performance data analysis method based on image processing according to an embodiment of the application.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application and is provided in the context of a particular application and its requirements. It will be apparent to those having ordinary skill in the art that various changes can be made to the disclosed embodiments and that the general principles defined herein may be applied to other embodiments and applications without departing from the principles and scope of the application. Therefore, the present application is not limited to the described embodiments, but is to be accorded the widest scope consistent with the claims.
Fig. 1 is a flowchart of a method for analyzing performance data of a prestressed component based on image processing according to an embodiment of the present application, and the method for analyzing performance data of a prestressed component based on image processing is described in detail below.
Step S110, image blocking is carried out on the component three-dimensional image training data of the prestressed component of each sample to generate a plurality of component image block training data, and key image connected domain data of each component image block training data are extracted.
In this embodiment, assuming that the load-bearing capacity of a concrete bridge is being evaluated, three-dimensional image data of a plurality of concrete bridges may be collected. First, a three-dimensional image of each bridge is segmented, and a plurality of component image segmentation training data is generated. For example, each pier may be divided into different blocks, or the deck plate may be divided into grid-like blocks, which may include different areas of the beam, such as a bearing area, a mid-span area, etc., but is not limited thereto.
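The blocking step itself can be sketched as splitting a three-dimensional image array into non-overlapping sub-volumes; cubic blocks and evenly divisible shapes are simplifying assumptions for this sketch.

```python
def block_3d(volume, block):
    """Split a 3-D image array (nested lists, each axis divisible by
    `block`) into non-overlapping block x block x block sub-volumes."""
    d, h, w = len(volume), len(volume[0]), len(volume[0][0])
    blocks = []
    for z in range(0, d, block):
        for y in range(0, h, block):
            for x in range(0, w, block):
                blocks.append([[row[x:x + block]
                                for row in plane[y:y + block]]
                               for plane in volume[z:z + block]])
    return blocks

# A 2 x 4 x 4 toy volume splits into 1 * 2 * 2 = 4 blocks of size 2^3.
volume = [[[v for v in range(4)] for _ in range(4)] for _ in range(2)]
print(len(block_3d(volume, 2)))  # 4
```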
Step S120, executing, according to the component performance estimation network, semantic performance vector extraction based on the corresponding key image connected domain data and the first gating unit on each component image block training data, and generating a semantic performance vector of each component image block training data.
In this embodiment, the first gating unit is configured to select a feature inflow of the key image connected domain data in the semantic performance vector extraction, where the semantic performance vector includes at least one of a crack feature vector, a tendon feature vector, a component size feature vector, and a failure mode feature vector.
For example, the crack feature vector can be obtained by analyzing crack information in the key image connected domain data to extract features such as the number, length, and width of cracks. For instance, the number of cracks in each key image connected domain may be counted, and their average length and width calculated.
The prestressed tendon feature vector can be extracted from the key image connected domain data and may include information about the location, number, and tension of the tendons. For example, it may be determined whether a tendon is present in the key image connected domain, and the central position, relative position, and average tension of the tendon may be calculated.
The component size feature vector may refer to that geometric features of the component can be extracted by analyzing size information of the connected domain of the key image. For example, indexes such as an area, a perimeter, an aspect ratio, and the like of the key image connected domain may be calculated.
The failure mode feature vector may mean that features of the failure mode can be extracted based on the key image connected domain data and the guidance of the first gating unit. This may include parameters or classification properties describing the failure mode. For example, the failure mode of the bridge may be determined to be bending, shearing, or the like based on information such as the shape of the key image connected domain and the crack distribution.
It should be noted that, in the process of extracting the semantic performance vector of each piece of component image block training data, a series of operations such as visual detection, non-visual detection, and deep learning feature extraction need to be performed, and the resulting features are interacted and converged before being output; for details, reference may be made to the description of the subsequent embodiments.
Step S130, performing non-semantic performance vector extraction based on a second gating unit on the key image connected domain data of the component image block training data according to the component performance estimation network, and generating the non-semantic performance vector of the component image block training data.
In this embodiment, the second gating unit is configured to select a feature inflow of the key image connected domain data in the non-semantic performance vector extraction, where the non-semantic performance vector includes at least one of a strain distribution feature vector, a component geometry feature vector, and a damaged region feature vector.
For example, the strain distribution feature vector may mean that it can be calculated by analyzing pixel value differences or gray-scale variations in the key image connected domain data. For example, gradients between pixels in the connected domain, edge information, or local binary patterns may be computed to capture texture and structural changes.
The component geometric feature vector may refer to that shape and size information of the component may be extracted according to key image connected domain data. For example, geometric features such as an area, a perimeter, a roundness, an aspect ratio, and the like of the connected domain may be calculated, and in combination with positional information of the connected domain, geometric features such as a relative position, a pitch, and the like of the members may be calculated. For example, the center position of the key image connected domain, the relative position to other connected domains, and the like may be determined.
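The geometric indexes listed above (area, perimeter, roundness, aspect ratio, centroid) can be sketched for a single connected domain given as a binary mask. The function name and the 4-neighbour perimeter convention are illustrative assumptions, not taken from the embodiment.

```python
import numpy as np

def geometry_features(mask):
    """Geometric features of one key image connected domain.
    Perimeter counts foreground pixels with at least one 4-neighbour
    outside the domain; roundness uses 4*pi*A/P^2; the centroid gives
    the center position used for relative-position features."""
    mask = np.asarray(mask, bool)
    rows, cols = np.nonzero(mask)
    area = rows.size
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(np.sum(mask & ~interior))
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    aspect = max(h, w) / min(h, w)
    roundness = 4 * np.pi * area / perimeter ** 2
    centroid = (rows.mean(), cols.mean())
    return area, perimeter, roundness, aspect, centroid
```

Note that the pixel-counted perimeter is only an approximation of the true boundary length, so the roundness value is indicative rather than exact.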
The damaged region feature vector may mean that features related to damaged regions, such as fractures, breaks, and failures, can be extracted by analyzing the damaged regions in the key image connected domain data. For example, indexes such as the area, shape, and distribution of the damaged regions can be calculated.
Step S140, aggregating, according to the component performance estimation network, the non-semantic performance vector of each component image block training data and the semantic performance vector of each component image block training data, and performing component performance estimation based on the corresponding aggregate vector on each component image block training data, so as to generate a component performance estimation index of each component image block training data.
In this embodiment, the non-semantic performance vector and the semantic performance vector of the segmented training data of each component image may be aggregated, and by combining these vectors together, performance estimation may be performed on the sample prestressed component, and a component performance estimation index of the segmented training data of the component image may be generated. A specific approach may involve weighted summation of different feature vectors, concatenation or use of specific aggregation functions, etc.
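The two aggregation approaches mentioned (weighted summation, concatenation) can be sketched as below; the function and parameter names are illustrative, and the weights are assumed rather than specified by the embodiment.

```python
import numpy as np

def aggregate_vectors(semantic_vec, non_semantic_vec,
                      mode="concat", weights=(0.5, 0.5)):
    """Aggregate the semantic and non-semantic performance vectors of
    one piece of component image block training data. 'concat' joins
    the two vectors end to end; 'weighted_sum' requires equal lengths
    and blends them with the given weights."""
    semantic_vec = np.asarray(semantic_vec, float)
    non_semantic_vec = np.asarray(non_semantic_vec, float)
    if mode == "concat":
        return np.concatenate([semantic_vec, non_semantic_vec])
    if mode == "weighted_sum":
        return weights[0] * semantic_vec + weights[1] * non_semantic_vec
    raise ValueError(f"unknown aggregation mode: {mode}")
```

The aggregated vector would then be fed to the estimation head of the component performance estimation network to produce the estimation index.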
Illustratively, the component performance estimation index of each of the component image patch training data may include, but is not limited to, a structural integrity index, a strength index, a deformation index, a durability index, a dynamic response index, a fatigue index, a temperature change index, and the like.
Step S150, calculating a component performance estimation error value according to the distinguishing information between the component performance marking index of each component image block training data and the component performance estimation index of each component image block training data, optimizing the component performance estimation network according to the component performance estimation error value, and executing a component performance estimation task based on the optimized component performance estimation network.
In this embodiment, the component performance estimation error value may be calculated by comparing the difference information between the component performance marking index (true value) and the component performance estimation index (predicted value) of the component image block training data. These component performance estimation error values may then be used to optimize the parameters or structure of the component performance estimation network to improve the accuracy of the performance estimation. The optimization method may include machine learning techniques such as gradient descent and back propagation.
For example, the calculation mode a of the component performance estimation error value may be:
Loss1 = sqrt((1/n) * Σ(estimated_value - actual_value)^2)
alternatively, the calculation mode B of the component performance estimation error value may be:
Loss2 = (1/n) * Σ|estimated_value - actual_value|
where n represents the number of samples, Σ represents the summation operation, estimated_value represents the component performance estimation index, and actual_value represents the component performance marking index.
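Calculation mode A is the root-mean-square error and calculation mode B is the mean absolute error. They can be implemented directly from the formulas above (function names are illustrative):

```python
import numpy as np

def loss_rmse(estimated, actual):
    """Calculation mode A: Loss1 = sqrt((1/n) * sum((e - a)^2))."""
    e, a = np.asarray(estimated, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((e - a) ** 2)))

def loss_mae(estimated, actual):
    """Calculation mode B: Loss2 = (1/n) * sum(|e - a|)."""
    e, a = np.asarray(estimated, float), np.asarray(actual, float)
    return float(np.mean(np.abs(e - a)))
```

Mode A penalizes large individual errors more heavily, while mode B is more robust to outlier samples, which may guide the choice between them during network optimization.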
Based on the above steps, in this embodiment, image blocking is performed on the component three-dimensional image training data of each sample prestressed component to generate a plurality of pieces of component image block training data, and key image connected domain data are extracted. Semantic performance vectors and non-semantic performance vectors of the component image block training data are then extracted by the component performance estimation network, the non-semantic performance vectors and the semantic performance vectors are converged, and performance estimation is performed on the component image block training data to obtain component performance estimation indexes. Component performance estimation error values are calculated according to the distinguishing information between the component performance marking indexes and the component performance estimation indexes, and prediction accuracy is improved by optimizing the component performance estimation network so as to conveniently execute component performance estimation tasks. In this way, after the key image connected domain data are extracted from the component three-dimensional image training data, the accuracy and efficiency of performance estimation can be improved by converging the semantic performance vectors and non-semantic performance vectors before performing performance estimation, which has important application value for the performance estimation and monitoring of prestressed components.
In an alternative embodiment, for step S110, extracting key image connected domain data of each component image segmentation training data may be implemented by the following exemplary substeps:
in a substep S111, target segmentation image data of the component three-dimensional image training data in each image segmentation dimension is acquired.
For example, the image segmentation dimension may include a defect type dimension, i.e., the component image is segmented into normal regions and different types of defect regions, such as cracks, deformations, and the like; a material property dimension, i.e., the component image is segmented into regions with different material properties, such as regions of non-uniform density or different composition; or a structural health dimension, i.e., the component image is segmented into parts in different structural health states, such as regions of reduced strength or regions of fatigue damage, thereby forming the target segmented image data of the component three-dimensional image training data in each image segmentation dimension.
And a substep S112, blending the target segmentation image data of the component three-dimensional image training data in a plurality of image segmentation dimensions, and generating key image connected domain data of the component three-dimensional image training data.
In this embodiment, the substep S112 may include:
1. one or more key stress regions of the three-dimensional image training data of the component are acquired.
For example, critical stress areas may refer to stress concentration areas or critical areas in the sample pre-stressed member that are important for performance detection or evaluation, such as where the sample pre-stressed member is typically subjected to maximum stress, and may be specific areas related to the performance of the member.
The critical stress areas may be defined according to specific applications and requirements. In some cases, some key areas may be predefined in advance as key stress areas. Such predefined accent areas may be determined based on information such as expertise, engineering experience, or historical data.
For example, in structural health monitoring of bridges, engineers may define areas such as bridge piers and bridge supports in advance as critical stress areas according to bridge design and structural characteristics. These areas may be subjected to higher stresses when a load is applied and are critical to the stability and safety of the bridge.
The advantage of predefining critical stress areas is that critical image connected domain data for these specific critical stress areas can be extracted and analyzed targeted, thereby facilitating a more accurate subsequent assessment of the performance of the sample pre-stressed member.
2. And aiming at each image segmentation dimension, carrying out convergence corresponding to the one or more key stress areas on target segmentation image data of the three-dimensional image training data of the component in the image segmentation dimension, and generating target segmentation image data of each key stress area.
For example, the target segmented image data in each key stress pixel unit region may be extracted from the target segmented image data of the component three-dimensional image training data in the image segmentation dimension. Then, for each key stress region, the target segmented image data of all key stress pixel unit regions belonging to that key stress region may be aggregated to generate the target segmented image data corresponding to the key stress region. The aggregation may, for example, be simple pixel-level merging, or a specific rule or algorithm may be applied to aggregate the image data.
3. And for each image segmentation dimension, carrying out regularization conversion on target segmentation image data of the image segmentation dimension in each key stress region, and generating regularized image features of the image segmentation dimension in each key stress region.
For example, salient image features in the target segmented image data of the plurality of key stress regions may be obtained. A salient image feature represents an image feature in the target segmented image data that differs from other co-attributed image features by more than a set difference; that is, pixels or regions with significant variation or salient properties in the target segmented image data may be measured according to their degree of difference from surrounding image features. For example, a set difference threshold may be determined, where the set difference threshold is used to decide whether the difference between a salient image feature in the target segmented image data and the other co-attributed image features is sufficiently large. This set difference threshold may be chosen according to the specific application and requirements, and reflects, to some extent, the degree of difference of interest.
On the basis, for each key stress region, taking the multiplication characteristic of the target segmentation image data of the key stress region and the saliency image characteristic as the regularized image characteristic of the key stress region. Thus, the influence of the salient image features on the key stress region can be emphasized and taken as a part of the regularized image features, so that the processed regularized image features can highlight pixels or regions with significant differences in the key stress region, and the method is beneficial to better capture and represent the performance and the features of the key stress region.
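The regularized image feature described above can be sketched as an element-wise product of the region data with a salience map. As an assumption for illustration, salience is computed here as deviation from the region mean exceeding the set difference threshold; the embodiment leaves the exact salience measure open.

```python
import numpy as np

def regularized_feature(region, diff_threshold):
    """Regularized image feature of one key stress region: flag pixels
    whose absolute difference from the region mean exceeds the set
    difference threshold (salience map), then multiply the region
    element-wise by that map so salient pixels are emphasised and
    non-salient ones are suppressed."""
    region = np.asarray(region, float)
    salience = (np.abs(region - region.mean()) > diff_threshold).astype(float)
    return region * salience
```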
4. And aiming at each key stress area, converging the regularized image features of the image segmentation dimensions in the key stress areas to generate converged image features of the three-dimensional image training data of the component in the key stress areas.
For example, extremum image feature extraction may be performed on regularized image features of the plurality of image segmentation dimensions in the key stress region, to generate extremum regularized image features. For example, for regularized image features for each critical stress region, extremum regularized image features are extracted from a plurality of image segmentation dimensions, which may include extremum-related features such as maxima, minima, peaks, valleys, and the like.
And then, carrying out mean value calculation on the regularized image features of the plurality of image segmentation dimensions in the key stress region to generate regularized image average features. For example, the method can be implemented by adding the feature values of each image segmentation dimension and dividing the feature values by the number of dimensions, and the calculated average feature of the regularized image represents the average feature of the key stress region in each dimension.
And finally, carrying out mean value calculation on the extremum regularized image features and the regularized image average features to generate the converged image features of the component three-dimensional image training data in the key stress area. For example, the extremum regularized image features and the regularized image average features are averaged, i.e., their values are added and divided by the number of features.
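The convergence steps above (extremum features, mean feature, then averaging the two) can be sketched as follows. This is one interpretation under stated assumptions: the extremum features are taken as per-position maxima and minima across segmentation dimensions, and the final step averages both extrema with the mean feature.

```python
import numpy as np

def converged_feature(regularized_feats):
    """Converge the regularized image features of one key stress region
    across several image segmentation dimensions (rows of the input):
    take per-position maxima and minima (extremum regularized image
    features), the per-position mean (regularized image average
    feature), and then average the three."""
    feats = np.asarray(regularized_feats, float)   # (n_dims, feat_len)
    extremum_max = feats.max(axis=0)
    extremum_min = feats.min(axis=0)
    average = feats.mean(axis=0)
    return (extremum_max + extremum_min + average) / 3.0
```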
5. And forming the key image connected domain data of the three-dimensional image training data of the component by the convergent image features of the key stress areas.
Sub-step S113 extracts key image connected domain data of each component image block training data from key image connected domain data of the component three-dimensional image training data.
For example, matching the positions, boundaries or other features of the component image blocks from the key image connected domain data of the component three-dimensional image training data may be implemented such that the extracted key image connected domain data corresponds to the component image block data one by one.
In an alternative embodiment, for step S120, the step of performing, on each component image block training data according to the component performance estimation network, semantic performance vector extraction based on corresponding key image connected domain data and the first gating unit, and generating a semantic performance vector of each component image block training data may be implemented by the following exemplary substeps:
and a substep S121, wherein the component performance estimation network is used for extracting semantic related feature vectors of the component image block training data according to the component image block training data, so as to generate a first visual detection feature and a first non-visual detection feature of the component image block training data.
For example, for column image blocking training data in a concrete bridge scene, the column image blocking training data is first processed through a component performance estimation network. For each pillar image patch training data, semantically related feature vectors are extracted, for example, visual features such as texture, color, etc. are extracted using convolutional neural networks, and non-visual features such as crack features, tendon features, etc. are extracted using specific algorithms or models, thus obtaining first visual detection features and first non-visual detection features for each image patch.
And step S122, when a first gating node of the first gating unit is activated, performing first visual detection on the component image block training data to generate a second visual detection feature, and performing first non-visual detection on the component image block training data to generate a second non-visual detection feature.
For the activated first gating node, visual target detection is performed using the first visual detection feature, for example, with an object detection algorithm such as Faster R-CNN or YOLO, to identify specific targets in the concrete bridge image patch, such as cracks, tendons, and the like. Meanwhile, for the component image blocking training data, non-visual target detection is performed using the first non-visual detection feature, for example, a classification or regression model, to identify different component features, such as component size and failure mode. This yields the second visual detection feature and second non-visual detection feature of each image patch.
And step S123, deep learning feature extraction is carried out on the key image connected domain data, and a first image deep learning feature is generated.
For key image connected domains (e.g., key structural parts) extracted from the concrete bridge image, deep learning features of the image connected domains are extracted using a deep learning technique such as convolutional neural network. These features may capture high-level semantic information of the image, such as structural morphology, texture, etc.
And step S124, performing region weight distribution on the first visual detection feature and the first image deep learning feature to generate a first region attention feature sequence, and performing region weight distribution on the first non-visual detection feature and the first image deep learning feature to generate a second region attention feature sequence.
For example, based on each image partition of the first visual detection feature and the first image deep learning feature, a region weight assignment method is used to assign corresponding importance weights to different regions to highlight important structural features. Also, for the first non-visual detection feature and the first image deep learning feature, a similar region weight assignment operation is performed to highlight the importance of the non-visual feature and the deep learning feature in the respective regions. Finally, a first region attention feature sequence and a second region attention feature sequence are obtained, and the sequences contain attention degree information of different regions.
For example, the region of interest may be determined based on task requirements and image characteristics, and candidate regions may be obtained based on techniques such as image segmentation, object detection, and the like. For each region, a respective importance weight is calculated from the first visual detection feature and the first image deep learning feature. The specific calculation method may be to measure the correlation degree between the feature and the target or the attention point by using similarity measures between feature vectors, such as cosine similarity, euclidean distance, and the like. Then, the calculated importance weights are normalized so that they are between 0 and 1 and the sum is 1. Common normalization methods include maximum normalization, softmax normalization, and the like. Thus, the normalized weights may be combined with the first visual detection feature and the first image deep learning feature to obtain a weighted regional attention feature, which may be combined by simple element-wise multiplication or weighted averaging, etc.
Finally, the obtained first region attention feature sequence highlights the region with higher importance, and can better capture the structural features of the target or the attention point.
Similarly, the region weights of the first non-visual detection feature and the first image deep learning feature are allocated, and the generation of the second region attention feature sequence may refer to the above operations, which are not described herein again.
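The region weight assignment described above (similarity scoring, normalization, weighted combination) can be sketched as follows. The cosine-similarity score and softmax normalization are the options named in the text; the function shape and names are illustrative.

```python
import numpy as np

def region_attention(region_feats, deep_feat):
    """Assign attention weights to candidate regions: score each
    region's feature vector by cosine similarity against the image
    deep-learning feature, softmax-normalise the scores so they lie in
    (0, 1) and sum to 1, then scale each region feature by its weight
    (simple element-wise combination)."""
    F = np.asarray(region_feats, float)            # (n_regions, d)
    g = np.asarray(deep_feat, float)               # (d,)
    sims = F @ g / (np.linalg.norm(F, axis=1) * np.linalg.norm(g) + 1e-12)
    w = np.exp(sims - sims.max())                  # stable softmax
    w /= w.sum()
    return w[:, None] * F, w                       # weighted features, weights
```

Running the same routine on the first non-visual detection feature and the first image deep learning feature would give the second region attention feature sequence.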
And a substep S125, performing association configuration on the first region attention feature sequence and the second region attention feature sequence to generate a first target attention feature sequence, and converging the first target attention feature sequence, the second non-visual detection feature and the second visual detection feature to generate an interaction feature of the component image blocking training data.
For example, by configuring the correlation between the first region attention feature sequence and the second region attention feature sequence, an overall target attention feature sequence can be obtained, and structural features with importance are highlighted. Then, the first target attention feature sequence, the second non-visual detection feature and the second visual detection feature are subjected to a converging operation, for example, through a feature fusion method (such as convolution, connection and the like), so as to generate interactive features of the component image blocking training data.
And a sub-step S126, performing association configuration on the first visual detection feature, the first non-visual detection feature and the interaction feature, and generating a target detection feature of the component image segmentation training data.
For example, the first visual detection feature, the first non-visual detection feature, and the interactive feature may be relatedly configured to obtain a comprehensive target detection feature sequence. This feature sequence integrates visual and non-visual information in the component image segmentation training data and highlights important features associated with the object detection task.
In a substep S127, a first self-encoding is performed on the target detection feature of the component image segmentation training data, and a first self-encoding vector is generated.
For example, the target detection feature of the component image segmentation training data is encoded and decoded using a self-encoder model to obtain a first self-encoded vector. The first self-encoding vector may extract important features in the component image segmentation training data and compress the data dimensions.
And a substep S128, performing peak feature downsampling on the first self-coded vector to generate peak feature downsampled data, and performing mean feature downsampling on the first self-coded vector to generate mean feature downsampled data.
For example, a peak feature downsampling operation is performed on the first self-coded vector, e.g., the most significant feature of the first self-coded vector is selected for retention, resulting in peak feature downsampled data. And meanwhile, carrying out average value feature downsampling operation on the first self-coding vector, for example, calculating an average value of the first self-coding vector to obtain average value feature downsampled data.
And a substep S129, adding the peak feature downsampling data and the mean feature downsampling data, and activating the added feature downsampling data to generate a semantic performance vector of the component image segmentation training data.
For example, the peak feature downsampled data is added to the mean feature downsampled data to obtain a set of feature downsampled data. Then, the feature downsampled data is subjected to an activation process, such as setting a negative value to zero using an activation function (e.g., reLU), to obtain a semantic performance vector of the component image segmentation training data. The semantic performance vector may represent semantic importance and performance characteristics of the component image, that is, at least one of the foregoing crack feature vector, tendon feature vector, component size feature vector, and failure mode feature vector.
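Substeps S128 and S129 can be sketched as window-wise downsampling over the first self-encoding vector. The window size and the trimming of any leftover elements are illustrative assumptions; the embodiment does not fix them.

```python
import numpy as np

def semantic_performance_vector(encoded, window=2):
    """Split the first self-encoding vector into windows, take the
    per-window maximum (peak feature downsampling) and per-window mean
    (mean feature downsampling), add the two, and apply a ReLU
    activation to obtain the semantic performance vector."""
    v = np.asarray(encoded, float)
    v = v[: len(v) - len(v) % window].reshape(-1, window)
    peak = v.max(axis=1)
    mean = v.mean(axis=1)
    return np.maximum(peak + mean, 0.0)            # ReLU sets negatives to zero
```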
In an alternative embodiment, in step S130, the second gating unit includes a second gating node and a third gating node. Performing non-semantic performance vector extraction based on a second gating unit on key image connected domain data of each component image block training data according to the component performance estimation network, and generating a non-semantic performance vector of each component image block training data, which may be implemented by the following exemplary substeps:
and a sub-step S131, according to the component performance estimation network, performing combined feature detection on the key image connected domain data of the component image blocking training data when the second gating node is in an activated state, so as to generate a first combined feature detection result.
For example, in the present embodiment, the strain distribution feature vector may be calculated by analyzing the pixel value differences or gray-scale changes in the key image connected domain data; the component geometric feature vector may be determined by extracting the area, perimeter, roundness, and aspect ratio features of the component image block training data together with the relative position information of the connected domain; and the damaged region feature vector may be extracted by analyzing the pixel value distribution or texture features of the damaged region, the texture features including the area, shape, texture statistical information, and boundary shape features of the damaged region.
And a substep S132, obtaining a gating parameter influence coefficient value of the third gating node, and weighting the gating parameter influence coefficient value and the first combined feature detection result to generate a non-semantic performance vector of the component image block training data.
For example, the gating parameter impact coefficient values of the third gating node may represent the importance of different features in the health assessment, and the first combined feature detection result may be weighted according to these impact coefficient values, thereby generating a non-semantic performance vector describing the performance of the bridge structure.
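The weighting in substep S132 can be sketched as an element-wise product, assuming one gating parameter influence coefficient per feature in the combined detection result (the function name and per-feature granularity are assumptions):

```python
import numpy as np

def non_semantic_performance_vector(combined_detection, gate_coeffs):
    """Weight the first combined feature detection result by the gating
    parameter influence coefficient values of the third gating node,
    one coefficient per feature, to obtain the non-semantic
    performance vector."""
    return np.asarray(gate_coeffs, float) * np.asarray(combined_detection, float)
```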
In an alternative implementation, during the application phase of the embodiment of the present application, the steps of the following embodiment may be included.
Step S160, acquiring three-dimensional image data of a target component, and performing image blocking on the three-dimensional image data of the target component to generate a plurality of pieces of target component image blocking data.
Step S170, when the target component image blocking data has key image connected domain data, forward transmitting each target component image blocking data and corresponding key image connected domain data in the optimized component performance estimation network, and generating a component performance estimation index of each target component image blocking data.
And the first gating unit and the second gating unit aiming at the key image connected domain data in the optimized component performance estimation network are in an activated state.
When the target component image blocking data does not have the key image connected domain data, forward transmitting each target component image blocking data in the component performance estimation network which is optimized, and generating a component performance estimation index of each target component image blocking data. And the first gating unit and the second gating unit aiming at the key image connected domain data in the optimized component performance estimation network are in an inactive state.
Fig. 2 illustrates a hardware structural view of an image processing-based pre-stress component performance data analysis system 100 for implementing the image processing-based pre-stress component performance data analysis method according to an embodiment of the present application, as shown in fig. 2, the image processing-based pre-stress component performance data analysis system 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a communication unit 140.
In an alternative embodiment, the image processing based pre-stressing member performance data analysis system 100 may be a single server or a group of servers. The server farm may be centralized or distributed (e.g., the image processing based pre-stress component performance data analysis system 100 may be a distributed system). In an alternative embodiment, the image processing based pre-stressed member performance data analysis system 100 may be local or remote. For example, the image processing based pre-stressing member performance data analysis system 100 can access information and/or data stored in the machine-readable storage medium 120 via a network. As another example, the image processing based pre-stress member performance data analysis system 100 may be directly connected to the machine readable storage medium 120 to access stored information and/or data. In an alternative embodiment, the image processing based pre-stressed member performance data analysis system 100 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, or the like, or any combination thereof.
The machine-readable storage medium 120 may store data and/or instructions. In an alternative embodiment, the machine-readable storage medium 120 may store data acquired from an external terminal. In an alternative embodiment, the machine-readable storage medium 120 may store data and/or instructions that are used by the image processing based pre-stressing member performance data analysis system 100 to perform or use the exemplary methods described herein. In alternative embodiments, machine-readable storage medium 120 may include mass storage, removable storage, volatile read-write memory, read-only memory, and the like, or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, tape, and the like.
In a specific implementation, the processor 110 executes the computer-executable instructions stored in the machine-readable storage medium 120, so that the processor 110 may perform the image processing-based prestressed component performance data analysis method according to the above method embodiment. The processor 110, the machine-readable storage medium 120, and the communication unit 140 are connected through the bus 130, and the processor 110 may be configured to control the transceiving actions of the communication unit 140.
The specific implementation process of the processors 110 may refer to the above embodiments of the method performed by the image processing based prestressed component performance data analysis system 100; the implementation principles and technical effects are similar and are not described herein again.
In addition, an embodiment of the application further provides a readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the image processing based prestressed component performance data analysis method described above is implemented.
Similarly, it should be noted that in order to simplify the description of the present disclosure and thereby aid in understanding one or more embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof.

Claims (10)

1. A method for analyzing performance data of a prestressed component based on image processing, the method comprising:
Performing image blocking on the component three-dimensional image training data of each sample prestressed component to generate a plurality of component image block training data, and extracting key image connected domain data of each of the component image block training data;
executing semantic performance vector extraction based on corresponding key image connected domain data and a first gating unit on each component image block training data according to a component performance estimation network to generate semantic performance vectors of each component image block training data, wherein the first gating unit is configured to select feature inflow of the key image connected domain data in the semantic performance vector extraction, and the semantic performance vectors comprise at least one of crack feature vectors, prestressed tendon feature vectors, component size feature vectors and destruction mode feature vectors;
performing non-semantic performance vector extraction based on a second gating unit on key image connected domain data of each component image block training data according to the component performance estimation network, and generating a non-semantic performance vector of each component image block training data, wherein the second gating unit is configured to select feature inflow of the key image connected domain data in the non-semantic performance vector extraction, and the non-semantic performance vector comprises at least one of a strain distribution feature vector, a component geometric feature vector and a damaged area feature vector;
Converging the non-semantic performance vector of each component image block training data and the semantic performance vector of each component image block training data according to the component performance estimation network, and performing component performance estimation based on the corresponding convergence vector on each component image block training data to generate a component performance estimation index of each component image block training data;
calculating a component performance estimation error value according to difference information between the component performance marking indexes of the component image block training data and the component performance estimation indexes of the component image block training data, optimizing the component performance estimation network according to the component performance estimation error value, and executing a component performance estimation task based on the optimized component performance estimation network.
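The final step of claim 1 only requires "difference information" between the marking (label) indexes and the estimation indexes; it does not fix an error function. A minimal sketch, assuming a mean-squared-error formulation (an assumption, not stated in the patent), might look like:

```python
import numpy as np

def performance_estimation_error(marking_indexes, estimation_indexes):
    """Hypothetical error value for claim 1: mean squared difference
    between the component performance marking indexes (labels) and the
    component performance estimation indexes. MSE is an assumption; the
    claim only requires difference information between the two."""
    marking = np.asarray(marking_indexes, dtype=float)
    estimation = np.asarray(estimation_indexes, dtype=float)
    return float(np.mean((marking - estimation) ** 2))
```

The resulting scalar would then drive a standard gradient-based optimization of the component performance estimation network.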
2. The method for analyzing performance data of a prestressed component based on image processing according to claim 1, wherein said extracting key image connected domain data of each of said component image block training data comprises:
acquiring target segmentation image data of the three-dimensional image training data of the component in each image segmentation dimension;
blending the target segmentation image data of the component three-dimensional image training data in a plurality of image segmentation dimensions to generate key image connected domain data of the component three-dimensional image training data;
And extracting the key image connected domain data of the component image block training data from the key image connected domain data of the component three-dimensional image training data.
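Claim 2 builds on connected-domain data extracted from segmented images. The patent does not specify a labelling algorithm; a minimal, library-free sketch of 4-connected domain labelling on a binary mask could be:

```python
from collections import deque
import numpy as np

def connected_domains(mask):
    """Label 4-connected foreground regions of a binary mask via
    breadth-first search. Returns the label array and region count.
    This is an illustrative stand-in for the connected-domain
    extraction the claim assumes, not the patented method itself."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already assigned to a domain
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current
```

In practice a library routine such as OpenCV's connected-components analysis would replace this loop.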
3. The method for analyzing performance data of a prestressed component based on image processing according to claim 2, wherein said blending the target segmentation image data of the component three-dimensional image training data in a plurality of the image segmentation dimensions to generate the key image connected domain data of the component three-dimensional image training data comprises:
acquiring one or more key stress areas of the three-dimensional image training data of the component;
for each image segmentation dimension, converging the target segmentation image data of the three-dimensional image training data of the component in the image segmentation dimension corresponding to the one or more key stress areas to generate target segmentation image data of each key stress area;
the following steps are performed for each of the image segmentation dimensions:
performing regularization conversion on target segmented image data of the image segmentation dimension in each key stress region to generate regularized image features of the image segmentation dimension in each key stress region;
The following steps are performed for each key stress region:
collecting regularized image features of the plurality of image segmentation dimensions in the key stress area to generate collected image features of the three-dimensional image training data of the component in the key stress area;
and forming the key image connected domain data of the three-dimensional image training data of the component by the convergent image features of the key stress areas.
4. A method of analyzing performance data of a prestressed component based on image processing according to claim 3, wherein said converging said target segmented image data of said component three-dimensional image training data in said image segmentation dimension corresponding to said one or more key stress regions, generating target segmented image data of each of said key stress regions, comprises:
extracting target segmented image data at each key stress pixel cell region from the component three-dimensional image training data in the target segmented image data of the image segmentation dimension;
and converging target segmentation image data of all the key stress pixel unit areas belonging to the key stress areas aiming at the key stress areas to generate target segmentation image data corresponding to the key stress areas.
5. The method for analyzing performance data of a prestressed component based on image processing according to claim 3, wherein said regularizing the target segmented image data of said image segmentation dimension in each of said key stress areas to generate regularized image features of said image segmentation dimension in each of said key stress areas, comprising:
obtaining salient image features in the target segmentation image data of a plurality of the key stress areas, wherein the salient image features represent image features whose differences from other image features of the same attribute in the target segmentation image data are larger than a set difference;
and regarding each key stress region, taking the multiplication characteristic of the target segmentation image data of the key stress region and the saliency image characteristic as the regularized image characteristic of the key stress region.
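Claim 5 can be sketched as follows. The selection rule for salient features is only loosely specified ("difference larger than a set difference"); the deviation-from-column-mean criterion below is one plausible reading, labelled as an assumption:

```python
import numpy as np

def salient_features(feature_matrix, set_difference):
    """Keep features whose absolute deviation from the per-column mean
    exceeds `set_difference`; zero out the rest. The deviation criterion
    is an assumption; the claim only requires the difference from other
    same-attribute features to exceed a set difference."""
    f = np.asarray(feature_matrix, dtype=float)
    deviation = np.abs(f - f.mean(axis=0))
    return np.where(deviation > set_difference, f, 0.0)

def regularized_feature(region_feature, salient_feature):
    # Claim 5: the multiplication feature of the key stress region's
    # target segmentation image data and the salient image feature.
    return (np.asarray(region_feature, dtype=float)
            * np.asarray(salient_feature, dtype=float))
```

The element-wise product acts as a soft mask, suppressing non-salient responses in each key stress region.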
6. The method for analyzing performance data of a prestressed component based on image processing according to claim 3, wherein said aggregating regularized image features of said plurality of image segmentation dimensions in said critical stress region to generate aggregate image features of said component three-dimensional image training data in said critical stress region comprises:
Extracting extremum image features of the regularized image features of the image segmentation dimensions in the key stress region to generate extremum regularized image features;
carrying out mean value calculation on regularized image features of the image segmentation dimensions in the key stress areas to generate regularized image average features;
and carrying out mean value calculation on the extremum regularized image features and the regularized image average features to generate the converged image features of the component three-dimensional image training data in the key stress region.
7. The method according to claim 1, wherein the performing, on each of the component image block training data according to the component performance estimation network, semantic performance vector extraction based on corresponding key image connected domain data and a first gating unit to generate a semantic performance vector of each of the component image block training data includes:
and carrying out the following steps on the block training data of each component image according to the component performance estimation network:
extracting semantic related feature vectors of the component image block training data to generate first visual detection features and first non-visual detection features of the component image block training data;
When a first gating node of the first gating unit is activated, performing first visual detection on the component image block training data to generate a second visual detection feature, and performing first non-visual detection on the component image block training data to generate a second non-visual detection feature;
deep learning feature extraction is carried out on the key image connected domain data, and a first image deep learning feature is generated;
performing region weight distribution on the first visual detection feature and the first image deep learning feature to generate a first region attention feature sequence, and performing region weight distribution on the first non-visual detection feature and the first image deep learning feature to generate a second region attention feature sequence;
performing association configuration on the first region attention feature sequence and the second region attention feature sequence to generate a first target attention feature sequence, and converging the first target attention feature sequence, the second non-visual detection feature and the second visual detection feature to generate interaction features of the component image block training data;
performing relevance configuration on the first visual detection feature, the first non-visual detection feature and the interaction feature to generate a target detection feature of the component image block training data;
Performing first self-coding on target detection characteristics of the component image block training data to generate a first self-coding vector;
performing peak feature downsampling on the first self-coded vector to generate peak feature downsampling data, and performing mean feature downsampling on the first self-coded vector to generate mean feature downsampling data;
and adding the peak characteristic downsampling data and the mean characteristic downsampling data, and activating the added characteristic downsampling data to generate the semantic performance vector of the component image block training data.
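The tail of claim 7 (peak-feature and mean-feature downsampling of the first self-coding vector, addition, then activation) can be sketched as below. The pooling window size and the choice of ReLU as the activation are assumptions; the claim names neither:

```python
import numpy as np

def semantic_performance_vector(self_coded_vector, window=2):
    """Claim 7 tail (sketch): peak-feature downsampling (max pooling) and
    mean-feature downsampling (average pooling) of the first self-coding
    vector, element-wise addition of the two, then an activation
    (ReLU assumed)."""
    v = np.asarray(self_coded_vector, dtype=float)
    n = (len(v) // window) * window        # drop any trailing remainder
    blocks = v[:n].reshape(-1, window)
    peak = blocks.max(axis=1)              # peak feature downsampling data
    mean = blocks.mean(axis=1)             # mean feature downsampling data
    return np.maximum(peak + mean, 0.0)    # addition followed by activation
```

Combining max and average pooling preserves both the strongest response and the overall level within each window before the activation.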
8. The method according to claim 1, wherein the second gating unit includes a second gating node and a third gating node, the performing, according to the component performance estimation network, non-semantic performance vector extraction based on the second gating unit on key image connected domain data of each of the component image block training data, generating a non-semantic performance vector of each of the component image block training data, includes:
the following steps are performed on the key image connected domain data of each of the component image block training data according to the component performance estimation network:
when the second gating node is in an activated state, performing combined feature detection on the key image connected domain data to generate a first combined feature detection result, wherein the strain distribution feature vector is calculated by analyzing pixel value differences or gray level changes in the key image connected domain data, the component geometric feature vector is determined by extracting area, perimeter, roundness and aspect ratio features of the component image block training data together with relative position information of the connected domains, and the damaged area feature vector is extracted by analyzing the pixel value distribution or texture features of a damaged area, wherein the texture features comprise the area, shape, texture statistical information and boundary shape features of the damaged area;
and obtaining a gating parameter influence coefficient value of the third gating node, and weighting the gating parameter influence coefficient value and the first combined feature detection result to generate a non-semantic performance vector of the component image block training data.
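The component geometric feature vector named in claim 8 (area, perimeter, roundness, aspect ratio of a connected domain) can be sketched for a binary mask; the boundary-pixel perimeter approximation is an illustrative choice, not a requirement of the claim:

```python
import math
import numpy as np

def geometric_feature_vector(mask):
    """Sketch of claim 8's component geometric feature vector for one
    connected domain: [area, perimeter, roundness, aspect ratio].
    Perimeter is approximated by counting boundary pixels (pixels with
    at least one background 4-neighbour); roundness is 4*pi*A/P^2."""
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # a foreground pixel is on the boundary unless all 4 neighbours are set
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1]
                        & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    roundness = 4.0 * math.pi * area / (perimeter ** 2) if perimeter else 0.0
    return [area, perimeter, roundness, width / height]
```

Note that the pixel-count perimeter underestimates the true contour length, so the roundness value is only comparable between regions measured the same way.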
9. The method for analyzing performance data of a prestressed component based on image processing according to claim 1, characterized in that the method further comprises:
acquiring three-dimensional image data of a target component, and performing image blocking on the three-dimensional image data of the target component to generate a plurality of target component image blocking data;
When the target component image blocking data has key image connected domain data, forward transmitting each target component image blocking data and corresponding key image connected domain data in a component performance estimation network which is optimized, and generating component performance estimation indexes of each target component image blocking data;
the first gating unit and the second gating unit for the key image connected domain data in the optimized component performance estimation network are in an activated state;
when the target component image blocking data does not have the key image connected domain data, forward transmitting each target component image blocking data in a component performance estimation network which is optimized, and generating a component performance estimation index of each target component image blocking data;
and the first gating unit and the second gating unit aiming at the key image connected domain data in the optimized component performance estimation network are in an inactive state.
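The inference branching of claim 9 (gating units activated only when key image connected domain data exists) reduces to a simple dispatch; `network` below is a hypothetical callable standing in for the optimized component performance estimation network:

```python
def estimate_performance(block_data, key_domain_data, network):
    """Claim 9 inference flow (sketch): the first and second gating units
    for key image connected domain data are set to an activated state only
    when such data exists for the target component image block data.
    `network` is a hypothetical callable taking a gating flag."""
    gating_units_active = key_domain_data is not None
    return network(block_data, key_domain_data, gating_units_active)
```

Each target component image block is thus forward-propagated through the same optimized network, with the connected-domain branches switched on or off per block.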
10. A system for analyzing performance data of a prestressed component based on image processing, characterized in that it comprises a processor and a machine-readable storage medium, wherein machine-executable instructions are stored in the machine-readable storage medium, and the machine-executable instructions are loaded and executed by the processor to implement the method for analyzing performance data of a prestressed component based on image processing according to any one of claims 1-9.
CN202311256873.4A 2023-09-27 2023-09-27 Prestress component performance data analysis method and system based on image processing Active CN117037143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311256873.4A CN117037143B (en) 2023-09-27 2023-09-27 Prestress component performance data analysis method and system based on image processing


Publications (2)

Publication Number Publication Date
CN117037143A true CN117037143A (en) 2023-11-10
CN117037143B CN117037143B (en) 2023-12-08

Family

ID=88639849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311256873.4A Active CN117037143B (en) 2023-09-27 2023-09-27 Prestress component performance data analysis method and system based on image processing

Country Status (1)

Country Link
CN (1) CN117037143B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840891A (en) * 2019-01-07 2019-06-04 重庆工程学院 A kind of intelligence strand tapered anchorage and prestressed monitoring method and detection system, terminal
WO2021261168A1 (en) * 2020-06-23 2021-12-30 オムロン株式会社 Inspection device, unit selection device, inspection method, and inspection program
CN114119973A (en) * 2021-11-05 2022-03-01 武汉中海庭数据技术有限公司 Spatial distance prediction method and system based on image semantic segmentation network


Also Published As

Publication number Publication date
CN117037143B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
Tong et al. Convolutional neural network for asphalt pavement surface texture analysis
Barkhordari et al. Structural damage identification using ensemble deep convolutional neural network models
Laxman et al. Automated crack detection and crack depth prediction for reinforced concrete structures using deep learning
CN113096088B (en) Concrete structure detection method based on deep learning
CN112905997B (en) Method, device and system for detecting poisoning attack facing deep learning model
CN114169374B (en) Cable-stayed bridge stay cable damage identification method and electronic equipment
CN115660262B (en) Engineering intelligent quality inspection method, system and medium based on database application
Mohammed et al. Exploring the detection accuracy of concrete cracks using various CNN models
Wu et al. Crack detecting by recursive attention U-Net
Devereux et al. A new approach for crack detection and sizing in nuclear reactor cores
Alfaz et al. Bridge crack detection using dense convolutional network (densenet)
CN117037143B (en) Prestress component performance data analysis method and system based on image processing
CN116680639A (en) Deep-learning-based anomaly detection method for sensor data of deep-sea submersible
CN116580176A (en) Vehicle-mounted CAN bus anomaly detection method based on lightweight network MobileViT
Żarski et al. KrakN: Transfer Learning framework for thin crack detection in infrastructure maintenance
CN115616408A (en) Battery thermal management data processing method and system
Pathak Bridge health monitoring using CNN
Wang et al. A novel concrete crack damage detection method via sparse correlation model
Ehtisham et al. Classification of defects in wooden structures using pre-trained models of convolutional neural network
Osei et al. A machine learning-based structural load estimation model for shear-critical RC beams and slabs using multifractal analysis
CN117058432B (en) Image duplicate checking method and device, electronic equipment and readable storage medium
CN113326509B (en) Method and device for detecting poisoning attack of deep learning model based on mutual information
Lin et al. House Inspection System Using Artificial Intelligence for Crack Identification
Harini et al. Microwave Imaging Based Damage Detection in columns Using Artificial Neural Network
MATONO et al. Bridge Point Cloud Completion Using Deep Learning Obtained in Actual Bridge Structures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant