CN111652231A - Casting defect semantic segmentation method based on feature adaptive selection - Google Patents
- Publication number
- CN111652231A CN111652231A CN202010473309.8A CN202010473309A CN111652231A CN 111652231 A CN111652231 A CN 111652231A CN 202010473309 A CN202010473309 A CN 202010473309A CN 111652231 A CN111652231 A CN 111652231A
- Authority
- CN
- China
- Prior art keywords
- feature
- semantic segmentation
- adaptive
- method based
- selection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a casting defect semantic segmentation method based on feature adaptive selection. Using the proposed adaptive depth feature fusion mechanism and adaptive receptive field selection module, the method addresses the small differences between casting defect images and their large variation in scale, so that the model completes defect classification, localization, and segmentation end to end, creating a prerequisite for realizing ADR. The invention rests on the assumption that features of different depths should contribute differently to semantic segmentation: the features are combined by a weighted average in which a higher weight represents a larger contribution to segmentation, and the weight of each depth is not predefined by hand but learned automatically through back-propagation, avoiding complex and inefficient hyper-parameter tuning. Through the adaptive receptive field selection module, the invention selects, in a data-driven manner, the optimal receptive field for each image, adapting to changes in defect scale.
Description
Technical Field
The invention belongs to the field of automatic identification and segmentation of casting defects, and particularly relates to a semantic segmentation method for casting defects based on feature adaptive selection.
Background
To ensure the safety of critical castings, they must undergo appropriate non-destructive inspection, such as radiographic inspection, to identify internal defects that cannot be detected visually. Typical internal casting defects include pores, slag inclusions, porosity, shrinkage cavities, cracks, and pinholes. The maturation of DR (digital radiography) inspection technology makes the implementation of ADR (automatic defect recognition) systems feasible. A complete ADR system aims at automatic identification, localization, and area statistics of defects in images; therefore, precise segmentation of the defects in the image, from which defect area information is obtained, is a necessary condition for a mature ADR system.
At present, automatic inspection of casting radiographic images mainly follows two approaches:
(1) Detection based on hand-crafted features and a sliding window
This method first trains a classifier (ANN, SVM, etc.) on hand-crafted features (HOG, LBP, etc.), then slides windows of different sizes over the original image and classifies each window, roughly locating the defects. Although simple in principle and easy to implement, it is slow, and the bias inherent in hand-crafted features limits its recognition capability.
(2) Object detection based on deep learning
This approach uses the strong feature extraction and pattern recognition capabilities of deep learning to complete defect classification and localization end to end, achieving better results than traditional methods; however, it can only produce the minimum bounding rectangle of a defect and cannot obtain the defect area.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a casting defect semantic segmentation method based on feature adaptive selection. Using the proposed adaptive depth feature fusion mechanism and adaptive receptive field selection module, it addresses the small differences between casting defect images and their large variation in scale, enables the model to complete defect classification, localization, and segmentation end to end, and creates a prerequisite for realizing ADR.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a casting defect semantic segmentation method based on feature adaptive selection specifically comprises the following steps:
(1) constructing a data set for casting semantic segmentation;
(2) extracting features by using a pre-trained feature extractor;
(3) building an adaptive depth feature fusion mechanism;
(4) building an adaptive receptive field selection module, which comprises multi-scale feature acquisition and adaptive scale selection;
(5) building a decoder and a loss function;
(6) training a semantic segmentation model;
(7) inference: after training, any original radiographic image is input into the trained semantic segmentation model, which outputs the corresponding segmentation map.
The step (1) of constructing the data set for casting semantic segmentation is specifically as follows: original radiographic images are collected with industrial DR inspection equipment, and pixel-level defect labeling is performed on each radiographic image, forming corresponding defect semantic segmentation label maps in which different gray values represent different defect types.
The feature extractor in step (2) is a ResNet, AlexNet, VGG, DenseNet, or Xception network. The invention uses a ResNet pre-trained on ImageNet as the feature extractor. Transferring the pre-trained model to the casting defect detection task yields better feature expression. Only the feature extraction part is used; the layers after the global pooling layer are discarded.
The adaptive depth feature fusion mechanism of step (3) has four branches connected to the feature extractor, each branch attached at a different depth of the pre-trained ResNet18. When a casting radiographic image is input into the pre-trained ResNet18, the features {F′1, ..., F′4} extracted at the different depths are obtained. For the first two branches, the mechanism uses a 3 × 3 convolution to down-sample the large feature maps; for the last two branches, it uses a 3 × 3 deconvolution to up-sample, so that the sizes of the multi-depth features are unified; the processed features are defined as {F1, ..., F4}. Finally, feature fusion across depths is realized by weighted pixel-by-pixel addition, where the weight parameters of each branch are learned automatically through back-propagation. The forward propagation function of the fusion process is:
Ffuse = w1·F1 + w2·F2 + w3·F3 + w4·F4,
wherein the processed features are defined as {F1, ..., F4} and {w1, ..., w4} are the learned branch weights.
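The weighted pixel-by-pixel fusion can be sketched in plain Python. This is an illustrative stand-in rather than the patented implementation: nested lists stand in for feature tensors, and the weights are fixed here, whereas in the invention they are learnable parameters updated by back-propagation.

```python
# Sketch: weighted pixel-wise fusion of four equal-size depth features.
def fuse(features, weights):
    """Return the pixel-wise weighted sum sum_i w_i * F_i."""
    h, w = len(features[0]), len(features[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fmap, wt in zip(features, weights):
        for y in range(h):
            for x in range(w):
                fused[y][x] += wt * fmap[y][x]
    return fused

# Four 2x2 "feature maps" from different depths, one weight per branch.
F = [[[1.0, 2.0], [3.0, 4.0]],
     [[2.0, 2.0], [2.0, 2.0]],
     [[0.0, 1.0], [1.0, 0.0]],
     [[4.0, 0.0], [0.0, 4.0]]]
w = [0.1, 0.2, 0.3, 0.4]
print([[round(v, 6) for v in row] for row in fuse(F, w)])  # [[2.1, 0.9], [1.0, 2.4]]
```

A higher w_i lets branch i dominate the fused map, which is exactly the "larger contribution to segmentation" the mechanism is meant to learn.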
The multi-scale feature acquisition in step (4) comprises a three-branch structure: the first branch consists of a 1 × 1 standard convolution and a 3 × 3 dilated convolution with dilation rate 1; the second branch consists of a 3 × 3 standard convolution and a 3 × 3 dilated convolution with dilation rate 3; the third branch consists of a 5 × 5 standard convolution and a 3 × 3 dilated convolution with dilation rate 5. The feature maps output by the three branches are S1, S2, and S3, respectively.
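As a quick sanity check on the branch design (a sketch assuming stride-1 convolutions throughout, which the text does not state explicitly), the effective receptive field of each branch follows from the rule that a k × k convolution with dilation d enlarges the receptive field by (k − 1) · d:

```python
# Effective receptive field of a stack of stride-1 convolutions.
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs, stride 1 throughout."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

branches = {
    "small":  [(1, 1), (3, 1)],  # 1x1 conv + 3x3 conv, dilation 1
    "medium": [(3, 1), (3, 3)],  # 3x3 conv + 3x3 conv, dilation 3
    "large":  [(5, 1), (3, 5)],  # 5x5 conv + 3x3 conv, dilation 5
}
for name, layers in branches.items():
    print(name, receptive_field(layers))  # small 3, medium 9, large 15
```

Under this assumption the three branches see 3 × 3, 9 × 9, and 15 × 15 neighborhoods, matching small, medium, and large defects.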
The adaptive scale selection in step (4) first uses global average pooling (GAP) to extract the global feature g of the input feature map I, then passes g through two fully connected layers fc1, fc2 and a sigmoid activation, obtaining a weight vector γ of size 1 × 1 × 3 with values between 0 and 1, one weight per branch. The position of the maximum value in the vector is set to 1 and the remaining positions to 0, converting γ into the one-hot code β. The process is formulated as β = argmax(γ), with γ = sigmoid(fc2(fc1(GAP(I)))).
In step (4), because the argmax function is non-differentiable and cannot participate in the back-propagation process, it is approximated with a low-temperature softmax:
β ≈ softmax(γ / ω),
wherein ω is the temperature coefficient. β is then used to weight and sum the feature maps S1, S2, S3 of the three branches obtained in the multi-scale acquisition stage, completing the selection of the optimal receptive field:
O = β1·S1 + β2·S2 + β3·S3.
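The effect of the temperature can be illustrated numerically; the γ values below are made up for demonstration:

```python
import math

# Differentiable argmax surrogate: softmax with a small temperature omega
# pushes the weight vector gamma toward a one-hot code.
def soft_argmax(gamma, omega):
    exps = [math.exp(g / omega) for g in gamma]
    total = sum(exps)
    return [e / total for e in exps]

gamma = [0.20, 0.70, 0.40]      # example sigmoid outputs for the three branches
print(soft_argmax(gamma, 1.0))   # high temperature: soft, roughly [0.26, 0.43, 0.32]
print(soft_argmax(gamma, 0.02))  # low temperature: essentially one-hot [0, 1, 0]
```

At ω = 1 all three branches still contribute, while at ω = 0.02 the middle branch receives virtually all the weight, so the weighted sum over {S1, S2, S3} reduces to picking a single receptive field while remaining differentiable.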
the step (5) is specifically as follows: after the feature maps obtained by the adaptive depth feature fusion mechanism and the adaptive receptive field selection module are spliced along the channel direction, a 3 x 3 convolution layer is used for adjusting the feature, and then the feature maps are restored to the original image size by using 3 x 3 deconvolution. The present invention uses pixel-level multi-class cross entropy as a loss function.
Step (6) is specifically as follows: after the model is built, it is trained on the semantic segmentation data set. Each time an image is input, a segmentation result is obtained through forward propagation of the network; the cross-entropy loss between the segmentation result and the semantic segmentation label map is computed pixel by pixel, and the parameters in each convolutional layer of the model are optimized through a back-propagation algorithm. These steps are repeated until the loss value no longer decreases, the model has converged, and the parameter values in the convolutional layers are fixed.
The invention has the beneficial effects that:
1. The invention provides an adaptive depth feature fusion mechanism. As is well known, the features extracted at different depths of a deep convolutional neural network differ: shallow features usually carry low-level information such as gray scale and edges, while deep features are usually abstract semantic features. The common practice is to simply concatenate features of different depths along the channel dimension or add them directly. The invention instead rests on the assumption that features of different depths should contribute differently to semantic segmentation, and combines them by a weighted average in which a higher weight represents a larger contribution to segmentation. The weight of each depth is not predefined by hand but learned automatically through back-propagation, avoiding complex and inefficient hyper-parameter tuning.
2. The invention provides an adaptive receptive field selection module. Because defect scales vary greatly, the common practice is to acquire multi-scale image features with convolution branches of different receptive fields. Through the adaptive receptive field selection module, the invention selects, in a data-driven manner, the optimal receptive field for each image, adapting to changes in defect scale.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic illustration of a portion of a data set of the present invention;
FIG. 2 is a schematic diagram of an adaptive receptive field selection module according to the present invention;
FIG. 3 is a diagram of the overall network architecture of the present invention;
FIG. 4 is a diagram illustrating a semantic segmentation result versus manual labeling according to the present invention;
wherein (a) the first row shows the original radiographic images, (b) the second row shows the manually labeled result maps, and (c) the third row shows the semantic segmentation result maps of the present invention.
Detailed Description
Example 1
The invention discloses a casting defect semantic segmentation method based on feature adaptive selection, which comprises the following steps of:
(1) Constructing a semantic segmentation data set for castings: based on industrial DR inspection equipment, a number of original radiographic images (600 in this embodiment) are collected, and pixel-level defect labeling is performed on each radiographic image to form the corresponding defect semantic segmentation label map, in which different gray values represent different defect types. See FIG. 1.
(2) Performing feature extraction with a pre-trained ResNet: the invention uses a ResNet pre-trained on ImageNet as the feature extractor; besides ResNet, networks such as AlexNet, VGG, DenseNet, and Xception can be selected. Transferring the pre-trained model to the casting defect detection task yields better feature expression. Only the feature extraction part is used; the layers after the global pooling layer are discarded.
(3) Building the adaptive depth feature fusion mechanism: the mechanism has four branches, each connected to a different depth of the pre-trained ResNet18. When a casting radiographic image is input into the pre-trained ResNet18, the features {F′1, ..., F′4} extracted at the different depths are obtained. For the first two branches, the mechanism uses a 3 × 3 convolution to down-sample the large feature maps; for the last two branches, it uses a 3 × 3 deconvolution to up-sample, so that the sizes of the multi-depth features are unified; the processed features are defined as {F1, ..., F4}. Finally, feature fusion across depths is realized by weighted pixel-by-pixel addition, the weight parameters of each branch being learned automatically through back-propagation. The forward propagation function of the fusion process is:
Ffuse = w1·F1 + w2·F2 + w3·F3 + w4·F4.
(4) Building the adaptive receptive field selection module, which comprises two parts: multi-scale feature acquisition and adaptive scale selection. The multi-scale feature acquisition is a three-branch structure: the first branch comprises a 1 × 1 standard convolution and a 3 × 3 dilated convolution with dilation rate 1; the second branch comprises a 3 × 3 standard convolution and a 3 × 3 dilated convolution with dilation rate 3; the third branch comprises a 5 × 5 standard convolution and a 3 × 3 dilated convolution with dilation rate 5. The adaptive scale selection first uses global average pooling (GAP) to extract the global feature g of the input feature map I, then passes g through two fully connected layers fc1, fc2 and a sigmoid activation, obtaining a weight vector γ of size 1 × 1 × 3 with values between 0 and 1, one weight per branch. The position of the maximum value in the vector is set to 1 and the remaining positions to 0, converting γ into the one-hot code β. The process is formulated as follows:
β = argmax(γ), γ = sigmoid(fc2(fc1(GAP(I)))) (2)
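The GAP → fc1 → fc2 → sigmoid → argmax pipeline of formula (2) can be walked through on toy data. The fully connected weights below are hand-picked hypothetical numbers; only the shapes and the order of operations follow the text:

```python
import math

def gap(feature_map):                      # global average pooling, one channel
    return sum(sum(row) for row in feature_map) / (len(feature_map) * len(feature_map[0]))

def linear(x, weights, bias):              # fully connected layer
    return [sum(wi * xi for wi, xi in zip(w_row, x)) + b
            for w_row, b in zip(weights, bias)]

def sigmoid(x):
    return [1.0 / (1.0 + math.exp(-v)) for v in x]

I = [[[0.0, 1.0], [1.0, 2.0]],             # 2-channel 2x2 input feature map
     [[2.0, 2.0], [2.0, 2.0]]]
g = [gap(ch) for ch in I]                  # global feature, one value per channel
h = linear(g, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.0])                               # fc1 (hypothetical)
gamma = sigmoid(linear(h, [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], [0.0, 0.0, 0.0]))  # fc2 -> 3 branches
beta = [1 if v == max(gamma) else 0 for v in gamma]                                # one-hot via argmax
print(gamma, beta)
```

With these toy weights the second branch wins, so β = [0, 1, 0] would route the medium receptive field forward.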
In practice, the invention approximates the argmax function with a low-temperature softmax, since argmax is non-differentiable and cannot participate in the back-propagation process:
β ≈ softmax(γ / ω)
wherein ω is the temperature coefficient; the smaller ω is, the closer β is to a one-hot code. Finally, β is used to weight and sum the feature maps {S1, S2, S3} of the three branches obtained in the multi-scale acquisition stage, completing the selection of the optimal receptive field. A schematic diagram of the adaptive receptive field module is shown in FIG. 2.
(5) Building the decoder and loss function: in general, a semantic segmentation model has an encode-then-decode structure, and the parts above can be regarded as the encoder. The feature maps obtained from the adaptive depth feature fusion mechanism and the adaptive receptive field selection module are concatenated along the channel direction, a 3 × 3 convolution layer adjusts the features, and a 3 × 3 deconvolution restores the feature maps to the original image size. The invention uses pixel-level multi-class cross entropy as the loss function. The overall model is shown in FIG. 3.
(6) Training the semantic segmentation model: after the model is built, it is trained on the semantic segmentation data set. Each time an image is input, a segmentation result is obtained through forward propagation of the network; the cross-entropy loss between the segmentation result and the semantic segmentation label map is computed pixel by pixel, and the parameters in each convolutional layer of the model are optimized through a back-propagation algorithm. These steps are repeated until the loss value no longer decreases, the model has converged, and the parameter values in the convolutional layers are fixed.
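The train-until-the-loss-stops-decreasing loop can be caricatured with a toy one-pixel example. All numbers are hypothetical, and a squared error stands in for the pixel-wise cross entropy; the point is only that the branch weights are driven by the loss gradient rather than hand-tuned:

```python
# Toy gradient descent on four branch weights for a single "pixel".
def train_weights(features, target, lr=0.01, steps=500):
    w = [0.25, 0.25, 0.25, 0.25]           # equal initial contributions
    for _ in range(steps):
        pred = sum(wi * fi for wi, fi in zip(w, features))
        grad = [2.0 * (pred - target) * fi for fi in features]   # dLoss/dw_i
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

features = [1.0, 2.0, 0.5, 4.0]            # one pixel from each depth branch
w = train_weights(features, target=3.0)
pred = sum(wi * fi for wi, fi in zip(w, features))
print(round(pred, 4))  # 3.0 once the loss has stopped decreasing
```

After convergence the fused prediction matches the target and further updates change nothing, which is the stopping criterion described in step (6).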
(7) Inference: after training, any original radiographic image is input into the trained semantic segmentation model, which outputs the corresponding segmentation map, as shown in FIG. 4.
Claims (9)
1. A casting defect semantic segmentation method based on feature adaptive selection, characterized in that the method specifically comprises the following steps:
(1) constructing a data set for casting semantic segmentation;
(2) extracting features by using a pre-trained feature extractor;
(3) building an adaptive depth feature fusion mechanism;
(4) building an adaptive receptive field selection module, which comprises multi-scale feature acquisition and adaptive scale selection;
(5) building a decoder and a loss function;
(6) training a semantic segmentation model;
(7) inference: after training, any original radiographic image is input into the trained semantic segmentation model, which outputs the corresponding segmentation map.
2. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, wherein the step (1) of constructing the data set for casting semantic segmentation is specifically: acquiring original radiographic images and performing pixel-level defect labeling on each radiographic image to form corresponding defect semantic segmentation label maps.
3. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that the feature extractor in step (2) is a ResNet, AlexNet, VGG, DenseNet, or Xception network.
4. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that the adaptive depth feature fusion mechanism of step (3) has four branches connected to the feature extractor, and feature fusion across depths is realized by weighted pixel-by-pixel addition; the weight parameters of each branch are learned automatically through back-propagation, and the forward propagation function of the fusion process is:
Ffuse = w1·F1 + w2·F2 + w3·F3 + w4·F4,
wherein the processed features are defined as {F1, ..., F4}.
5. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that the multi-scale feature acquisition in step (4) comprises a three-branch structure: the first branch consists of a 1 × 1 standard convolution and a 3 × 3 dilated convolution with dilation rate 1; the second branch consists of a 3 × 3 standard convolution and a 3 × 3 dilated convolution with dilation rate 3; the third branch consists of a 5 × 5 standard convolution and a 3 × 3 dilated convolution with dilation rate 5; the feature maps output by the three branches are S1, S2, and S3, respectively.
6. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that the adaptive scale selection in step (4) first uses global average pooling (GAP) to extract the global feature g of the input feature map I, then passes g through two fully connected layers fc1, fc2 and a sigmoid activation, obtaining a weight vector γ of size 1 × 1 × 3 with values between 0 and 1, one weight per branch; the position of the maximum value in the vector is set to 1 and the remaining positions to 0, converting γ into the one-hot code β; the process is formulated as β = argmax(γ), with γ = sigmoid(fc2(fc1(GAP(I)))).
7. The casting defect semantic segmentation method based on feature adaptive selection according to claim 1, characterized in that in step (4) a low-temperature softmax function is used to approximate the argmax function, the formula being:
β ≈ softmax(γ / ω),
wherein ω is the temperature coefficient; β is then used to weight and sum the feature maps S1, S2, S3 of the three branches obtained in the multi-scale acquisition stage, completing the selection of the optimal receptive field:
O = β1·S1 + β2·S2 + β3·S3.
8. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that step (5) is specifically: the feature maps obtained from the adaptive depth feature fusion mechanism and the adaptive receptive field selection module are concatenated along the channel direction, a 3 × 3 convolution layer adjusts the features, and a 3 × 3 deconvolution restores the feature maps to the original image size; pixel-level multi-class cross entropy is used as the loss function.
9. The casting defect semantic segmentation method based on feature adaptive selection as claimed in claim 1, characterized in that step (6) is specifically: training with the semantic segmentation data set; each time an image is input, a segmentation result is obtained through forward propagation of the network; the cross-entropy loss between the segmentation result and the semantic segmentation label map is computed pixel by pixel; the parameters in each convolutional layer of the model are optimized with a back-propagation algorithm; and these steps are repeated until the loss value no longer decreases, the model has converged, and the parameter values in the convolutional layers are fixed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010473309.8A CN111652231B (en) | 2020-05-29 | 2020-05-29 | Casting defect semantic segmentation method based on feature self-adaptive selection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111652231A true CN111652231A (en) | 2020-09-11 |
CN111652231B CN111652231B (en) | 2023-05-30 |
Family
ID=72348689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010473309.8A Active CN111652231B (en) | 2020-05-29 | 2020-05-29 | Casting defect semantic segmentation method based on feature self-adaptive selection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652231B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541908A (en) * | 2020-12-18 | 2021-03-23 | 广东工业大学 | Casting flash identification method based on machine vision and storage medium |
CN113034502A (en) * | 2021-05-26 | 2021-06-25 | 深圳市勘察研究院有限公司 | Drainage pipeline defect redundancy removing method |
CN113723281A (en) * | 2021-08-30 | 2021-11-30 | 重庆市地理信息和遥感应用中心 | High-resolution image classification method based on local adaptive scale ensemble learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109241972A (en) * | 2018-08-20 | 2019-01-18 | 电子科技大学 | Image, semantic dividing method based on deep learning |
CN110188817A (en) * | 2019-05-28 | 2019-08-30 | 厦门大学 | A kind of real-time high-performance street view image semantic segmentation method based on deep learning |
US20200020102A1 (en) * | 2017-04-14 | 2020-01-16 | Tusimple, Inc. | Method and device for semantic segmentation of image |
CN111104962A (en) * | 2019-11-05 | 2020-05-05 | 北京航空航天大学青岛研究院 | Semantic segmentation method and device for image, electronic equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
LI Xuan et al.: "Image Segmentation Algorithm Based on Convolutional Neural Networks", Journal of Shenyang Aerospace University *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541908A (en) * | 2020-12-18 | 2021-03-23 | 广东工业大学 | Casting flash identification method based on machine vision and storage medium |
CN112541908B (en) * | 2020-12-18 | 2023-08-29 | 广东工业大学 | Casting flash recognition method based on machine vision and storage medium |
CN113034502A (en) * | 2021-05-26 | 2021-06-25 | 深圳市勘察研究院有限公司 | Drainage pipeline defect redundancy removing method |
CN113034502B (en) * | 2021-05-26 | 2021-08-24 | 深圳市勘察研究院有限公司 | Drainage pipeline defect redundancy removing method |
CN113723281A (en) * | 2021-08-30 | 2021-11-30 | 重庆市地理信息和遥感应用中心 | High-resolution image classification method based on local adaptive scale ensemble learning |
CN113723281B (en) * | 2021-08-30 | 2022-07-22 | 重庆市地理信息和遥感应用中心 | High-resolution image classification method based on local adaptive scale ensemble learning |
Also Published As
Publication number | Publication date |
---|---|
CN111652231B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108986050B (en) | Image and video enhancement method based on multi-branch convolutional neural network | |
WO2022252272A1 (en) | Transfer learning-based method for improved vgg16 network pig identity recognition | |
CN113436169B (en) | Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation | |
CN111652231A (en) | Casting defect semantic segmentation method based on feature adaptive selection | |
CN111950453A (en) | Optional-shape text recognition method based on selective attention mechanism | |
CN110363770B (en) | Training method and device for edge-guided infrared semantic segmentation model | |
CN112287941B (en) | License plate recognition method based on automatic character region perception | |
CN112233129A (en) | Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device | |
CN114648806A (en) | Multi-mechanism self-adaptive fundus image segmentation method | |
CN112396042A (en) | Real-time updated target detection method and system, and computer-readable storage medium | |
CN116129291A (en) | Unmanned aerial vehicle animal husbandry-oriented image target recognition method and device | |
CN113052215A (en) | Sonar image automatic target identification method based on neural network visualization | |
CN116993975A (en) | Panoramic camera semantic segmentation method based on deep learning unsupervised field adaptation | |
CN111340772A (en) | Reinforced concrete bridge damage detection system and method based on mobile terminal | |
CN113139431B (en) | Image saliency target detection method based on deep supervised learning | |
CN111291663B (en) | Method for quickly segmenting video target object by using space-time information | |
CN112270661B (en) | Rocket telemetry video-based space environment monitoring method | |
CN113280820A (en) | Orchard visual navigation path extraction method and system based on neural network | |
CN111950476A (en) | Deep learning-based automatic river channel ship identification method in complex environment | |
CN111612803A (en) | Vehicle image semantic segmentation method based on image definition | |
CN111881924A (en) | Dim light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement | |
CN111046861B (en) | Method for identifying infrared image, method for constructing identification model and application | |
CN111950409B (en) | Intelligent identification method and system for road marking line | |
CN114926456A (en) | Rail foreign matter detection method based on semi-automatic labeling and improved deep learning | |
CN113901944A (en) | Marine organism target detection method based on improved YOLO algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 110000 No.17, Yunfeng South Street, Tiexi District, Shenyang City, Liaoning Province
Patentee after: Shenyang Foundry Research Institute Co., Ltd. of China National Machinery Research Institute Group
Address before: 110000 No.17, Yunfeng South Street, Tiexi District, Shenyang City, Liaoning Province
Patentee before: SHENYANG RESEARCH INSTITUTE OF FOUNDRY Co.,Ltd.