CN117576007A - Method, device, equipment and storage medium for evaluating negative film image - Google Patents

Method, device, equipment and storage medium for evaluating negative film image

Info

Publication number
CN117576007A
CN117576007A (application number CN202311470987.9A)
Authority
CN
China
Prior art keywords
negative
negative image
quality
defect
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311470987.9A
Other languages
Chinese (zh)
Inventor
林世昌
梁振均
罗杰
李天昊
刘顺
王金龙
杨杰
郭韵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Tianhe Zhongdian Power Engineering Technology Co ltd
Original Assignee
Suzhou Tianhe Zhongdian Power Engineering Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Tianhe Zhongdian Power Engineering Technology Co ltd filed Critical Suzhou Tianhe Zhongdian Power Engineering Technology Co ltd
Priority to CN202311470987.9A priority Critical patent/CN117576007A/en
Publication of CN117576007A publication Critical patent/CN117576007A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method, a device, equipment and a storage medium for evaluating a negative film image. The evaluation method includes: labeling negative images of a training set with negative image quality and weld defects to generate a negative image data set; optimizing network parameters of a quality assessment network structure based on the negative image data set to generate a trained quality assessment model; training a defect assessment network structure based on the negative image data set to generate a trained defect assessment model; and identifying negative images of a test set with the trained quality assessment model and the trained defect assessment model, then comparing the results with the detection standards to generate an assessment result. The assessment method of this embodiment integrates the intelligent analysis of defect identification and negative image quality discrimination: the models' recognition results for the negative image are combined with the negative image detection standards recorded in the system to obtain an intelligent analysis result, realizing intelligent assessment of radiographic negative images.

Description

Method, device, equipment and storage medium for evaluating negative film image
Technical Field
The present invention relates to the field of radiographic film image evaluation technologies, and in particular, to a film image evaluation method, device, apparatus, and storage medium.
Background
Radiographic testing, a principal nondestructive testing method, remains widely used in the construction, pre-service and in-service inspection of power-generating units, and is an important means of guaranteeing equipment quality and unit operation safety. It is estimated that the power industry produces more than ten million radiographic film images per year.
Traditional film image analysis, which covers the evaluation and judgment of both film image quality and defects, is performed manually by film evaluators in a viewing room with the aid of a film viewing lamp, magnifying glass, measuring ruler, record sheet and the like. Manual evaluation of film images is slow and time-consuming, and is ill-suited to film evaluation work with tight deadlines and huge data volumes.
Disclosure of Invention
The embodiments of the application provide a method, a device, equipment and a storage medium for evaluating a negative image, so as to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a method for evaluating a film image, including:
negative image quality and weld defect labeling are carried out on negative images of the training set, and a negative image data set is generated;
optimizing network parameters of a quality assessment network structure based on the negative image dataset to generate a trained quality assessment model;
Training a defect evaluation network structure based on the negative image data set to generate a trained defect evaluation model;
and identifying the negative film image of the test set based on the trained quality assessment model and the trained defect assessment model, and comparing the negative film image with a detection standard to generate an assessment result.
In one embodiment, performing quality judgment and defect identification on the negative images based on the trained quality assessment model and the trained defect assessment model and comparing with the detection standards to generate an assessment result comprises:
identifying the negative film image of the test set based on the trained quality assessment model to generate a quality assessment result;
identifying the negative film image of the test set based on the trained defect evaluation model to generate a defect evaluation result;
and respectively comparing the quality evaluation result and the defect evaluation result with corresponding detection standards to generate an evaluation result.
In one embodiment, performing negative image quality and weld defect labeling on a negative image of a training set, generating a negative image dataset includes:
acquiring an initial negative image;
performing image enhancement expansion on the initial negative image to generate a negative image of the training set;
and labeling the negative images of the training set with negative image quality and weld defects to generate a negative image data set.
In one embodiment, performing negative image quality and weld defect labeling on a negative image of a training set, generating a negative image dataset includes:
labeling the negative film images of the training set according to the weld defect type to generate an initial negative film image data set;
and labeling the initial negative image data set with negative image quality to generate a negative image data set.
In one embodiment, labeling the initial negative image data set with negative image quality to generate the negative image data set comprises:
labeling the initial negative image training set according to negative image quality type to generate a negative image data set, wherein the negative image quality types include blackness failure, abnormal image quality, abnormal filter plate use and abnormal backscatter protection.
In one embodiment, the quality assessment network architecture employs a YOLOV3 network architecture.
In one embodiment, the defect assessment network structure employs the YOLACT network architecture.
In a second aspect, an embodiment of the present application provides an apparatus for evaluating a film image, including:
The first generation module is used for carrying out negative image quality and weld defect labeling on negative images of the training set to generate a negative image data set;
the second generation module is used for optimizing network parameters of the quality evaluation network structure based on the negative image data set and generating a trained quality evaluation model;
the third generation module is used for training the defect evaluation network structure based on the negative image data set to generate a trained defect evaluation model;
and the fourth generation module is used for identifying the negative film image of the test set based on the trained quality assessment model and the trained defect assessment model and comparing the negative film image with the detection standard to generate an assessment result.
In a third aspect, an embodiment of the present application provides an electronic device, including: memory and a processor. Wherein the memory and the processor are in communication with each other via an internal connection, the memory is configured to store instructions, the processor is configured to execute the instructions stored by the memory, and when the processor executes the instructions stored by the memory, the processor is configured to perform the method of any one of the embodiments of the above aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the method in any one of the above embodiments is performed when the computer program is run on a computer.
The advantages or beneficial effects of the technical solution at least include:
according to the technical scheme, negative image quality and weld defect marking are carried out on negative images of a training set, a negative image data set is generated, a quality evaluation network structure and a defect evaluation network structure are respectively trained through the negative image data set, a corresponding trained quality evaluation model and a corresponding trained defect evaluation model are generated, and negative images of a test set are identified and compared with detection standards based on the trained quality evaluation model and the trained defect evaluation model to generate an evaluation result. The intelligent analysis of defect identification and negative image quality discrimination is integrated through the assessment method of the embodiment, the identification result of the model to the negative image and the negative image detection standard of the recorded system are used for obtaining the intelligent analysis result, and the intelligent assessment of the radiographic negative image is realized.
The foregoing summary is for the purpose of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will become apparent by reference to the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 is a flow chart of a method for evaluating a negative image according to an embodiment of the present application;
FIG. 2 is a block diagram of a method for evaluating a negative image according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of evaluating a negative image in accordance with another embodiment of the present application;
FIG. 4 shows a block diagram of a device for evaluating a film image according to an embodiment of the present invention;
fig. 5 shows a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
In the related art, the conventional film image analysis method evaluates film image quality and defects manually: a film evaluator works in a viewing room with film viewing lamps, magnifying glasses, measuring scales and record sheets. This has the following disadvantages: 1) it is time- and labor-consuming; manual evaluation is slow, and defect images in huge volumes are difficult to inspect; 2) the analysis result depends heavily on the evaluator's subjectivity, and even professionally trained analysts produce different results owing to differences in personal experience and proficiency; 3) high-intensity analysis easily causes mental fatigue, leading to missed and false detection of defects.
With the film image digitization of recent years, film evaluators can use advanced computer and image processing technology to raise image contrast and assist the evaluation and judgment of film image quality and defects. Compared with traditional film, this still fails to overcome 1) the time and labor cost of slow manual evaluation, and 2) mental fatigue, missed detection and false detection under high-intensity work; an effective, rapid and intelligent automatic film image evaluation system is therefore needed to assist manual film evaluation.
With the development of computer technology, and especially the rapid progress of artificial intelligence over the last decade, deep learning models such as YOLO and ResNet, trained on big data, approach or even exceed human-level accuracy in image classification, object detection and related fields, and far exceed human working efficiency, making them well suited to work with huge data volumes. Applying deep learning to automatically evaluate radiographic film images, with automatic identification and analysis of weld defects, is a hot spot and direction of radiographic testing technology. Existing methods, however, have low defect detection rates and accuracy on radiographic film images, recognize only a single kind of quality assessment marker (image quality indicator and filter plate), and lack an integrated model combining radiographic film defect detection with film quality assessment.
FIG. 1 illustrates a flow chart of a method of assessing a negative image in accordance with an embodiment of the present application. As shown in fig. 1 and 2, a method for evaluating a film image may include:
s110: negative image quality and weld defect labeling are carried out on negative images of the training set, and a negative image data set is generated;
s120: optimizing network parameters of a quality assessment network structure based on the negative image dataset to generate a trained quality assessment model;
S130: training a defect evaluation network structure based on the negative image data set to generate a trained defect evaluation model;
s140: and identifying the negative film images of the test set based on the trained quality assessment model and the trained defect assessment model and comparing the negative film images with detection standards to generate an assessment result.
In this embodiment, the method may be executed on a cloud server or on a PC host, for example on a configured film image evaluation system.
According to the technical solution of this embodiment, the negative images of the training set are labeled with negative image quality and weld defects to generate a negative image data set; the quality assessment network structure and the defect assessment network structure are each trained on this data set to generate a corresponding trained quality assessment model and trained defect assessment model; quality judgment and defect identification are then performed on the negative images of the test set with the two trained models, and the results are compared with the detection standards to generate an assessment result. The assessment method of this embodiment integrates the intelligent analysis of defect identification and negative image quality discrimination: the models' recognition results for the negative image are combined with the negative image detection standards recorded in the system to obtain an intelligent analysis result, realizing intelligent assessment of radiographic negative images.
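The overall flow of steps S110 to S140 can be sketched in Python as follows; the function names and data structures here are illustrative placeholders chosen for this sketch, not part of the patent:

```python
# Minimal sketch of the four-step evaluation pipeline (S110-S140).
# All names below are illustrative, not from the patent.

def build_dataset(training_negatives, quality_labels, defect_labels):
    """S110: pair each training negative with its quality and weld-defect labels."""
    return [
        {"image": img, "quality": q, "defects": d}
        for img, q, d in zip(training_negatives, quality_labels, defect_labels)
    ]

def evaluate(test_image, quality_model, defect_model, standards):
    """S140: run both trained models and compare against the recorded standards."""
    quality_result = quality_model(test_image)  # S120 output: detected quality markers
    defect_result = defect_model(test_image)    # S130 output: segmented weld defects
    return {
        "quality_ok": quality_result in standards["acceptable_quality"],
        "defect_ok": defect_result in standards["acceptable_defects"],
    }
```

Steps S120 and S130 (model training) are stood in for here by the two callables passed to `evaluate`; in the patent they are the YOLOV3 and YOLACT models described below in the description.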
In step S110, negative image quality and weld defect labeling are performed on the negative images of the training set, resulting in a negative image dataset.
In this embodiment, a batch of radiographic film images is first obtained for the training set and classified, according to the intelligent evaluation requirements for film images, into two categories: film quality problems and weld defects. The specific classification is given in Table 1:
TABLE 1 defect dataset detailed information
Defect labeling is performed on the training-set radiographic film images with data labeling software, and the defect information of each film image is recorded. Irregular polygon labeling is used so that the labeling frame introduces no extra information, allowing a model trained on these data to identify the defect region.
The information to be labeled in a film image falls into two categories: weld defect labels and film quality labels.
When labeling weld defects on the negative, the defects are first classified according to the detection standard into incomplete penetration, incomplete fusion, crack, slag inclusion, porosity, tungsten inclusion, undercut, poor forming and similar types;
defect labeling process:
1. Establish a labeling expert group, determine the defect classification, select a representative negative image for each defect type, label it, and use it as the example labeling standard;
2. Train the labelers: the expert group teaches with the selected examples, pointing out the characteristics of each defect type and the information to focus on during labeling;
3. Label and review: labelers mark the defects, while the expert group randomly spot-checks finished images during the labeling process and returns unqualified images for re-labeling;
4. Labeling statistics: after labeling, count the quantity and type distribution of the defects.
Marking weld defects: labeling according to the type of the defect.
The radiographic film images of the initial training set are labeled by defect type to obtain training-set film images annotated with defect types.
Weld defects may be labeled before the film quality problems, or the quality problems first and the weld defects afterwards.
Film quality problems are labeled, according to the representative problem, as abnormal filter plate use (filter plate hole), abnormal backscatter protection (a large "B" image) and image quality indicator labeling (text part and IQI wire part);
Filter plate hole labeling: label all filter plates appearing in the image, using rectangular boxes;
Backscatter protection (large "B" image): label the letter "B" appearing in the image, using a rectangular box;
Image quality indicator (IQI) labeling, in two parts: 1. IQI information labeling, i.e. which type of IQI it is; 2. identifiable wire-number labeling, i.e. the IQI wire number that can be recognized by the human eye in the image.
Negative image quality labeling process:
1. The expert group determines the labeling standard: find and label example images of each marker, explaining the content and details to focus on when labeling;
2. Labeling and review: negative images finished by the labelers enter a shared database, and expert-group members randomly spot-check and return mislabeled images;
3. IQI wire-number labeling: because wire-number labeling places higher demands on the labelers and is more difficult, this part must be fully reviewed.
In this way, negative image quality and weld defect labeling of the training-set negatives is completed, generating the negative image data set.
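As a concrete illustration of the labeling scheme above, one annotated negative could be represented as follows. The field names and coordinate values are assumptions made for this sketch; the defect and quality-marker categories follow the description:

```python
# Illustrative annotation record for one labeled negative image. The schema is
# an assumption; the category lists follow the patent's description.

WELD_DEFECTS = ["incomplete_penetration", "incomplete_fusion", "crack",
                "slag_inclusion", "porosity", "tungsten_inclusion",
                "undercut", "poor_forming"]

QUALITY_MARKERS = ["filter_plate_hole", "backscatter_B",
                   "iqi_info", "iqi_wire_number"]

annotation = {
    "image": "negative_0001.png",
    "defects": [
        # weld defects use irregular polygon labels, one per defect instance
        {"type": "porosity", "polygon": [(120, 80), (130, 78), (128, 92)]},
    ],
    "quality": [
        # quality markers use rectangular boxes (x, y, w, h)
        {"type": "backscatter_B", "box": (40, 40, 25, 30)},
        {"type": "iqi_wire_number", "box": (300, 60, 90, 20), "visible_wire": 13},
    ],
}
```

A full data set would be a list of such records, with the `visible_wire` field present only on IQI wire-number labels.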
In step S120, network parameters of the quality assessment network structure are optimized based on the negative image dataset, resulting in a trained quality assessment model.
In this embodiment, the quality assessment network structure adopts the YOLOV3 network structure, and constructing the marker discrimination model includes:
Model selection: the recognition model must rapidly identify the various markers in an image, so an object detection algorithm is used, namely a YOLOV3 model.
The YOLOV3 network structure can be viewed as three parts: Darknet-53, FPN and Yolo Head.
The YOLOV3 network comprises an image feature extraction backbone (Darknet-53), a feature pyramid network (FPN) that generates feature maps of different sizes, and a classifier and regressor (Yolo Head).
Darknet-53 is the backbone of YOLOV3 and extracts image features. The input image size is 416×416×3; the image then passes through convolution layers and residual modules (the residual_body part), and a corresponding feature map is produced after each residual module;
The FPN is a feature pyramid network that generates feature maps of different sizes. Its main function is to connect feature maps at different levels, so the network can use low-level fine features and high-level semantic features simultaneously for object detection;
Yolo Head is the classifier and regressor of YOLOV3. Darknet-53 and the FPN yield three enhanced feature layers, with shapes (52, 52, 128), (26, 26, 256) and (13, 13, 512) respectively.
The loss function is a weighted sum of the partial losses; in this model, the weights of the target detection loss and the bounding-box regression loss are increased.
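The weighted-sum loss just described can be written as a one-line sketch. The weight values below are illustrative only; the patent states merely that the detection and box-regression terms are up-weighted:

```python
# Sketch of a weighted-sum detection loss. Weights are illustrative
# placeholders, chosen so the objectness and box terms dominate.
def total_loss(obj_loss, box_loss, cls_loss, w_obj=2.0, w_box=2.0, w_cls=1.0):
    """Total loss as a weighted sum of the partial losses."""
    return w_obj * obj_loss + w_box * box_loss + w_cls * cls_loss
```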
Flow of images in model:
1. The image is resized to 416×416×3, the three numbers being its height, width and channel count;
2. The image passes through the Darknet53 network, and each convolution module yields a feature map; D3, D4 and D5 have sizes (52, 52, 128), (26, 26, 256) and (13, 13, 512) respectively, and serve as the inputs to the FPN;
3. The D5 feature map passes through one convolution layer to become F1; F1 is doubled in size with bilinear interpolation and added to the convolved D4 to obtain F2, and F3 is obtained in the same way;
4. The Yolo Head generates the corresponding category confidence, position offset and mask confidence for each of the inputs F1, F2 and F3;
5. Background boxes and duplicate boxes are removed using NMS and IoU;
6. The prediction boxes are obtained and visualized.
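The feature-map sizes in the flow above follow from the 416×416×3 input and downsampling strides of 8, 16 and 32, and the FPN step doubles spatial size when merging levels. A small sketch (helper names are illustrative) reproducing these shapes:

```python
# Sketch of the YOLOV3 feature-map shapes described above. The 416x416 input,
# the (52/26/13) spatial sizes and the (128/256/512) channel counts are from
# the text; the stride mapping and function names are assumptions of the sketch.

def darknet53_map_shapes(input_size=416):
    """D3/D4/D5 shapes for a square input, at strides 8, 16 and 32."""
    strides = {"D3": 8, "D4": 16, "D5": 32}
    channels = {"D3": 128, "D4": 256, "D5": 512}
    return {k: (input_size // s, input_size // s, channels[k])
            for k, s in strides.items()}

def fpn_upsample(shape):
    """Bilinear upsampling doubles height and width; channels unchanged."""
    h, w, c = shape
    return (2 * h, 2 * w, c)
```

For example, upsampling the D5-derived map (13, 13, 512) gives (26, 26, 512), matching the spatial size of D4 so the two can be added in the FPN merge step.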
The method realizes optimizing network parameters of the quality assessment network structure based on the negative image data set, and generates a trained quality assessment model.
S130: training a defect evaluation network structure based on the negative image data set to generate a trained defect evaluation model;
In this embodiment, the defect assessment network structure employs the YOLACT network architecture.
The defect identification model construction key points comprise:
a. model selection: setting a target to realize pixel-level identification of weld defects, so that an algorithm of image segmentation (instead of target detection) is needed, and comparing differences of various algorithms in accuracy and reaction time to determine application of a YOLACT network architecture;
yolact network consists of: extracting depth residual networks of feature maps, generating feature pyramid networks (Feature Pyramid Networks, FPN) of different feature maps, predicting heads, loss functions, etc.
Depth residual network for extracting feature map: the common depth residual network ResNet-101 is adopted to extract feature graphs, wherein the total number of convolution modules in the network is 5 from conv1, conv2 to conv5, and the sizes of the feature graphs are 138×138×64, 138×138×256, 69×69×512, 35×35×1024 and 18×18×2048 respectively;
generating a feature pyramid network (Feature Pyramid Networks, FPN) of different feature graphs, wherein the structure mainly has the function of establishing connection between feature graphs of different levels, so that the network can simultaneously utilize low-level fine features and high-level semantic features to perform target detection and segmentation;
A Prediction Head (Prediction Head) which is responsible for generating detection frames with different sizes on feature maps with different scales generated by the FPN, predicting the category and the segmentation mask of the targets, and effectively processing the targets with different sizes;
loss function: the Mask IoU loss is applied to train an instance segmentation task, the accuracy is measured by calculating IoU (cross-over ratio) between a predicted instance segmentation Mask and a real segmentation Mask, and the accuracy is used as a part of a loss function, so that the instance segmentation precision is improved;
other parts: in practical application, fast-NMS is used, and screening time is reduced on the premise of ensuring accuracy.
Flow of images in model:
1. The input annotated image is resized to 224×224 for the ResNet-101 network;
2. The image passes through the different convolution modules of the network, which transform its size; the feature map corresponding to conv5 has size 18×18×2048;
3. The conv5 feature map (C5) passes through one convolution layer to become P5 of the FPN; P5 is doubled in size by bilinear interpolation and added to the convolved C4 to obtain P4, and P3 is obtained in the same way;
4. P5 is convolved and downsampled to obtain P6, and P6 is convolved and downsampled in the same way to obtain P7, completing the FPN;
5. P3 is fed into Protonet, and P3-P7 are simultaneously fed into the Prediction Head;
6. The Prediction Head generates anchors on each feature map: 3 anchors per pixel, with aspect ratios 1:1, 1:2, and 2:1. The anchor base side lengths for the five feature maps are 24, 48, 96, 192, and 384, respectively. The base side length is adjusted according to the aspect ratio so that the anchor areas are equal;
7. The Prediction Head outputs the category confidence, position offset, and mask coefficients corresponding to the P3-P7 feature maps, which are accumulated, i.e. a = a3 + a4 + a5 + a6 + a7;
8. The NMS part is a screening algorithm that filters the ROIs obtained from the previous step's output;
9. The mask coefficients from the Prediction Head branch are matrix-multiplied with the prototype masks from the Protonet branch to obtain the mask of each target object in the image;
10. The Crop part of the network clears the mask outside the boundary: during training the boundary is the annotated bounding box, and in the evaluation stage it is the predicted bounding box.
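The prototype-coefficient combination of step 9 can be sketched as follows (a NumPy illustration under assumed shapes, not the patent's code):

```python
import numpy as np

def assemble_masks(prototypes, coefficients, threshold=0.5):
    """Combine prototype masks with per-instance mask coefficients.

    prototypes:   (H, W, k) prototype masks from the Protonet branch.
    coefficients: (n, k) mask coefficients from the Prediction Head,
                  one row per detected instance.
    Returns an (n, H, W) boolean mask per instance.
    """
    h, w, k = prototypes.shape
    # Linear combination: each instance mask is a weighted sum of prototypes.
    lin = prototypes.reshape(-1, k) @ coefficients.T      # (H*W, n)
    masks = 1.0 / (1.0 + np.exp(-lin))                    # sigmoid
    return masks.T.reshape(-1, h, w) > threshold
```

For n detections and k prototypes this is a single (H·W, k) × (k, n) matrix product, which is the efficiency point of the prototype approach.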
The method realizes training of the defect evaluation network structure based on the negative image data set, and generates a trained defect evaluation model.
In step S140, the negative images of the test set are identified and compared with the detection criteria based on the trained quality assessment model and the trained defect assessment model, generating an assessment result.
In this embodiment, the trained quality assessment model and the trained defect assessment model are determined as described above. The negative images of the test set are input into both models: the quality assessment model evaluates the quality of each negative image to produce a quality assessment result, and the defect assessment model evaluates its weld defects to produce a defect assessment result. Both results are then fed into the intelligent assessment system, where the quality assessment result is compared with the corresponding quality detection standard and the defect assessment result with the corresponding defect detection standard, and the two comparison results are integrated to obtain the intelligent analysis result. This realizes intelligent assessment of the radiographic film image, effectively revealing both its quality problems and its weld defects and determining its overall condition.
In one embodiment, identifying the negative image based on the trained quality assessment model and the trained defect assessment model and comparing to the detection criteria, generating the assessment result comprises:
Identifying the negative film image of the test set based on the trained quality assessment model to generate a quality assessment result;
identifying the negative film image of the test set based on the trained defect evaluation model to generate a defect evaluation result;
and respectively comparing the quality evaluation result and the defect evaluation result with corresponding detection standards to generate an evaluation result.
In this embodiment, the trained quality assessment model and the trained defect assessment model are determined as described above. The negative images of the test set are input into both models: the quality assessment model evaluates the quality of each negative image to produce a quality assessment result, and the defect assessment model evaluates its weld defects to produce a defect assessment result. Both results are fed into the intelligent assessment system, where the quality assessment result is compared with the corresponding quality detection standard and the defect assessment result with the corresponding defect detection standard to obtain the intelligent analysis result. This realizes intelligent assessment of the radiographic film image, effectively revealing both its quality problems and its weld defects and determining its overall condition.
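A hypothetical sketch of the standard-comparison step described above: model outputs are checked against the recorded detection standards and the two results are integrated into one verdict. All names and the pass/fail rule here are illustrative assumptions, not the patent's actual rules:

```python
def evaluate_film(quality_result, defect_result, quality_standard, defect_standard):
    """Compare model outputs against recorded detection standards.

    quality_result:   {item: bool}, True meaning the quality item is abnormal
                      (e.g. {"filter_plate": False, "backscatter_B": True}).
    defect_result:    list of detected weld defect type names.
    quality_standard: set of quality items that must not be abnormal.
    defect_standard:  set of defect types that are unacceptable.
    Returns (passed, reasons): the integrated assessment result.
    """
    reasons = []
    for item, abnormal in quality_result.items():
        if abnormal and item in quality_standard:
            reasons.append(f"quality: {item} abnormal")
    for defect in defect_result:
        if defect in defect_standard:
            reasons.append(f"defect: {defect} not allowed")
    return len(reasons) == 0, reasons
```

The integrated result both classifies the film (pass/fail) and preserves the individual comparisons, matching the description of determining the overall condition of the film.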
In one embodiment, performing negative image quality and weld defect labeling on a negative image of a training set, generating a negative image dataset includes:
acquiring an initial negative image;
performing image enhancement expansion on the initial negative image to generate a negative image of the training set;
and (3) labeling the negative image of the training set with negative image quality and weld defects to generate a negative image data set.
In this embodiment, the collected initial film images are sorted and analyzed, and the specific counts and distribution ratios of each sample class and each defect type are listed. Image enhancement is applied to the classes with smaller sample counts, increasing the proportion of images available for model training and the number of training negative samples, so as to meet the balanced training requirements of the quality evaluation network structure and the defect evaluation network structure.
The image enhancement methods used are:
1. Resizing the initial negative image: common methods such as nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, Lanczos interpolation, or bit-exact bilinear interpolation are used to change the length and width of the initial negative image, scaling it;
2. Adding noise to the initial negative image, including Gaussian noise, white Gaussian noise, Poisson noise, uniform noise, multiplicative noise, or exponential noise;
3. To improve the model's ability to handle atypical radiographic film images, the initial negative images are degraded by erosion and dilation.
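A minimal NumPy sketch of the three augmentation families above: scaling (nearest-neighbour interpolation), additive Gaussian noise, and degradation by erosion. This is an illustrative simplification, not the patent's implementation; in practice library routines (e.g. OpenCV) would be used:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    # Nearest-neighbour interpolation: map each output pixel to a source pixel.
    rows = np.arange(new_h) * img.shape[0] // new_h
    cols = np.arange(new_w) * img.shape[1] // new_w
    return img[rows[:, None], cols]

def add_gaussian_noise(img, sigma=10.0, seed=0):
    # Additive Gaussian noise, clipped back to the 8-bit pixel range.
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def erode3x3(img):
    # Grayscale erosion with a 3x3 structuring element (edges left unchanged).
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].min()
    return out
```

Each function expands the training set with a transformed copy of an under-represented image while keeping its annotation geometry recoverable.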
By the method, the initial film image can be tidied and analyzed to meet the balanced training requirements of the quality evaluation network structure and the defect evaluation network structure, so that a well-trained quality evaluation model and a well-trained defect evaluation model can be obtained more accurately.
In one embodiment, performing negative image quality and weld defect labeling on a negative image of a training set, generating a negative image dataset includes:
labeling the negative film images of the training set according to the weld defect type to generate an initial negative film image data set;
and labeling the original negative image data set with negative image quality to generate a negative image data set.
In this embodiment, the radiographic film images for the training set are defect-labeled using data annotation software, and the defect information of each film image is recorded. Irregular-polygon annotation is used so that the labeling region introduces no extraneous information, allowing a model trained on these data to identify the defect region accurately.
The information to be labeled in a film image falls into two types: weld defect labels and film quality labels.
When labeling the weld defects in the negative, the defects are first classified according to the detection standards into types such as incomplete penetration, incomplete fusion, cracks, slag inclusion, porosity, tungsten inclusion, undercut, and poor forming;
defect labeling process:
1. Establish a labeling expert group, determine the defect classification, and for each defect type select and label a representative negative image to serve as an example labeling standard;
2. Train the labeling personnel: the expert group explains the selected examples, pointing out the characteristics of each defect type and the information that requires attention during labeling;
3. Label and review: labeling personnel annotate the defects; during the process the expert group randomly spot-checks completed images, and unqualified images are returned for re-labeling;
4. Labeling result statistics: after labeling, count the number and distribution of each defect type.
Marking weld defects: labeling according to the type of the defect.
The radiographic film images of the initial training set are labeled according to defect type, yielding a training set of film images annotated with defect types.
In one embodiment, labeling the original negative image dataset for negative image quality, generating the negative image dataset comprises:
The initial negative image training set is labeled according to negative image quality type to generate the negative image data set, where the quality types include unqualified blackness (density), abnormal image quality, abnormal filter plate use, and abnormal back-scatter protection.
The weld defects may be labeled first and the film quality problems second, or vice versa.
For film quality labeling, the labels are divided according to the representative problem into abnormal filter plate use (filter plate hole), abnormal back-scatter protection (large "B" image), and image quality indicator (IQI) labeling (the text portion and the IQI wire portion);
Filter plate hole labeling: all filter plates in the image are labeled, using rectangular boxes;
Back-scatter protection (large "B" image): any letter "B" appearing in the image is labeled, using rectangular boxes;
IQI labeling, which has two parts: 1. IQI information labeling, i.e. recording which type the IQI belongs to; 2. identifiable wire-number labeling, i.e. the IQI wire number that can be distinguished by the human eye in the image.
Negative image quality labeling process:
1. The expert group determines the labeling standard: example images of each marker type are found and labeled, with explanations of the content and details that labeling must focus on;
2. Labeling and review: negative images completed by labeling personnel enter a shared database, and expert group members randomly spot-check them, returning images with labeling errors;
3. IQI wire-number labeling: because wire-number labeling places higher demands on the labeling personnel and is more difficult, this part is fully reviewed.
The method can be used for labeling the negative image quality and weld defects of the negative image of the training set, so as to generate a negative image data set.
Radiographic film image training set database: annotation data for 100,000 negatives are constructed; the annotation information is divided by type into weld defect information and film quality information, providing a data foundation for industry model training.
In one embodiment, the quality assessment network architecture employs a YOLOV3 network architecture.
In this embodiment, the quality evaluation network structure adopts the YOLOV3 network structure, and construction of the marker discrimination model includes:
Model selection: the marker identification model must rapidly identify the various markers in the image, so a target detection algorithm is used and the YOLOV3 model is adopted.
The network structure of the YOLOV3 model can be viewed as three parts: Darknet53, FPN, and Yolo Head.
The YOLOV3 network includes an image feature extraction structure (Darknet53), a Feature Pyramid Network (FPN) that generates feature maps of different sizes, and a classifier and regressor (Yolo Head).
Darknet53 is the backbone network of YOLOV3 and extracts image features. The input image size is 416×416×3, and the image then passes through convolution layers and residual modules (the residual_body parts), with a corresponding feature map generated after each residual module;
The FPN generates feature maps of different sizes; its main function is to establish connections between feature maps at different levels, so that the network can simultaneously use low-level fine-grained features and high-level semantic features for target detection;
Yolo Head is the classifier and regressor of YOLOV3. Three enhanced feature layers are obtained from Darknet53 and the FPN, with shapes (52, 52, 128), (26, 26, 256), and (13, 13, 512), respectively.
Loss function: the total loss is a weighted sum of the component losses; in this model, the weights of the target detection loss and the target-box regression loss are increased.
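As a minimal illustration of the weighted-sum loss described above (the component names and weight values are illustrative assumptions, not taken from the patent):

```python
def total_loss(losses, weights):
    """Weighted sum of component losses.

    losses:  {name: value} for each loss component.
    weights: {name: weight}; raising the detection and box-regression
             weights increases their share of the total, as described.
    """
    return sum(weights[name] * value for name, value in losses.items())

# Illustrative weighting that favours detection and box regression.
WEIGHTS = {"objectness": 2.0, "box": 2.0, "classification": 1.0}
```

Raising a weight steers gradient descent toward that component, which is why the model increases the detection and box-regression proportions.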
Flow of images in model:
1. The image is resized to 416×416×3, the three values being its length, width, and number of channels;
2. The image is processed by the Darknet53 network, and each convolution module produces a corresponding feature map; D3, D4, and D5 have sizes (52, 52, 128), (26, 26, 256), and (13, 13, 512), respectively, and serve as inputs to the FPN;
3. The D5 feature map passes through one convolution layer to become F1; F1 is doubled in size by bilinear interpolation and added to the convolved D4 to obtain F2, and F3 is obtained in the same way;
4. The Yolo Head generates the corresponding objectness confidence, position offsets, and class probabilities for the inputs F1, F2, and F3;
5. Background boxes and duplicate boxes are removed using NMS and IoU;
6. The prediction boxes are obtained and visualized.
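The background/duplicate-box removal of step 5 can be sketched as a plain greedy NMS over IoU. This is a simplified stand-in, not the actual implementation; function names are illustrative:

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection over union.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: keep the best-scoring box, drop overlapping duplicates.
    order = np.argsort(np.asarray(scores))[::-1]
    keep = []
    while len(order):
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) <= iou_threshold])
    return keep
```

Low-score background boxes are typically dropped by a confidence threshold before NMS; NMS then removes duplicate detections of the same marker.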
The method realizes optimizing network parameters of the quality assessment network structure based on the negative image data set, and generates a trained quality assessment model.
The quality evaluation network structure and the trained quality evaluation model constitute a negative quality analysis model built on YOLOV3, specifically covering the image quality indicator, the filter plate, back-scatter markers, and other content in the negative; no comparably comprehensive negative quality judgment model currently exists in the negative identification field.
In one embodiment, the defect review network architecture employs the YOLACT network architecture.
In this embodiment, the defect review network architecture employs the YOLACT network architecture.
The defect identification model construction key points comprise:
a. Model selection: the target is pixel-level identification of weld defects, so an image segmentation algorithm (rather than target detection) is required; comparing the accuracy and response time of candidate algorithms led to the adoption of the YOLACT network architecture;
The YOLACT network consists of: a deep residual network for extracting feature maps, a Feature Pyramid Network (FPN) generating different feature maps, a Prediction Head, a loss function, etc.
Deep residual network for extracting feature maps: the common deep residual network ResNet-101 is used; the network contains 5 convolution modules, conv1 through conv5, whose feature map sizes are 138×138×64, 138×138×256, 69×69×512, 35×35×1024, and 18×18×2048, respectively;
a Feature Pyramid Network (FPN) that generates feature maps of different sizes; its main function is to establish connections between feature maps at different levels, so that the network can simultaneously use low-level fine-grained features and high-level semantic features for target detection and segmentation;
a Prediction Head, which generates detection boxes of different sizes on the feature maps of different scales produced by the FPN and predicts the category and segmentation mask of each target, allowing targets of different sizes to be handled effectively;
loss function: Mask IoU loss is applied to train the instance segmentation task; accuracy is measured by computing the IoU (intersection over union) between the predicted instance segmentation mask and the ground-truth mask, and this term is used as part of the loss function to improve instance segmentation precision;
other parts: in practical applications, Fast NMS is used, reducing screening time while maintaining accuracy.
Flow of images in model:
1. The input annotated image is resized to 224×224 for the ResNet-101 network;
2. The image passes through the different convolution modules of the network, which transform its size; the feature map corresponding to conv5 has size 18×18×2048;
3. The conv5 feature map (C5) passes through one convolution layer to become P5 of the FPN; P5 is doubled in size by bilinear interpolation and added to the convolved C4 to obtain P4, and P3 is obtained in the same way;
4. P5 is convolved and downsampled to obtain P6, and P6 is convolved and downsampled in the same way to obtain P7, completing the FPN;
5. P3 is fed into Protonet, and P3-P7 are simultaneously fed into the Prediction Head;
6. The Prediction Head generates anchors on each feature map: 3 anchors per pixel, with aspect ratios 1:1, 1:2, and 2:1. The anchor base side lengths for the five feature maps are 24, 48, 96, 192, and 384, respectively. The base side length is adjusted according to the aspect ratio so that the anchor areas are equal;
7. The Prediction Head outputs the category confidence, position offset, and mask coefficients corresponding to the P3-P7 feature maps, which are accumulated, i.e. a = a3 + a4 + a5 + a6 + a7;
8. The NMS part is a screening algorithm that filters the ROIs obtained from the previous step's output;
9. The mask coefficients from the Prediction Head branch are matrix-multiplied with the prototype masks from the Protonet branch to obtain the mask of each target object in the image;
10. The Crop part of the network clears the mask outside the boundary: during training the boundary is the annotated bounding box, and in the evaluation stage it is the predicted bounding box.
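Two details of the flow above can be sketched (illustrative NumPy code, not from the patent): the equal-area anchor generation of step 6 — scaling width by √r and height by 1/√r keeps the area at base² for every aspect ratio r — and the boundary crop of step 10:

```python
import math
import numpy as np

def anchors_for_pixel(base, ratios=(1.0, 0.5, 2.0)):
    """Step 6: (w, h) anchors of equal area base**2, one per aspect ratio.

    Ratios are w:h, so 1:1 -> 1.0, 1:2 -> 0.5, 2:1 -> 2.0.
    """
    return [(base * math.sqrt(r), base / math.sqrt(r)) for r in ratios]

# One base side length per FPN level P3-P7, as in the description.
BASE_SIDES = (24, 48, 96, 192, 384)

def crop_mask(mask, box):
    """Step 10: zero the mask outside a bounding box (x1, y1, x2, y2).

    During training the box is the annotated box; in the evaluation
    stage it is the predicted box.
    """
    x1, y1, x2, y2 = box
    out = np.zeros_like(mask)
    out[y1:y2, x1:x2] = mask[y1:y2, x1:x2]
    return out
```

For example, `anchors_for_pixel(24)` yields a square, a tall, and a wide anchor, all of area 576; the crop then discards any mask response leaking outside the target's box.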
The method realizes training of the defect evaluation network structure based on the negative image data set, and generates a trained defect evaluation model.
Defect identification model: the YOLACT structure is adopted, realizing instance segmentation of defects with high defect identification accuracy.
FIG. 4 shows a block diagram of a device for evaluating a film image according to an embodiment of the present invention. As shown in fig. 4, in a second aspect, an embodiment of the present application provides a device for evaluating a film image, including:
a first generating module 401, configured to perform negative image quality and weld defect labeling on a negative image of the training set, and generate a negative image dataset;
a second generation module 402, configured to optimize network parameters of the quality assessment network structure based on the film image dataset, and generate a trained quality assessment model;
a third generating module 403, configured to train the defect evaluation network structure based on the negative image dataset, and generate a trained defect evaluation model;
and a fourth generating module 404, configured to identify the negative image of the test set based on the trained quality assessment model and the trained defect assessment model, and compare the negative image with the detection standard to generate an assessment result.
According to the technical scheme of this embodiment, negative image quality and weld defect labeling are performed on the negative images of the training set to generate a negative image data set; the quality evaluation network structure and the defect evaluation network structure are each trained on this data set to generate a trained quality evaluation model and a trained defect evaluation model; and the negative images of the test set are identified by these trained models and compared with the detection standards to generate an evaluation result. The assessment method of this embodiment integrates defect identification and negative image quality discrimination into one intelligent analysis: the models' identification results for the negative image are combined with the negative detection standards recorded in the system to obtain the intelligent analysis result, realizing intelligent assessment of radiographic film images.
In one embodiment, the fourth generation module 404 includes:
the first generation unit is used for identifying the film images of the test set based on the trained quality assessment model and generating a quality assessment result;
the second generation unit is used for identifying the negative film images of the test set based on the trained defect evaluation model and generating a defect evaluation result;
and the third generation unit is used for comparing the quality evaluation result and the defect evaluation result with corresponding detection standards respectively to generate an evaluation result.
In one embodiment, the first generation module 401 includes:
a first acquisition unit configured to acquire an initial negative image;
the fourth generating unit is used for carrying out image enhancement expansion on the initial negative image to generate a negative image of the training set;
and the substrate generating unit is used for carrying out negative image quality and weld defect labeling on the negative images of the training set to generate a negative image data set.
In one embodiment, the substrate generating unit comprises:
the first generating subunit is used for marking the negative film image of the training set according to the weld defect type and generating an initial negative film image data set;
and the second generation subunit is used for carrying out negative image quality labeling on the initial negative image data set to generate a negative image data set.
In one embodiment, the second generation subunit is further configured to:
and labeling the initial negative image training set according to the negative image quality type, so as to generate a negative image data set, wherein the negative image quality type comprises blackness failure, abnormal image quality, abnormal use of the filter plate and abnormal back scattering protection.
In one embodiment, the quality assessment network architecture employs a YOLOV3 network architecture.
In one embodiment, the defect review network architecture employs the YOLACT network architecture.
The functions of each module in each device of the embodiments of the present invention may be referred to the corresponding descriptions in the above methods, and are not described herein again.
Fig. 5 shows a block diagram of an electronic device according to an embodiment of the invention. As shown in fig. 5, the electronic device includes: memory 910 and processor 920, memory 910 stores a computer program executable on processor 920. The processor 920, when executing the computer program, implements the method for evaluating a film image in the above embodiment. The number of memories 910 and processors 920 may be one or more.
The electronic device/terminal/server further includes:
and the communication interface 930 is used for communicating with external equipment and carrying out data interaction transmission.
If the memory 910, the processor 920, and the communication interface 930 are implemented independently, they may be connected to each other and communicate with each other through buses. The bus may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, or an extended industry standard architecture (EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on a chip, the memory 910, the processor 920, and the communication interface 930 may communicate with each other through internal interfaces.
The embodiment of the invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements a method provided in the embodiment of the application.
The embodiment of the application also provides a chip, which comprises a processor and is used for calling the instructions stored in the memory from the memory and running the instructions stored in the memory, so that the communication device provided with the chip executes the method provided by the embodiment of the application.
The embodiment of the application also provides a chip, which comprises: the input interface, the output interface, the processor and the memory are connected through an internal connection path, the processor is used for executing codes in the memory, and when the codes are executed, the processor is used for executing the method provided by the application embodiment.
It should be appreciated that the processor may be a central processing unit (CPU), but may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or any conventional processor. Note that the processor may be a processor supporting the advanced RISC machines (ARM) architecture.
Further, optionally, the memory may include read-only memory and random access memory, and may further include nonvolatile random access memory. The memory may be volatile or nonvolatile, or may include both. Nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory, among others. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed in a substantially simultaneous manner or in an opposite order from that shown or discussed, including in accordance with the functions that are involved.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be performed by a program that, when executed, comprises one or a combination of the steps of the method embodiments, instructs the associated hardware to perform the method.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed herein shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of evaluating a negative film image, comprising:
performing negative image quality and weld defect labeling on negative images of a training set to generate a negative image dataset;
optimizing network parameters of a quality assessment network structure based on the negative image dataset, and generating a trained quality assessment model;
optimizing network parameters of a defect evaluation network structure based on the negative image dataset, and generating a trained defect evaluation model;
and carrying out negative quality assessment and defect identification on negative images of a test set based on the trained quality assessment model and the trained defect evaluation model, and comparing the results with detection standards to generate an evaluation result.
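For illustration, the two-stage flow of claim 1 (label, train two models, then evaluate against standards) can be sketched as plain Python. The file names, thresholds, and stand-in "training" are hypothetical placeholders, not from the patent; a real system would train YOLO-family networks here.

```python
# Illustrative sketch of the four steps in claim 1. All names and threshold
# values below are hypothetical; model "training" is a stub.

def label_dataset(raw_images):
    """Step 1: attach quality and weld-defect labels to training negatives."""
    return [{"image": img, "quality": "qualified", "defects": []} for img in raw_images]

def train_model(dataset, task):
    """Steps 2-3: stand-in for optimizing a network's parameters on the dataset."""
    # A real implementation would run gradient descent on a YOLOv3 / YOLACT
    # network; here "training" just records the task and the dataset size.
    return {"task": task, "trained_on": len(dataset)}

def evaluate(quality_model, defect_model, test_images, standard):
    """Step 4: run both models on the test set and compare with a standard."""
    results = []
    for img in test_images:
        quality_score = 1.0  # placeholder for quality-model inference
        defect_count = 0     # placeholder for defect-model inference
        passed = (quality_score >= standard["min_quality"]
                  and defect_count <= standard["max_defects"])
        results.append({"image": img, "passed": passed})
    return results

dataset = label_dataset(["neg_001.png", "neg_002.png"])
q_model = train_model(dataset, task="quality")
d_model = train_model(dataset, task="defect")
report = evaluate(q_model, d_model, ["neg_100.png"],
                  {"min_quality": 0.8, "max_defects": 0})
print(report[0]["passed"])  # → True
```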
2. The method of claim 1, wherein carrying out negative quality assessment and defect identification on the negative images of the test set based on the trained quality assessment model and the trained defect evaluation model, and comparing the results with detection standards to generate an evaluation result, comprises:
identifying the negative images of the test set based on the trained quality assessment model to generate a quality evaluation result;
identifying the negative images of the test set based on the trained defect evaluation model to generate a defect evaluation result;
and comparing the quality evaluation result and the defect evaluation result with their corresponding detection standards, respectively, to generate an evaluation result.
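Claim 2 compares the two results against their own detection standards before merging them into one verdict. A minimal sketch of that split comparison; the density band and defect-size limit are illustrative assumptions, not values from the patent.

```python
# Separate comparisons per claim 2: quality result vs. quality standard,
# defect result vs. defect standard, then a merged evaluation result.
# All thresholds are hypothetical examples.

def compare_quality(quality_result, standard):
    # e.g. film blackness (optical density) must fall inside an allowed band
    return standard["density_min"] <= quality_result["density"] <= standard["density_max"]

def compare_defects(defect_result, standard):
    # e.g. no detected defect may exceed the permitted size
    return all(d["size_mm"] <= standard["max_defect_mm"]
               for d in defect_result["defects"])

def merge(quality_ok, defects_ok):
    return "qualified" if (quality_ok and defects_ok) else "unqualified"

quality_result = {"density": 2.3}
defect_result = {"defects": [{"size_mm": 0.4}]}
standards = {"density_min": 1.8, "density_max": 3.5, "max_defect_mm": 1.0}

verdict = merge(compare_quality(quality_result, standards),
                compare_defects(defect_result, standards))
print(verdict)  # → qualified
```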
3. The method of claim 1, wherein labeling the negative images of the training set for negative image quality and weld defects to generate a negative image dataset comprises:
acquiring an initial negative image;
performing image enhancement expansion on the initial negative image to generate a negative image of a training set;
and (3) labeling the negative image of the training set with negative image quality and weld defects to generate a negative image data set.
4. The method of claim 1, wherein labeling the negative images of the training set for negative image quality and weld defects to generate a negative image dataset comprises:
labeling the negative images of the training set according to weld defect type to generate an initial negative image dataset;
and labeling the initial negative image dataset for negative image quality to generate a negative image dataset.
5. The method of claim 4, wherein labeling the initial negative image dataset for negative image quality to generate a negative image dataset comprises:
labeling the initial negative image dataset according to negative image quality type to generate a negative image dataset, wherein the negative image quality types include substandard blackness, abnormal image quality, abnormal filter plate use, and abnormal backscatter protection.
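The four quality types of claim 5 can be captured as a simple label map. The numeric ids and helper function below are hypothetical illustrations, not part of the patent.

```python
# Hypothetical label map for the four negative quality types named in
# claim 5, plus a tiny labeling helper.

QUALITY_TYPES = {
    0: "substandard blackness",
    1: "abnormal image quality",
    2: "abnormal filter plate use",
    3: "abnormal backscatter protection",
}

def label_quality(record, type_id):
    """Return a copy of the record with the quality label attached."""
    record = dict(record)
    record["quality_label"] = QUALITY_TYPES[type_id]
    return record

sample = {"image": "neg_042.png"}
labeled = label_quality(sample, 0)
print(labeled["quality_label"])  # → substandard blackness
```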
6. The method of claim 1, wherein the quality assessment network structure is a YOLOv3 network.
7. The method of claim 1, wherein the defect evaluation network structure is a YOLACT network.
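Claims 6 and 7 pair the two tasks with different architectures: YOLOv3, a one-stage bounding-box detector, for quality assessment, and YOLACT, a real-time instance-segmentation network, for defect evaluation. A hypothetical task-to-architecture mapping; the field names and helper are illustrative, and a real system would construct the networks with a deep-learning framework.

```python
# Hypothetical configuration pairing each task with the architecture
# named in claims 6-7. Field names are assumptions for illustration.

NETWORKS = {
    "quality": {"arch": "YOLOv3", "output": "bounding boxes"},
    "defect":  {"arch": "YOLACT", "output": "instance masks"},
}

def describe(task):
    """Summarize which architecture handles a task and what it outputs."""
    cfg = NETWORKS[task]
    return f"{cfg['arch']} ({cfg['output']})"

print(describe("defect"))  # → YOLACT (instance masks)
```

The split reflects what each task needs: quality assessment only has to localize and classify regions, while defect evaluation benefits from per-pixel masks of each weld flaw.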
8. A negative image assessment apparatus, comprising:
the first generation module is used for performing negative image quality and weld defect labeling on negative images of the training set to generate a negative image dataset;
the second generation module is used for optimizing network parameters of the quality assessment network structure based on the negative image dataset and generating a trained quality assessment model;
the third generation module is used for optimizing network parameters of the defect evaluation network structure based on the negative image dataset and generating a trained defect evaluation model;
and the fourth generation module is used for identifying the negative images of the test set based on the trained quality assessment model and the trained defect evaluation model, and comparing the results with the detection standards to generate an evaluation result.
9. An electronic device, comprising: a processor and a memory in which instructions are stored, the instructions being loaded and executed by the processor to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202311470987.9A 2023-11-07 2023-11-07 Method, device, equipment and storage medium for evaluating negative film image Pending CN117576007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311470987.9A CN117576007A (en) 2023-11-07 2023-11-07 Method, device, equipment and storage medium for evaluating negative film image


Publications (1)

Publication Number Publication Date
CN117576007A 2024-02-20

Family

ID=89889075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311470987.9A Pending CN117576007A (en) 2023-11-07 2023-11-07 Method, device, equipment and storage medium for evaluating negative film image

Country Status (1)

Country Link
CN (1) CN117576007A (en)

Similar Documents

Publication Publication Date Title
CN110570410B (en) Detection method for automatically identifying and detecting weld defects
JP6936957B2 (en) Inspection device, data generation device, data generation method and data generation program
CN110349145B (en) Defect detection method, defect detection device, electronic equipment and storage medium
JP2024509411A (en) Defect detection method, device and system
CN110097547B (en) Automatic detection method for welding seam negative film counterfeiting based on deep learning
CN111401419A (en) Improved RetinaNet-based employee dressing specification detection method
CN114372955A (en) Casting defect X-ray diagram automatic identification method based on improved neural network
CN112700442A (en) Die-cutting machine workpiece defect detection method and system based on Faster R-CNN
CN113222913B (en) Circuit board defect detection positioning method, device and storage medium
CN116309409A (en) Weld defect detection method, system and storage medium
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN110619619A (en) Defect detection method and device and electronic equipment
CN111242899A (en) Image-based flaw detection method and computer-readable storage medium
CN109685756A (en) Image feature automatic identifier, system and method
CN116977239A (en) Defect detection method, device, computer equipment and storage medium
CN111311545A (en) Container detection method, device and computer readable storage medium
CN111311546A (en) Container detection method, device and computer readable storage medium
TW202219494A (en) A defect detection method and a defect detection device
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN114881987A (en) Improved YOLOv 5-based hot-pressing light guide plate defect visual detection method
CN111275682A (en) Container detection method, device and computer readable storage medium
CN114078127A (en) Object defect detection and counting method, device, equipment and storage medium
CN116681677A (en) Lithium battery defect detection method, device and system
CN117576007A (en) Method, device, equipment and storage medium for evaluating negative film image
CN114972757B (en) Tunnel water leakage area identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination