CN117635547B - Method and device for detecting appearance defects of medical gloves - Google Patents


Info

Publication number: CN117635547B (application CN202311522359.0A)
Authority: CN (China)
Prior art keywords: medical glove, detection model, station, target, image
Legal status: Active, granted (the listed status is an assumption by Google, not a legal conclusion)
Other versions: CN117635547A (Chinese, zh)
Inventors: 陈宏彩, 程煜, 任亚恒, 吴立龙
Original and current assignee: Institute Of Applied Mathematics Hebei Academy Of Sciences (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Institute Of Applied Mathematics Hebei Academy Of Sciences, with priority to CN202311522359.0A; publication of application CN117635547A, application granted, publication of CN117635547B


Classifications

    • Y02P 90/30 — Computing systems specially adapted for manufacturing
      (Y: general tagging of new technological developments and cross-sectional technologies; Y02: technologies for mitigation or adaptation against climate change; Y02P: climate change mitigation technologies in the production or processing of goods; Y02P 90/00: enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation)

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure provides a method and a device for detecting appearance defects of medical gloves, belonging to the field of deep learning. The method comprises: inputting a photographed image of a target medical glove into a pre-trained first detection model to determine the station to which the target medical glove belongs; selecting the weight coefficients used by a second detection model based on that station; and inputting the photographed image into the second detection model configured with those weight coefficients to detect appearance defects of the target medical glove. The method and device improve the detection rate of appearance defects, particularly small defects, reduce the false-detection and miss-detection rates, improve the generalization ability of the detection model, replace manual quality inspection, raise production efficiency, and promote the construction of intelligent production lines.

Description

Method and device for detecting appearance defects of medical gloves
Technical Field
The disclosure belongs to the technical field of deep learning, and more particularly relates to a method and a device for detecting appearance defects of medical gloves.
Background
With the rapid development of artificial intelligence, and of deep learning technology in particular, defect detection based on deep neural network models has become a hot spot of research and application. Against continuously tightening market certification systems and admission standards for medical gloves, existing appearance-defect detection methods still face the following unresolved problems:
(1) Gloves are flexible products whose defects vary widely in type and size; facing such varied defects, the detection accuracy of existing methods still needs improvement;
(2) Existing deep-learning defect detection methods cannot effectively detect defect types they were not pre-trained on;
(3) Deep-learning object detection methods have difficulty with small-target detection and cannot meet the high standards required of medical gloves.
Therefore, how to detect the appearance defects of medical gloves accurately and efficiently is a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for detecting appearance defects of medical gloves, so as to improve the detection efficiency and stability of the appearance quality of the medical gloves.
In a first aspect of the embodiments of the present disclosure, a method for detecting appearance defects of a medical glove is provided, comprising:
inputting the photographed image of the target medical glove into a pre-trained first detection model to determine the station to which the target medical glove belongs;
selecting the weight coefficients adopted by a second detection model based on the station to which the target medical glove belongs; and
inputting the photographed image into the second detection model configured with the determined weight coefficients, so as to detect appearance defects of the target medical glove.
In a second aspect of embodiments of the present disclosure, there is provided a medical glove appearance defect detection apparatus, including:
an image input module, used for inputting the photographed image of the target medical glove into the pre-trained first detection model, so as to determine the station to which the target medical glove belongs;
a weight determining module, used for selecting the weight coefficients adopted by the second detection model based on the station to which the target medical glove belongs; and
an image detection module, used for inputting the photographed image into the second detection model configured with the determined weight coefficients, so as to detect appearance defects of the target medical glove.
In a third aspect of the embodiments of the present disclosure, a medical glove appearance defect detection terminal is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the medical glove appearance defect detection method described above when executing the computer program.
In a fourth aspect of the disclosed embodiments, there is provided a computer readable storage medium storing a computer program which when executed by a processor implements the steps of the medical glove appearance defect detection method described above.
The medical glove appearance defect detection method and device provided by the embodiment of the disclosure have the beneficial effects that:
According to the medical glove appearance defect detection method and device, in order to accurately detect the appearance defects of different types of medical gloves, the appearance-defect types are classified according to the medical-glove images photographed at the different stations. The first detection model is pre-trained; based on it, the station to which a medical-glove image belongs is identified, the weight coefficients corresponding to that station are selected, and the different types of medical-glove appearance defects are then accurately identified.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are required for the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a method for detecting defects in appearance of a medical glove according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a network structure of a second detection model according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a network structure of an ELAN module according to one embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an EST-ACmix module network according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a medical glove appearance defect detection apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic block diagram of a medical glove appearance defect detection terminal according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for detecting an appearance defect of a medical glove according to an embodiment of the disclosure, where the method includes:
S101: the shot image of the target medical glove is input into a pre-trained first detection model to determine the station to which the target medical glove belongs.
In this embodiment, the target medical glove is the medical glove to be inspected. An image is captured of the target medical glove and input into the pre-trained first detection model. The first detection model is a lightweight object-detection model; a reference choice is the YOLOv7-tiny model. It identifies and classifies the photographed image of the target medical glove so as to determine the station to which the glove belongs. Different stations can be defined to classify the photographed medical-glove images; as a reference classification, the images may be grouped by photographing station into a pre-demolding station, a port-supporting station, and a demolding station.
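A minimal sketch of this routing step, assuming the first model exposes per-station class scores; the station names follow the classification reference above, and `classify_station` is a hypothetical helper, not the patent's implementation:

```python
# Illustrative sketch of step S101: route a glove image to a station label
# based on the first model's per-station scores (an assumed interface).

STATIONS = ["pre-demolding", "port-supporting", "demolding"]

def classify_station(class_scores):
    """Return the station whose score is highest."""
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return STATIONS[best]

# A glove image whose highest score falls on the second class is routed
# to the port-supporting station.
print(classify_station([0.1, 0.7, 0.2]))  # port-supporting
```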
S102: and selecting a weight coefficient adopted by the second detection model based on the station to which the target medical glove belongs.
In this embodiment, the weight coefficients corresponding to the different stations are determined through pre-training. After the station to which the photographed image of the target medical glove belongs is determined, the corresponding weight coefficients, i.e. the weight coefficients to be adopted by the second detection model, are selected according to that station.
The second detection model is a fused object-detection model; a reference choice is the YOLOv7 model, a detection model based on deep learning. The weight coefficients are obtained by training on the medical-glove images photographed at the different stations, and their magnitudes reflect the degree of feature extraction and model training.
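The station-to-weights lookup of step S102 can be sketched as a simple mapping; the weight-file names below are hypothetical placeholders for the per-station coefficients obtained during training:

```python
# Illustrative sketch of step S102: select the second model's weight
# coefficients by station. File names are assumed placeholders.

STATION_WEIGHTS = {
    "pre-demolding":   "weights_pre_demolding.pt",
    "port-supporting": "weights_port_supporting.pt",
    "demolding":       "weights_demolding.pt",
}

def select_weights(station):
    """Return the weight set trained for the given station."""
    try:
        return STATION_WEIGHTS[station]
    except KeyError:
        raise ValueError(f"unknown station: {station}")

print(select_weights("demolding"))  # weights_demolding.pt
```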
S103: the photographed image is input into a second detection model for determining the weight coefficient to detect the appearance defect of the target medical glove.
In this embodiment, the photographed image of the target medical glove, having been identified and classified by the first detection model, is input into the second detection model configured with the determined weight coefficients. The second detection model detects appearance defects of the target medical glove, marks the positions where appearance defects exist, and outputs the detection result. The types of medical-glove appearance defects can be defined and set according to actual production requirements; as a reference classification, they may be divided into five classes, encoded as Defect = ['0', '1', '2', '3', '4'], where the numerals 0, 1, 2, 3, 4 respectively denote the appearance-defect classes 'youwu', 'qita', 'posun', 'yuliao', and 'weituomo'.
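The class encoding above can be sketched as a lookup from predicted class id to defect name; the English glosses in the comments are the editor's reading of the pinyin class names, not part of the patent:

```python
# Illustrative sketch of the defect-class mapping used in step S103.
# English glosses (in comments) are assumptions, not from the source.

DEFECT_CLASSES = {
    0: "youwu",     # assumed gloss: oil stain
    1: "qita",      # assumed gloss: other
    2: "posun",     # assumed gloss: breakage
    3: "yuliao",    # assumed gloss: residual material
    4: "weituomo",  # assumed gloss: not demolded
}

def decode_defects(class_ids):
    """Translate predicted class ids into defect-class names."""
    return [DEFECT_CLASSES[i] for i in class_ids]

print(decode_defects([4, 2]))  # ['weituomo', 'posun']
```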
As can be seen from the above, the method for detecting the appearance defects of the medical glove provided by the embodiment of the disclosure determines, from the photographed image of the target medical glove, the station to which the glove belongs, selects the corresponding weight coefficients, and inputs the image into the second detection model for appearance-defect detection. Appearance defects of medical gloves at different stations can thus be accurately identified, improving both the detection accuracy and the detection efficiency.
In one embodiment of the disclosure, station labeling is performed on sample images of a plurality of medical gloves to obtain a first training sample set; and training based on the first training sample set to obtain a first detection model.
In this embodiment, photographed medical-glove sample images from the plurality of stations are labeled by station to obtain the first training sample set. The photographing device may include an industrial camera and a light source. Station labeling means annotating the sample images of medical gloves photographed at the different stations; labeling tools such as LabelImg or PaddlePaddle EasyDL may be used.
In this embodiment, the station to which each sample image belongs may be marked, including but not limited to a pre-demolding station, a port-supporting station, and a demolding station.
In this embodiment, the medical glove sample images of a plurality of different stations after the station labeling is completed form a first training sample set, the first training sample set is trained to obtain a first detection model, and the first detection model can automatically identify the stations to which the different medical glove sample images belong.
In one embodiment of the present disclosure, the first detection model is further used to locate a target region of the first image, and resize the first image based on the target region; the target region is the smallest image region that contains the target medical glove.
In this embodiment, the first detection model also performs a series of pre-detection preprocessing operations on the target medical-glove image: it locates the target region of the first image and resizes the first image based on that region, where the first image is the photographed image of the target medical glove currently under detection, and the target region is the smallest image region of the first image that contains the target medical glove. The first detection model preprocesses all target medical gloves in batches, and the resized first image is input into the second detection model. This preprocessing increases the processing speed of the second detection model and, at the same time, improves its small-target detection accuracy.
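The crop-to-target-region step can be sketched as follows, assuming (for illustration only) that the glove's location is available as a binary foreground mask:

```python
# Illustrative sketch of the preprocessing step: crop the image to the
# smallest region containing the glove. The binary mask input is an
# assumption; the patent's first model localizes the region itself.
import numpy as np

def crop_to_target(image, mask):
    """Crop `image` to the bounding box of the nonzero pixels of `mask`."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

img = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True          # glove occupies rows 2-3, cols 1-4
print(crop_to_target(img, mask).shape)  # (2, 4)
```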
In one embodiment of the disclosure, dividing sample images of a plurality of medical gloves according to stations to which the sample images belong to obtain sample images corresponding to each station; training the second detection model based on the sample image corresponding to each station to determine the weight coefficient of the second detection model corresponding to each station.
In this embodiment, the sample images of the plurality of medical gloves photographed at the different stations are divided according to the station to which they belong, giving the sample images corresponding to each station. Each sample image is annotated; for example, the annotation `4 0.09912109375 0.4124348958333333 0.1708984375 0.41080729166666663` indicates that the appearance-defect class of the medical glove is 'weituomo' ('4'), followed by the normalized position coordinates of the center point of the appearance defect and of its labeling frame. The sample images corresponding to each station are input into the second detection model for training, and the weight coefficients corresponding to each station are obtained from the training results, thereby determining the weight coefficients of the second detection model for each station.
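Decoding such an annotation line can be sketched as below; the YOLO-style reading (class id, then normalized center x/y and box width/height) and the 1024x768 image size are assumptions for illustration:

```python
# Illustrative sketch: parse one annotation line into pixel coordinates.
# The field order (class, cx, cy, w, h) and the image size are assumed.

def parse_label(line, img_w, img_h):
    """Return (class_id, center_x, center_y, box_w, box_h) in pixels."""
    parts = line.split()
    cls = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:])
    return cls, cx * img_w, cy * img_h, w * img_w, h * img_h

line = "4 0.09912109375 0.4124348958333333 0.1708984375 0.41080729166666663"
cls, cx, cy, w, h = parse_label(line, 1024, 768)
print(cls, cx)  # 4 101.5
```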
In one embodiment of the present disclosure, the second detection model includes a Backbone network Backbone, a neck network Neck, and a Head network Head; the last layer of the Backbone network Backbone is used for extracting window multi-head self-attention scale features and convolution features, and fusion features are generated based on the window multi-head self-attention scale features and the convolution features.
In this embodiment, as shown in fig. 2, the second detection model comprises three parts: a Backbone network (Backbone), a neck network (Neck) and a Head network (Head). The Backbone is divided into a convolution feature extraction module and a fusion feature generation module. The convolution feature extraction module is a CBS (Convolution-BatchNorm-SiLU) convolution module used for deep extraction of convolution features. The fusion feature generation module is the EST-ACmix module shown in fig. 4, used for extracting window multi-head self-attention scale features and convolution features; its network structure is based on the fusion of a Swin Transformer module (a window multi-head self-attention mechanism) and an ELAN (Efficient Layer Aggregation Network) module. The Swin Transformer module extracts the window multi-head self-attention scale features and the ELAN module extracts the convolution features, which effectively strengthens the learning ability of the model; fig. 3 shows the structure of the ELAN module. The fusion mechanism connects the window multi-head self-attention scale features and the convolution features through the ACmix module, which fuses the two extracted feature types to generate and output the fusion features.
In this embodiment, the neck network Neck compresses and integrates the fusion features output by the Backbone network so as to carry out the medical-glove appearance-defect detection task efficiently. The Neck adopts an SPP-PAN network structure, which combines an SPP (Spatial Pyramid Pooling) layer with a PAN (Path Aggregation Network) to improve the detection accuracy of the method.
In this embodiment, the Head network Head predicts the position information and the type information of the medical-glove appearance defects. The Head structure fuses an FPN (Feature Pyramid Network) with YOLO feature aggregation modules to improve both the accuracy and the speed of the method.
In one embodiment of the present disclosure, the fusion feature is determined by a first formula:

F_fuse = α · F_att + β · F_conv^(i)

where F_fuse is the fusion feature generated and output by the Backbone network, F_att is the window multi-head self-attention scale feature extracted by the Swin Transformer module, F_conv^(i) is the convolution feature extracted by the CBS convolution modules in the ELAN module, i is the number of convolution layers of the CBS convolution module, and α and β are the weight coefficients corresponding to the window multi-head self-attention scale feature and the convolution feature, respectively. The window multi-head self-attention scale features and the convolution features are fused, and the fusion feature is output.
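Numerically, the fusion is a weighted sum of the two feature branches; the toy 2x2 "feature maps" and the weight values below are illustrative only:

```python
# Numeric sketch of the fusion formula: fused = alpha * attention feature
# + beta * convolution feature. Shapes and weights are illustrative.
import numpy as np

def fuse(f_att, f_conv, alpha, beta):
    """Weighted elementwise fusion of the two feature branches."""
    return alpha * f_att + beta * f_conv

f_att = np.array([[1.0, 2.0], [3.0, 4.0]])   # window attention feature
f_conv = np.array([[0.5, 0.5], [0.5, 0.5]])  # convolution feature
fused = fuse(f_att, f_conv, alpha=0.6, beta=0.4)
print(fused[0, 0])  # 0.8
```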
In one embodiment of the present disclosure, the gradient control module of the Backbone network and the convolution extraction (CBS) module of the neck network Neck are connected through the convolution-self-attention fusion module ACmix.
In this embodiment, the gradient control module of the Backbone network may be an ELAN module; it is connected to the CBS convolution extraction module of the Neck through the convolution-self-attention fusion module ACmix, so as to extract the window multi-head self-attention scale features and the convolution features.
In an embodiment of the present disclosure, an evaluation index may be further set to quantitatively evaluate the second detection model, where the evaluation index includes:
P (Precision), R (Recall), and mAP (mean Average Precision).
In this embodiment, P is the proportion of images that actually have appearance defects among all images flagged in the second detection model's output, a measure of the model's accuracy; R is the proportion of defective medical-glove images found in the second detection model's output among the total number of defective target medical-glove images, a measure of the model's recall; and mAP is the mean, over all defect classes, of the average precision (the area under the precision-recall curve).
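These metrics can be sketched directly from detection counts; the counts and per-class AP values below are illustrative numbers, not results from the patent:

```python
# Illustrative sketch of the evaluation metrics. Inputs are made-up counts.

def precision(tp, fp):
    """Fraction of flagged detections that are true defects."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual defects that were detected."""
    return tp / (tp + fn)

def mean_average_precision(per_class_ap):
    """Mean of the per-class average precisions."""
    return sum(per_class_ap) / len(per_class_ap)

print(precision(90, 10))   # 0.9
print(recall(90, 30))      # 0.75
print(mean_average_precision([0.75, 0.75, 0.75, 0.875, 0.875]))  # 0.8
```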
In one embodiment of the present disclosure, an experimental environment may also be configured to experimentally verify the method of detecting the appearance defects of the medical glove.
In this embodiment, a reference experimental environment is provided: the Ubuntu 18.04 operating system with an NVIDIA V100 graphics card. The parameter settings of YOLOv7 are used as a reference, and model training uses the YOLOv7 data-enhancement strategy with a batch size of 32, an initial learning rate of 0.01, a weight decay rate of 0.0005, a momentum factor of 0.937, and 300 iterations. The method for detecting the appearance defects of the medical glove is experimentally verified in this environment.
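The hyper-parameters quoted above can be collected into a configuration mapping for reference; the key names are an assumed convention, while the values come from the text:

```python
# Training hyper-parameters from the reference experimental environment,
# gathered into a dict. Key names are assumed; values are from the text.

TRAIN_CONFIG = {
    "batch_size": 32,
    "initial_learning_rate": 0.01,
    "weight_decay": 0.0005,
    "momentum": 0.937,
    "epochs": 300,
}

print(TRAIN_CONFIG["batch_size"], TRAIN_CONFIG["epochs"])  # 32 300
```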
In one embodiment of the present disclosure, a comparative experiment may also be provided and the experimental results may be subjected to comparative analysis.
In this embodiment, a reference comparative experiment is provided: 7800 target medical-glove images are captured and collected, of which 4000 randomly selected images form the first training sample set and the remaining 3800 form the test set. The training and test sets are respectively input into the medical-glove appearance-defect detection method of this embodiment and into three comparison methods, the YOLOv7 algorithm, an improved YOLOv7 algorithm, and the CATBiFPN method, and the experimental results are comparatively analyzed, as shown in table 1.
The experimental results show that, compared with the YOLOv7 algorithm, the improved YOLOv7 algorithm improves the accuracy of medical-glove appearance-defect detection to a certain extent, but its processing speed drops considerably; the CATBiFPN method improves detection accuracy only slightly over the YOLOv7 algorithm, with a similar processing speed; and the method of this embodiment greatly improves precision, recall, and mAP over the other three methods, while its processing speed remains close to that of the YOLOv7 algorithm. The experimental results prove that the method for detecting the appearance defects of the medical glove can detect the various appearance defects of medical gloves effectively and accurately while maintaining the processing speed of the original YOLOv7 algorithm.
Table 1 comparison of results of different methods
Corresponding to the method for detecting the appearance defects of the medical glove according to the above embodiment, fig. 5 is a block diagram of a device for detecting the appearance defects of the medical glove according to an embodiment of the disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 5, the medical glove appearance defect detecting apparatus 20 includes: an image input module 21, a weight determination module 22 and an image detection module 23.
The image input module 21 is configured to input a captured image of the target medical glove into the first pre-trained detection model, so as to determine a station to which the target medical glove belongs.
The weight determination module 22 is used for selecting the weight coefficients adopted by the second detection model based on the station to which the target medical glove belongs.
The image detection module 23 is used for inputting the photographed image into the second detection model configured with the determined weight coefficients, so as to detect appearance defects of the target medical glove.
In one embodiment of the present disclosure, the image input module 21 is further configured to:
performing station labeling on sample images of a plurality of medical gloves to obtain a first training sample set; and training based on the first training sample set to obtain a first detection model.
In one embodiment of the present disclosure,
The first detection model is also used for locating a target area of the first image and adjusting the size of the first image based on the target area; the target region is the smallest image region that contains the target medical glove.
In one embodiment of the present disclosure, the image input module 21 is further configured to:
dividing sample images of a plurality of medical gloves according to the stations to which the medical gloves belong to obtain sample images corresponding to each station; training the second detection model based on the sample image corresponding to each station to determine the weight coefficient of the second detection model corresponding to each station.
In one embodiment of the present disclosure, the second detection model includes a Backbone network Backbone, a neck network Neck, and a Head network Head; the last layer of the Backbone network Backbone is used for extracting window multi-head self-attention scale features and convolution features, and fusion features are generated based on the window multi-head self-attention scale features and the convolution features.
In one embodiment of the present disclosure, the image input module 21 is specifically configured to:
determining the fusion feature through a first formula, wherein the first formula is:

F_fuse = α · F_att + β · F_conv^(i)

where F_fuse is the fusion feature, F_att is the window multi-head self-attention scale feature extracted by the backbone network, F_conv^(i) is the convolution feature extracted by the backbone network, α is the weight coefficient corresponding to the window multi-head self-attention scale feature, β is the weight coefficient corresponding to the convolution feature, and i is the number of convolution layers.
In one embodiment of the present disclosure,
The gradient control module of the Backbone network and the convolution extraction (CBS) module of the neck network Neck are connected through the convolution-self-attention fusion module ACmix.
Referring to fig. 6, fig. 6 is a schematic block diagram of a medical glove appearance defect detection terminal according to an embodiment of the present disclosure. The terminal 300 in the present embodiment as shown in fig. 6 may include: one or more processors 301, one or more input devices 302, one or more output devices 303, and one or more memories 304. The processor 301, the input device 302, the output device 303, and the memory 304 communicate with each other via a communication bus 305. The memory 304 is used to store a computer program comprising program instructions. The processor 301 is configured to execute program instructions stored in the memory 304. Wherein the processor 301 is configured to invoke program instructions to perform the following functions of the modules/units in the above described device embodiments, such as the functions of the modules 21 to 23 shown in fig. 5.
It should be appreciated that in the disclosed embodiments, the processor 301 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The input device 302 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 303 may include a display (LCD, etc.), a speaker, etc.
The memory 304 may include read only memory and random access memory and provides instructions and data to the processor 301. A portion of memory 304 may also include non-volatile random access memory. For example, the memory 304 may also store information of device type.
In a specific implementation, the processor 301, the input device 302, and the output device 303 described in the embodiments of the present disclosure may perform the implementation manners described in the first embodiment and the second embodiment of the method for detecting an appearance defect of a medical glove provided in the embodiments of the present disclosure, and may also perform the implementation manner of the terminal described in the embodiments of the present disclosure, which is not described herein again.
In another embodiment of the disclosure, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, implement all or part of the procedures in the method embodiments described above. The procedures may also be completed by instructing related hardware through the computer program, which may be stored in a computer-readable storage medium; when executed by the processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The computer-readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, such as a hard disk or a memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal. Further, the computer-readable storage medium may include both an internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working procedures of the terminal and the units described above, reference may be made to the corresponding procedures in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided by the present disclosure, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present disclosure.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a specific embodiment of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any modification or substitution readily conceivable by those skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. The method for detecting the appearance defects of the medical glove is characterized by comprising the following steps of:
Inputting a photographed image of a target medical glove into a pre-trained first detection model to determine a station to which the target medical glove belongs;
selecting a weight coefficient adopted by a second detection model based on a station to which the target medical glove belongs; the second detection model is a YOLOv-based fusion model; the weight coefficient is a coefficient obtained by inputting medical glove images shot at different stations into a first detection model for training and is used for representing the degree of feature extraction and model training;
inputting the photographed image into the second detection model for determining a weight coefficient to detect an appearance defect of the target medical glove.
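Claim 1 describes a two-stage pipeline: a first model classifies the station of the photographed glove image, a station-specific weight coefficient is selected for the second detection model, and the second model then detects appearance defects. The following is a minimal control-flow sketch; the models, weight table, and defect labels are hypothetical stand-ins, not the patented implementation:

```python
# Hypothetical sketch of the two-stage flow in claim 1. Only the control
# flow mirrors the claim; the model internals are toy stand-ins.
from typing import Callable, Dict, List

def detect_defects(
    image,                                # photographed glove image
    station_classifier: Callable,         # first detection model (station)
    weights_by_station: Dict[str, dict],  # per-station weight coefficients
    build_detector: Callable,             # builds second model from weights
) -> List[str]:
    station = station_classifier(image)       # step 1: determine the station
    weights = weights_by_station[station]     # step 2: select weight coefficients
    detector = build_detector(weights)        # configure the second model
    return detector(image)                    # step 3: detect appearance defects

# Toy stand-ins demonstrating the control flow.
classifier = lambda img: "palm-up" if img["side"] == 0 else "palm-down"
weight_table = {"palm-up": {"alpha": 0.7}, "palm-down": {"alpha": 0.4}}
make_detector = lambda w: (lambda img: ["pinhole"] if w["alpha"] > 0.5 else [])

print(detect_defects({"side": 0}, classifier, weight_table, make_detector))  # ['pinhole']
```

The point of the sketch is that the second model's behavior changes per station only through the selected weights, as the claim requires.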
2. The method for detecting defects in the appearance of a medical glove according to claim 1, further comprising:
Performing station labeling on sample images of a plurality of medical gloves to obtain a first training sample set;
and training based on the first training sample set to obtain a first detection model.
3. The medical glove appearance defect detection method of claim 1, wherein the first detection model is further used to locate a target region of a first image, and to resize the first image based on the target region;
the target area is a minimum image area containing the target medical glove.
4. The method for detecting defects in the appearance of a medical glove according to claim 1, further comprising:
dividing sample images of a plurality of medical gloves according to the stations to which the medical gloves belong to obtain sample images corresponding to each station;
training the second detection model based on the sample image corresponding to each station to determine the weight coefficient of the second detection model corresponding to each station.
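Claim 4 groups sample images by the station they belong to and trains the second detection model on each group to obtain a per-station weight coefficient. A minimal sketch of that grouping step, with a stand-in `train_on` callable in place of actual model training:

```python
# Hypothetical sketch of claim 4: split samples by station, then produce
# one weight coefficient per station. "train_on" stands in for training
# the second detection model on a station's sample images.
from collections import defaultdict
from typing import Callable, Dict, Iterable, Tuple

def weights_per_station(
    samples: Iterable[Tuple[str, object]],  # (station_id, sample image) pairs
    train_on: Callable,
) -> Dict[str, object]:
    groups = defaultdict(list)
    for station, image in samples:
        groups[station].append(image)       # divide samples by station
    # "train" once per station to determine that station's coefficient
    return {station: train_on(imgs) for station, imgs in groups.items()}

data = [("A", "img1"), ("B", "img2"), ("A", "img3")]
coeffs = weights_per_station(data, train_on=lambda imgs: round(0.1 * len(imgs), 1))
print(coeffs)  # {'A': 0.2, 'B': 0.1}
```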
5. The method for detecting defects in the appearance of a medical glove according to claim 1, further comprising:
the second detection model comprises a backbone network, a neck network and a head network;
The last layer of the backbone network is used for extracting window multi-head self-attention scale features and convolution features, and fusion features are generated based on the window multi-head self-attention scale features and the convolution features.
6. The method of claim 5, wherein the fusion feature is determined by a first formula:
F = α · S + β · Σ_{i=1}^{n} C_i
wherein F is the fusion feature, S is the window multi-head self-attention scale feature extracted by the backbone network, C_i is the i-th convolution feature extracted by the backbone network, α is the weight coefficient corresponding to the window multi-head self-attention scale feature, β is the weight coefficient corresponding to the convolution feature, and n is the number of convolutions.
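Claim 6 combines the window multi-head self-attention scale feature and the convolution features as a weighted sum. A minimal numeric sketch, assuming the fusion is α times the attention feature plus β times the sum of the n convolution features (symbol and function names here are illustrative, not from the patent):

```python
# Numeric sketch of the weighted-sum fusion described in claim 6,
# on flattened feature vectors. All names are illustrative.
from typing import List

def fuse(attn_feat: List[float], conv_feats: List[List[float]],
         alpha: float, beta: float) -> List[float]:
    # element-wise: alpha * S + beta * sum over the n convolution features
    summed = [sum(vals) for vals in zip(*conv_feats)]
    return [alpha * s + beta * c for s, c in zip(attn_feat, summed)]

S = [1.0, 1.0]                  # attention feature (flattened)
C = [[2.0, 2.0], [3.0, 3.0]]    # n = 2 convolution features
F = fuse(S, C, alpha=0.6, beta=0.2)
# each element: 0.6 * 1 + 0.2 * (2 + 3) = 1.6
print(F)
```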
7. The method for detecting defects in appearance of medical gloves according to claim 5, further comprising: the gradient control module of the backbone network and the convolution extraction module of the neck network are connected through a convolution-self-attention fusion module.
8. A medical glove appearance defect detection device, comprising:
The image input module is used for inputting the shot image of the target medical glove into the first pre-trained detection model so as to determine the station of the target medical glove;
The weight determining module is used for selecting a weight coefficient adopted by a second detection model based on a station to which the target medical glove belongs; the second detection model is a YOLOv-based fusion model; the weight coefficient is a coefficient obtained by inputting medical glove images shot at different stations into a first detection model for training and is used for representing the degree of feature extraction and model training;
And the image detection module is used for inputting the shot image into the second detection model for determining the weight coefficient so as to detect the appearance defect of the target medical glove.
9. A medical glove appearance defect detection terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202311522359.0A 2023-11-15 2023-11-15 Method and device for detecting appearance defects of medical gloves Active CN117635547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311522359.0A CN117635547B (en) 2023-11-15 2023-11-15 Method and device for detecting appearance defects of medical gloves

Publications (2)

Publication Number Publication Date
CN117635547A (en) 2024-03-01
CN117635547B (en) 2024-05-14

Family

ID=90024487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311522359.0A Active CN117635547B (en) 2023-11-15 2023-11-15 Method and device for detecting appearance defects of medical gloves

Country Status (1)

Country Link
CN (1) CN117635547B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598858A (en) * 2020-05-12 2020-08-28 上海大学 Method and system for detecting rubber gloves based on transfer learning
CN113834814A (en) * 2021-09-09 2021-12-24 北京云屿科技有限公司 Glove surface defect detection device
CN115063366A (en) * 2022-06-14 2022-09-16 山东瑞邦自动化设备有限公司 Multi-model prediction method for defective glove identification
CN115318671A (en) * 2022-08-04 2022-11-11 山东瑞邦自动化设备有限公司 Defective glove identification and elimination system based on multi-station visual detection
CN116899901A (en) * 2023-07-17 2023-10-20 山东瑞邦智能装备股份有限公司 Defect glove detecting and eliminating system and method
CN219891120U (en) * 2023-04-26 2023-10-24 山东英创智能科技有限公司 Glove defect detection system for double-sided detection


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CCA-YOLO: An Improved Glove Defect Detection Algorithm Based on YOLOv5; Huilong Jin et al.; Applied Sciences; 20230910; Vol. 13 (No. 18); 1-16 *
Research on Latex Glove Size Measurement and Defect Detection Based on Machine Vision; Zhang Zhongkai; China Masters' Theses Full-text Database (Engineering Science and Technology I); 20210115; B016-771 *
Research on Industrial Glove Defect Detection with an Improved Front-end Lightweight Network; Wang Ben et al.; Fujian Computer; 20230501; Vol. 39 (No. 05); 16-20 *


Similar Documents

Publication Publication Date Title
CN111160269A (en) Face key point detection method and device
CN111008640B (en) Image recognition model training and image recognition method, device, terminal and medium
CN110060237A (en) A kind of fault detection method, device, equipment and system
CN111695463B (en) Training method of face impurity detection model and face impurity detection method
CN112819821B (en) Cell nucleus image detection method
CN108986075A (en) A kind of judgment method and device of preferred image
CN116363123B (en) Fluorescence microscopic imaging system and method for detecting circulating tumor cells
CN109190622A (en) Epithelial cell categorizing system and method based on strong feature and neural network
CN115880298A (en) Glass surface defect detection method and system based on unsupervised pre-training
CN109117746A (en) Hand detection method and machine readable storage medium
CN113850799A (en) YOLOv 5-based trace DNA extraction workstation workpiece detection method
CN115439395A (en) Defect detection method and device for display panel, storage medium and electronic equipment
CN115984543A (en) Target detection algorithm based on infrared and visible light images
CN116385430A (en) Machine vision flaw detection method, device, medium and equipment
CN117152484A (en) Small target cloth flaw detection method for improving YOLOv5s
CN115359264A (en) Intensive distribution adhesion cell deep learning identification method
CN117635547B (en) Method and device for detecting appearance defects of medical gloves
CN109508582A (en) The recognition methods of remote sensing image and device
CN114170642A (en) Image detection processing method, device, equipment and storage medium
CN115424000A (en) Pointer instrument identification method, system, equipment and storage medium
CN114037868B (en) Image recognition model generation method and device
CN111582057B (en) Face verification method based on local receptive field
CN115423802A (en) Automatic classification and segmentation method for squamous epithelial tumor cell picture based on deep learning
CN111191575B (en) Naked flame detection method and system based on flame jumping modeling
CN114219073A (en) Method and device for determining attribute information, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant