CN105869154A - Visible foreign matter and bubble classification and recognition detection method for medical large-volume (250 ml) infusions - Google Patents


Info

Publication number
CN105869154A
CN105869154A (application CN201610176153.0A; granted publication CN105869154B)
Authority
CN
China
Prior art keywords
image
network model
defect
hidden layer
elm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610176153.0A
Other languages
Chinese (zh)
Other versions
CN105869154B (en)
Inventor
张辉
师统
阮峰
吴成中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Publication of CN105869154A publication Critical patent/CN105869154A/en
Application granted granted Critical
Publication of CN105869154B publication Critical patent/CN105869154B/en
Expired - Fee Related


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting, classifying, and recognising visible foreign matter and bubbles in medical large-volume (250 ml) injections. The method comprises the following steps: 1) continuously acquiring a sequence of images of the large-volume injection under inspection; 2) image pre-processing: filtering the images with a Top-Hat morphological filter; 3) image segmentation: segmenting the filtered images with an inter-frame difference method based on maximum information entropy; 4) defect edge extraction: extracting the edges of visible foreign matter and bubbles with the SUSAN algorithm; 5) image feature extraction: extracting the shape, grey-level, and motion feature parameters that describe foreign matter and bubbles, based on an analysis of the characteristics of the defects; 6) defect classification and recognition: recognising and classifying foreign matter and bubbles with an IDS-ELM algorithm. The invention achieves the recognition and classification of foreign matter and bubbles, can accurately classify and recognise the various defect types, and allows products containing different types of defects to be rejected and routed to separate reject areas.

Description

Classification and recognition method for detecting visible foreign matter and bubbles in 250 ml medical large-volume infusions
Technical field
The invention belongs to the field of image processing and automation technology, and in particular relates to a method for detecting, classifying, and recognising visible foreign matter and bubbles in 250 ml medical large-volume infusions.
Background technology
At home and abroad, infusion solutions are mainly packaged in glass bottles, plastic bottles, and soft plastic bags. In China, the glass bottle is still the primary infusion container, and owing to shortcomings in infusion production and sealing technology, visible foreign matter with diameters above 50 µm — hairs, floating matter (plastic, fibre), glass chips — is often introduced; these impurities seriously endanger the health and even the lives of patients receiving the infusion. At present, domestic glass-bottle infusions of 250 ml and above are inspected mainly by manual lamp inspection or with imported equipment. Manual lamp inspection is inefficient, hinders automation of the production process, carries safety risks, and lacks a uniform standard, while imported equipment is expensive and difficult to maintain. Domestic enterprises and research institutes have studied fully automatic lamp-inspection technology based on machine vision, but success has so far been limited to liquid inspection at 125 ml capacity, with only partial application in production; fully automatic lamp-inspection equipment for visible foreign matter in medical infusions of 250 ml and above remains rare on the market. To raise the quality and production capacity of products of 250 ml and above, manufacturers urgently need such inspection equipment. Developing a visible-foreign-matter detection system for 250 ml medical large-volume infusions is therefore of great significance.
In machine-vision inspection of foreign matter in liquid medicine, the bottle is spun so that foreign matter settled at the bottom is driven up into the liquid for camera imaging. However, owing to mechanical motion, temperature, photoelectric interference, and other factors, the captured images inevitably contain bubbles and noise. The main technical difficulties in recognising foreign-matter targets and bubbles in the images are:
1) the equipment runs at high speed, so the recognition algorithm must deliver high precision while keeping pace with the machine, which places high demands on it;
2) the target objects vary in shape, size, and weight, their volume is very small, and during image capture the foreign matter is in motion and may tumble, so its attitude changes continuously;
3) there is considerable external interference, for example from the bottle wall;
4) the running equipment inevitably generates mechanical vibration, so the bottle is never completely at rest; all of this can introduce deviations into the captured images, making it difficult for generic visual inspection methods to sort acceptable products from rejects quickly.
Summary of the invention
The technical problem to be solved by the invention is to provide a classification and recognition method for detecting visible foreign matter and bubbles in 250 ml medical large-volume infusions. By adopting an improved ELM classification algorithm, the method provides a reliable detection procedure for medicine inspection equipment, improving the precision and repeatability of visible-foreign-matter detection, resolving the problem of high false-detection rates in the inspection of infusions of 250 ml and above, and meeting the performance requirements of existing domestic lamp-inspection systems.
A classification and recognition method for detecting visible foreign matter and bubbles in 250 ml medical large-volume infusions comprises the following steps:
Step 1) continuously acquire original images of the infusion under inspection;
Step 2) image pre-processing;
Apply Top-Hat morphological filtering to each infusion image frame obtained in step 1) to obtain filtered images;
Step 3) image segmentation;
Segment the filtered images obtained in step 2) by a difference method to obtain segmented images;
Step 4) defect edge extraction;
Extract the defect edges in the infusion image from the segmented images obtained in step 3);
The defects include visible foreign matter and bubbles; the visible foreign matter includes glass chips, hairs, and floating matter; the floating matter includes rubber scraps and fibres;
Step 5) extract the defect feature vector;
From the defect edges obtained in step 4), select the feature parameters describing the defect and form its feature vector;
The feature parameters comprise shape, grey-level, and motion feature parameters;
The shape feature parameters comprise the defect target area S, the defect target occupancy K, and the 7 geometric invariant moments of the defect, where the occupancy is the ratio of the number of pixels in the defect target region to the area of its minimum enclosing rectangle;
The grey-level feature parameters comprise the grey mean and grey standard deviation of the defect target region;
The motion feature parameters comprise the horizontal and vertical coordinates of the centre point of the defect target;
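As an illustration of the descriptor above, its simpler entries can be sketched in Python as follows. This is a minimal sketch: the 7 Hu invariant moments, which complete the 13-dimensional vector, are omitted, and the mask and grey image below are synthetic stand-ins, not the patent's data.

```python
import numpy as np

def defect_features(mask, gray):
    """Shape, grey-level, and motion features of one defect region.

    mask: boolean array marking the defect target region;
    gray: grey-level image of the same shape.
    Returns area S, occupancy K (pixels over bounding-box area),
    grey mean/std, and the centroid (whose per-frame trajectory
    supplies the motion features).
    """
    ys, xs = np.nonzero(mask)
    S = ys.size                                            # target area S
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    K = S / box_area                                       # occupancy K
    vals = gray[ys, xs].astype(float)
    return {"S": S, "K": K,
            "gray_mean": vals.mean(), "gray_std": vals.std(),
            "cx": xs.mean(), "cy": ys.mean()}              # centroid

# toy example: a 3x3 defect patch of grey level 100 on a dark background
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 4:7] = True
gray = np.where(mask, 100, 10).astype(np.uint8)
feats = defect_features(mask, gray)
```

For an elongated defect such as a hair, K drops well below 1, which is one of the cues separating it from a near-circular bubble.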
Step 6) classification and recognition of visible foreign matter and bubbles;
Apply an ELM network model to the extracted defect feature vector to classify and recognise the defect target; if the result is visible foreign matter, the corresponding infusion is a reject, and the defect class contained in the infusion is obtained from the classification result;
The ELM network model is built as follows: first, set 13 input nodes and 4 output nodes in the ELM network model, with the number of hidden-layer nodes in the range 100-400; the hidden-node activation function may be the hardlim, sin, or sigmoid function;
Next, feed the feature vectors of a training sample set with known defect classes into the ELM network model and train it to obtain the trained ELM network model.
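The basic ELM training described above (13 inputs, 4 one-hot outputs, sigmoid hidden layer, output weights β = H⁺T) can be sketched as follows. The data are synthetic stand-ins for the defect feature vectors, and standardising the inputs is an assumption added here to keep the sigmoid out of saturation; it is not stated in the patent.

```python
import numpy as np

def train_elm(X, T, L=200, seed=0):
    # basic ELM: random input weights and offsets drawn in (0, 1) as in
    # the text above, sigmoid hidden layer, output weights via the
    # Moore-Penrose pseudoinverse beta = H^+ T
    rng = np.random.default_rng(seed)
    W = rng.uniform(0, 1, (L, X.shape[1]))   # input -> hidden weights
    b = rng.uniform(0, 1, L)                 # hidden offsets
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta                          # argmax of a row -> class

# synthetic stand-in data: 4 classes (hair/float/glass/bubble), 13 features
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(4), 20)
X = rng.normal(0.0, 0.1, (80, 13)) + labels[:, None] * 2.0
X = (X - X.mean(0)) / X.std(0)   # standardise (added assumption, see above)
T = np.eye(4)[labels]            # one-hot targets, 4 output nodes
W, b, beta = train_elm(X, T)
acc = (elm_predict(X, W, b, beta).argmax(1) == labels).mean()
```

Because the hidden layer is fixed after the random draw, training reduces to one pseudoinverse, which is what gives ELM its speed advantage over iterative back-propagation.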
The ELM network model used in step 6) is constructed with the IDS-ELM algorithm, as follows:
Step 1: given a sample data set N(x_i, t_i), choose a training set from it; x_i denotes the i-th sample and t_i the class label of the i-th sample;
Step 2: establish the ELM network model f_L(x_i);
Select the initial number of hidden nodes L = 400 with the sigmoid activation function for the hidden layer, and randomly draw the input-to-hidden weight vectors ω_j and offsets b_j in (0, 1):

$$f_L(x_i)=\sum_{j=1}^{L}\beta_j\, g(\omega_j\cdot x_i+b_j)=o_i,\qquad 1\le i\le N$$

where ω_j = (ω_{j1}, ω_{j2}, …, ω_{jn})^T is the weight vector connecting the j-th hidden node to the input nodes, b_j is the offset of the j-th hidden node, and β_j = (β_{j1}, β_{j2}, …, β_{jm})^T is the weight vector connecting the j-th hidden node to the output nodes; o_i = (o_{i1}, o_{i2}, …, o_{im})^T is the network output for the i-th sample, i.e. the classification result for that sample; g(x) is the sigmoid activation function; n = 13 and m = 4;
Step 3: set o_i = t_i, compute the hidden-layer output matrix H of the ELM network model, and obtain the hidden-to-output connection weights from β = H⁺T, where T is the target output matrix of the ELM network model; compute the training accuracy train0 and training time time0 of the model;
Step 4: compute the influence degree I_j of each hidden node and sort the hidden nodes in decreasing order of influence:

$$I_j=\frac{D_j}{S_D},\qquad D_j=\frac{1}{L}\sum_{i=1}^{N}\left|g_j(x_i)\right|\,\lVert\beta_j\rVert+\frac{a}{N}\lVert\omega_j\rVert,\qquad S_D=\sum_{j=1}^{L}D_j$$

where g_j(x_i) = g(ω_j·x_i + b_j), 1 ≤ i ≤ N, 1 ≤ j ≤ L, and a ∈ (0, 1) is the influence factor of the input-layer weight vectors;
Step 5: perform the first pruning of the ELM network model;
From the hidden nodes sorted in step 4, select the first λ nodes, λ ∈ [1, 5] and integer, and delete them from the ELM network model; meanwhile, compute the training accuracy train1 of the model after the first pruning, recompute the influence degree of each remaining hidden node according to step 4, and sort again in decreasing order;
Compute the pruning coefficient η; ⌈·⌉ denotes the ceiling (round-up) operator;
Step 6: perform the second pruning of the ELM network model;
Take ηλ as the number of hidden nodes to remove in the second pruning: from the influence-sorted hidden nodes obtained in step 5, choose the first ηλ nodes and prune them from the once-pruned ELM network model, then compute the training accuracy train2 after the second pruning;
Step 7: retrieve the deleted hidden node with the greatest influence degree from the second pruning and re-add it to the ELM network model obtained in step 6; compute the training accuracy train3 of the updated model;
Step 8: prune the hidden node with the smallest influence degree from the ELM network model obtained in step 6; the training accuracy of the updated model is train4;
Step 9: determine the final number of hidden nodes L′ and the training accuracy train of the ELM network model, taking train = max(train2, train3, train4); the training time is time:
$$L'=\begin{cases}L-\lambda(1+\eta)-1, & train=train4\\ L-\lambda(1+\eta), & train=train2\\ L-\lambda(1+\eta)+1, & train=train3\end{cases}$$
Step 10: obtain the hidden-to-output connection weight matrix β′ as the minimum-norm least-squares solution of the (generally inconsistent) linear system, β′ = (H′)⁺T, and update the input-to-hidden weight vectors ω_j and offsets b_j, obtaining the final trained ELM network model;
where H′ is the hidden-layer output matrix of the final ELM network model.
In the morphological filtering step, a 7 × 7 circular template is used as the structuring element to apply Top-Hat morphological filtering to the original image.
In step 3), image segmentation is performed by an inter-frame difference method based on maximum information entropy, as follows:
First, perform a difference operation on the continuously acquired image sequence to obtain difference images;
Next, compute the binarisation threshold T0 of the difference images:
Compute the total number of pixels N₂ in the target region of the image under inspection and the proportion p_i of pixels at grey level i, then compute the background and foreign-matter grey-level distributions from the following two formulas:
$$\frac{p_{s+1}}{1-Z_s},\ \frac{p_{s+2}}{1-Z_s},\ \ldots,\ \frac{p_M}{1-Z_s}$$

$$\frac{p_1}{Z_s},\ \frac{p_2}{Z_s},\ \ldots,\ \frac{p_s}{Z_s}$$

where $Z_s=\sum_{j=1}^{s}p_j$, s is the candidate threshold, and M is the maximum grey level; the information entropies H(A) of the background and H(B) of the target are then computed from the following two formulas:

$$H(A)=-\sum_{j=s+1}^{M}\frac{p_j}{1-Z_s}\ln\frac{p_j}{1-Z_s}$$

$$H(B)=-\sum_{j=1}^{s}\frac{p_j}{Z_s}\ln\frac{p_j}{Z_s}$$
The total information entropy of the image under inspection is Φ(s) = H(A) + H(B); the s that maximises Φ(s) gives the binarisation threshold T0 of the difference images;
Finally, binarise the difference images with threshold T0 according to the formula below, then AND the resulting binary images pixel by pixel to obtain the symmetric-difference binary image, which completes the segmentation:

$$B(x,y)_{k_n,\,k_n+1}=\begin{cases}0\ (\text{background}), & D(k_n,k_n+1)<T0\\ 255, & D(k_n,k_n+1)\ge T0\end{cases}$$
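The maximum-entropy threshold and the subsequent difference binarisation can be sketched as below. Only a single frame difference is shown; the full method additionally ANDs consecutive binarised differences into the symmetric-difference image. The toy frames are illustrative, not the patent's data.

```python
import numpy as np

def max_entropy_threshold(img):
    # maximum-information-entropy threshold: choose s maximising
    # Phi(s) = H(A) + H(B) over the grey-level histogram
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_s, best_phi = 0, -np.inf
    for s in range(1, 256):
        Zs = p[:s].sum()
        if Zs <= 0.0 or Zs >= 1.0:
            continue
        pb = p[:s][p[:s] > 0] / Zs           # target distribution -> H(B)
        pa = p[s:][p[s:] > 0] / (1.0 - Zs)   # background distribution -> H(A)
        phi = -(pa * np.log(pa)).sum() - (pb * np.log(pb)).sum()
        if phi > best_phi:
            best_phi, best_s = phi, s
    return best_s

def diff_and_binarise(f1, f2, t0):
    # inter-frame difference followed by binarisation with threshold t0
    d = np.abs(f2.astype(int) - f1.astype(int))
    return np.where(d >= t0, 255, 0).astype(np.uint8)

# toy frames: a bright target appears against a uniform dark background
f1 = np.full((20, 20), 20, dtype=np.uint8)
f2 = f1.copy()
f2[8:12, 8:12] = 200
d = np.abs(f2.astype(int) - f1.astype(int)).astype(np.uint8)
t0 = max_entropy_threshold(d)
mask = diff_and_binarise(f1, f2, t0)
```

Searching the histogram for the entropy maximum keeps the threshold adaptive to each bottle, rather than fixing it per production line.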
The SUSAN algorithm is used to extract the defect edges in the infusion image from the segmented images obtained in step 3), as follows:
Traverse every pixel of the target region in the symmetric-difference binary image with a mask, compare the grey value of each pixel inside the mask with that of the mask's central pixel, record the pixels whose grey difference is below a set grey-difference threshold, and let the recorded pixels form the USAN region;
The comparison value of every mask pixel other than the centre is computed as:

$$c(r,r_0)=\begin{cases}1, & |I(r)-I(r_0)|\le t\\ 0, & |I(r)-I(r_0)|>t\end{cases}$$

where r_0 is the position of the image nucleus, r ranges over the remaining positions in the template, I(r_0) is the pixel value of the nucleus, and I(r) is the pixel value of any other point in the template;
The USAN value of the mask region is then computed as:

$$n(x_0,y_0)=\sum_{(x,y)\ne(x_0,y_0)}c(x,y)$$

where (x_0, y_0) is the current mask centre, (x, y) ranges over the other mask pixels, and n is the number of pixels in the USAN region; n is then compared with a preset USAN threshold, the formula below yields the candidate feature points, and each feature point is compared with the grey values of the 8 points in its neighbourhood, keeping only the local maxima as the final edge points:

$$R(x_0,y_0)=\begin{cases}g-n(x_0,y_0), & n(x_0,y_0)<g\\ 0, & n(x_0,y_0)\ge g\end{cases}$$

where g = n_max/2 is the USAN threshold, and n_max, the maximum value of n, is taken as 3/4 of the mask size.
For images affected by noise, the SUSAN lower threshold t takes a value of 2-10.
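The SUSAN edge response above can be sketched as follows. Note two assumptions: the sketch uses the classic SUSAN edge setting g = 3·n_max/4 (so that it fires on a straight step edge), whereas the text gives g = n_max/2, and it omits the final 8-neighbourhood non-maximum suppression.

```python
import numpy as np

def susan_edge_response(img, t=27.0, radius=3):
    # SUSAN: for each nucleus, count mask pixels whose grey value lies
    # within t of the nucleus (USAN area n); edge response R = g - n
    # when n < g. Here g = 3*n_max/4, the classic edge setting.
    h, w = img.shape
    ys, xs = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    mask = xs * xs + ys * ys <= radius * radius     # circular 7x7 mask
    n_max = mask.sum() - 1                          # mask size minus nucleus
    g = 3.0 * n_max / 4.0
    R = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            c = (np.abs(win - img[y, x]) <= t) & mask
            n = c.sum() - 1                         # exclude the nucleus
            R[y, x] = g - n if n < g else 0.0
    return R

# toy image: vertical step edge between grey 0 and grey 255
img = np.zeros((21, 21))
img[:, 11:] = 255.0
R = susan_edge_response(img)
```

In flat regions the USAN fills the whole mask (n = n_max, response 0); on the edge the USAN shrinks towards half the mask, so the response rises exactly along the boundary.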
Beneficial effects
The invention provides a classification and recognition method for detecting visible foreign matter and bubbles in 250 ml medical large-volume infusions, comprising the following steps: 1) continuously acquiring multiple image frames of the infusion under inspection; 2) image pre-processing: Top-Hat morphological filtering of the images; 3) image segmentation: segmenting the filtered images by an inter-frame difference method based on maximum information entropy; 4) defect edge extraction: extracting the edges of visible foreign matter and bubbles with the SUSAN algorithm; 5) image feature extraction: extracting the shape, grey-level, and motion feature parameters that describe visible foreign matter and bubbles, based on an analysis of the defect characteristics; 6) defect classification and recognition: recognising and classifying visible foreign matter and bubbles with the IDS-ELM algorithm.
Pre-processing with mathematical morphological filtering can be parallelised in hardware, which greatly increases processing speed. For segmenting the infusion images, the inter-frame difference algorithm based on maximum information entropy overcomes possible differences between bottle side walls and the poor sensitivity of plain sequence-image differencing to small foreign-matter targets, and markedly improves the signal-to-noise ratio of the output image. Using the SUSAN algorithm to extract the edges of visible foreign matter and bubbles greatly simplifies the subsequent feature extraction. Analysing the feature parameters of visible foreign matter and bubbles and constructing a feature descriptor for the defects reduces the image processing time and greatly improves the real-time performance and robustness of the algorithm. The IDS-ELM algorithm achieves the classification and recognition of visible foreign matter and bubbles; it can accurately classify and recognise all defect types, so that products containing different types of defects are rejected into separate reject areas.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention;
Fig. 2 shows four consecutive original frames, acquired by the invention, of a 250 ml glucose infusion containing a glass chip, where (a) is the first frame, (b) the second, (c) the third, and (d) the fourth;
Fig. 3 shows the frames of Fig. 2 after filtering, where (a) is the first frame, (b) the second, (c) the third, and (d) the fourth;
Fig. 4 shows the result of edge extraction after filtering and inter-frame differencing of five consecutive frames;
Fig. 5 shows the minimum enclosing rectangles extracted for the target regions of the four typical defect classes, where (a) is a hair, (b) floating matter, (c) a glass chip, and (d) a bubble;
Fig. 6 shows the grey-scale maps of the four typical defect regions, where (a) is a hair, (b) floating matter, (c) a glass chip, and (d) a bubble;
Fig. 7 is the flow chart of the IDS-ELM algorithm of the invention;
Fig. 8 compares the classification experiments of the algorithms on the various foreign-matter classes.
Detailed description of the invention
The invention is described further below with reference to the drawings and embodiments.
As shown in Fig. 1, a classification and recognition method for detecting visible foreign matter and bubbles in 250 ml medical large-volume infusions comprises the following steps:
Step 1) continuously acquire original images of the infusion under inspection, as shown in Fig. 2;
Step 2) image pre-processing, as shown in Fig. 3;
Apply Top-Hat morphological filtering, with a 7 × 7 circular template as the structuring element, to each infusion image frame obtained in step 1) to obtain filtered images;
Step 3) image segmentation;
Segment the filtered images obtained in step 2) by a difference method to obtain segmented images;
In step 3), image segmentation is performed by an inter-frame difference method based on maximum information entropy, as follows:
First, perform a difference operation on the continuously acquired image sequence to obtain difference images;
Next, compute the binarisation threshold T0 of the difference images:
Compute the total number of pixels N₂ in the target region of the image under inspection and the proportion p_i of pixels at grey level i, then compute the background and foreign-matter grey-level distributions from the following two formulas:

$$\frac{p_{s+1}}{1-Z_s},\ \frac{p_{s+2}}{1-Z_s},\ \ldots,\ \frac{p_M}{1-Z_s}$$

$$\frac{p_1}{Z_s},\ \frac{p_2}{Z_s},\ \ldots,\ \frac{p_s}{Z_s}$$

where $Z_s=\sum_{j=1}^{s}p_j$, s is the candidate threshold, and M is the maximum grey level; the information entropies H(A) of the background and H(B) of the target are then computed from the following two formulas:

$$H(A)=-\sum_{j=s+1}^{M}\frac{p_j}{1-Z_s}\ln\frac{p_j}{1-Z_s}$$

$$H(B)=-\sum_{j=1}^{s}\frac{p_j}{Z_s}\ln\frac{p_j}{Z_s}$$

The total information entropy of the image under inspection is Φ(s) = H(A) + H(B); the s that maximises Φ(s) gives the binarisation threshold, here T0 = 120;
Finally, binarise the difference images with threshold T0 according to the formula below, then AND the resulting binary images pixel by pixel to obtain the symmetric-difference binary image, which completes the segmentation:

$$B(x,y)_{k_n,\,k_n+1}=\begin{cases}0\ (\text{background}), & D(k_n,k_n+1)<T0\\ 255, & D(k_n,k_n+1)\ge T0\end{cases}$$
Step 4) defect edge extraction;
Extract the defect edges in the infusion image from the segmented images obtained in step 3);
The defects include visible foreign matter and bubbles; the visible foreign matter includes glass chips, hairs, and floating matter; the floating matter includes rubber scraps and fibres;
The SUSAN algorithm is used to extract the defect edges in the infusion image from the segmented images obtained in step 3), as follows:
Traverse every pixel of the target region in the symmetric-difference binary image with a mask, compare the grey value of each pixel inside the mask with that of the mask's central pixel, record the pixels whose grey difference is below a set grey-difference threshold, and let the recorded pixels form the USAN region;
The comparison value of every mask pixel other than the centre is computed as:

$$c(r,r_0)=\begin{cases}1, & |I(r)-I(r_0)|\le t\\ 0, & |I(r)-I(r_0)|>t\end{cases}$$

where r_0 is the position of the image nucleus, r ranges over the remaining positions in the template, I(r_0) is the pixel value of the nucleus, and I(r) is the pixel value of any other point in the template;
The USAN value of the mask region is then computed as:

$$n(x_0,y_0)=\sum_{(x,y)\ne(x_0,y_0)}c(x,y)$$

where (x_0, y_0) is the current mask centre, (x, y) ranges over the other mask pixels, and n is the number of pixels in the USAN region; n is then compared with a preset USAN threshold, the formula below yields the candidate feature points, and each feature point is compared with the grey values of the 8 points in its neighbourhood, keeping only the local maxima as the final edge points:

$$R(x_0,y_0)=\begin{cases}g-n(x_0,y_0), & n(x_0,y_0)<g\\ 0, & n(x_0,y_0)\ge g\end{cases}$$

where g = n_max/2 is the USAN threshold, and n_max, the maximum value of n, is taken as 3/4 of the mask size.
For images affected by noise, the SUSAN lower threshold t takes a value of 2-10.
Fig. 4 shows the edge extraction result after filtering and inter-frame differencing of the infusion images;
Step 5) extract the defect feature vector;
From the defect edges obtained in step 4), select the feature parameters describing the defect and form its feature vector;
The feature parameters comprise shape, grey-level, and motion feature parameters;
The shape feature parameters comprise the defect target area S, the defect target occupancy K, and the 7 geometric invariant moments of the defect, where the occupancy is the ratio of the number of pixels in the defect target region to the area of its minimum enclosing rectangle;
The grey-level feature parameters comprise the grey mean and grey standard deviation of the defect target region;
The motion feature parameters comprise the horizontal and vertical coordinates of the centre point of the defect target;
Fig. 5 shows the extraction of the minimum enclosing rectangles of the visible-foreign-matter and bubble regions, and Fig. 6 the grey-scale maps of those regions;
Step 6) classification and recognition of visible foreign matter and bubbles;
Apply an ELM network model to the extracted defect feature vector to classify and recognise the defect target; if the result is visible foreign matter, the corresponding infusion is a reject, and the defect class it contains is obtained from the classification result; otherwise the infusion is an acceptable product. This completes the detection of visible foreign matter and bubbles in the infusion;
The ELM network model is built as follows: first, set 13 input nodes and 4 output nodes in the ELM network model, with the number of hidden-layer nodes in the range 100-400; the hidden-node activation function may be the hardlim, sin, or sigmoid function; this embodiment chooses the sigmoid function;
Next, feed the feature vectors of a training sample set with known defect classes into the ELM network model and train it to obtain the trained ELM network model.
As shown in Fig. 7, the ELM network model used in step 6) is constructed with the IDS-ELM algorithm, as follows:
Step 1: given a sample data set N(x_i, t_i), choose a training set from it; x_i denotes the i-th sample and t_i the class label of the i-th sample;
Step 2: establish the ELM network model f_L(x_i);
Select the initial number of hidden nodes L = 400 with the sigmoid activation function for the hidden layer, and randomly draw the input-to-hidden weight vectors ω_j and offsets b_j in (0, 1):

$$f_L(x_i)=\sum_{j=1}^{L}\beta_j\, g(\omega_j\cdot x_i+b_j)=o_i,\qquad 1\le i\le N$$

where ω_j = (ω_{j1}, ω_{j2}, …, ω_{jn})^T is the weight vector connecting the j-th hidden node to the input nodes, b_j is the offset of the j-th hidden node, and β_j = (β_{j1}, β_{j2}, …, β_{jm})^T is the weight vector connecting the j-th hidden node to the output nodes; o_i = (o_{i1}, o_{i2}, …, o_{im})^T is the network output for the i-th sample, i.e. the classification result for that sample; g(x) is the sigmoid activation function; n = 13 and m = 4;
Step3: make oi=ti, calculate ELM network model hidden layer output matrix H, according to β=H+T calculates ELM network model Hidden layer and the connection weights of output layer, T is the output matrix of ELM network model, calculates the training essence of ELM network model Degree train0 and training time time0;
Step4: calculate the disturbance degree I of each hidden layer nodejAnd by fall power sequence, it is thus achieved that the hidden layer node after sequence;
$$I_j = \frac{D_j}{S_D}$$
$$D_j = \frac{1}{L} \sum_{i=1}^{N} \lvert g_j(x_i) \rvert \, \lVert \beta_j \rVert + \frac{a}{N} \lVert \omega_j \rVert, \qquad S_D = \sum_{j=1}^{L} D_j$$
where g_j(x_i) = g(ω_j · x_i + b_j), 1 ≤ i ≤ N, 1 ≤ j ≤ L, and a ∈ (0, 1) is the influence factor of the input-layer weight vectors;
Step5: perform the first pruning of the ELM network model;
From the sorted hidden-layer nodes obtained in Step4, select the first λ nodes, where λ ∈ [1, 5] is a positive integer, and delete them from the ELM network model; meanwhile, compute the training accuracy train1 of the ELM network model after the first pruning, recompute the influence degrees of the remaining hidden-layer nodes according to Step4, and sort them in descending order again;
Compute the pruning coefficient η, where ⌈·⌉ denotes the round-up (ceiling) operator;
Step6: perform the second pruning of the ELM network model;
Taking ηλ as the number of hidden nodes for the second pruning, select the first ηλ hidden-layer nodes from the descending-order sequence obtained in Step5 and prune them from the once-pruned ELM network model of Step5, then compute the training accuracy train2 of the ELM network model after the second pruning;
Step7: retrieve the hidden-layer node with the largest influence degree among those deleted in the second pruning and add it back into the ELM network model obtained in Step6; meanwhile, compute the training accuracy train3 of the updated ELM network model;
Step8: cut the hidden-layer node with the smallest influence degree from the ELM network model obtained in Step6; the training accuracy of the updated ELM network model is train4;
Step9: determine the final number of hidden nodes L' and the training accuracy train of the ELM network model, taking train = max(train2, train3, train4); the training time is time:
$$L' = \begin{cases} L - (\eta + 1)\lambda - 1, & train = train4 \\ L - (\eta + 1)\lambda, & train = train2 \\ L - (\eta + 1)\lambda + 1, & train = train3 \end{cases}$$
Step10: obtain the connection weight matrix β' between the network hidden layer and the output layer from the minimum-norm least-squares (LS) solution of the inconsistent system of linear equations, β' = (H')⁺T, and update the input-to-hidden weight vectors ω_j and offsets b_j to obtain the final trained ELM network model;
where H' is the hidden-layer output matrix of the final ELM network model.
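The influence-degree ranking and the prune/re-solve cycle of Steps 4–8 can be sketched as follows. This is a simplified NumPy illustration of the pruning mechanics only: the λ/η schedule and the train1–train4 bookkeeping of Steps 5–9 are left to the caller, and all function names are illustrative rather than from the patent.

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def influence_degrees(X, W, b, beta, a=0.6):
    """Influence degree I_j = D_j / S_D of each hidden node, following the
    Step4 definition D_j = (1/L) sum_i |g_j(x_i)| * ||beta_j|| + (a/N) ||omega_j||.
    a in (0, 1) is the input-weight influence factor (0.6 in the experiments)."""
    N, L = X.shape[0], W.shape[0]
    G = sigmoid(X @ W.T + b)                      # g_j(x_i), shape (N, L)
    D = (np.abs(G).sum(axis=0) / L) * np.linalg.norm(beta, axis=1) \
        + (a / N) * np.linalg.norm(W, axis=1)
    return D / D.sum()

def prune_and_retrain(X, T, W, b, idx_keep):
    """Delete all hidden nodes not listed in idx_keep and re-solve the
    output weights with the pseudoinverse, beta = H^+ T."""
    W2, b2 = W[idx_keep], b[idx_keep]
    H = sigmoid(X @ W2.T + b2)
    return W2, b2, np.linalg.pinv(H) @ T

def accuracy(X, T, W, b, beta):
    """Training accuracy of the current (possibly pruned) model."""
    H = sigmoid(X @ W.T + b)
    return float((np.argmax(H @ beta, axis=1) == np.argmax(T, axis=1)).mean())
```

A first pruning with λ = 2 would then be, for example, `keep = np.argsort(I)[::-1][2:]` followed by `prune_and_retrain(X, T, W, b, keep)`, matching Step5's selection of the first λ nodes of the descending influence ordering.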
Based on analysis of the collected infusion data set, the input dimension of the classifier is 13 and the output dimension is 4. Combining the classification results of UCI data-set experiments with analysis of this experimental data set, the initial number of hidden nodes of the IDS-ELM network is set to 400, the influence factor a to 0.6 and the pruning step λ to 2; the initial number of hidden nodes of P-ELM is also 400. To make the experimental results more persuasive, the mean over 20 repeated trials is taken as the final classification result. Table 2 gives the overall classification results of the related algorithms on the sample data set of Table 1, and Table 3 gives the per-class defect classification results of each algorithm.
Table 1 Experimental sample data set
Table 2 Comparison of the classification performance of the different algorithms over 20 repeated trials
As the results in Table 2 show, in classification accuracy the IDS-ELM algorithm of the present invention clearly outperforms the EM-ELM and P-ELM algorithms; it is 4.9 percentage points above the 90.6% of BP and 1.7 percentage points above the 93.8% of SVM reported in the open literature, and 8.4 percentage points above the 87.1% of the ELM algorithm. Although IDS-ELM is slightly slower than some of these algorithms, the processing time per sample reaches the millisecond level, fully meeting on-line inspection requirements.
Table 3 reports the classification experiment results of the related algorithms for hair, floating matter (fiber, rubber scraps), glass chips and bubbles. Among the defect classes, the classification accuracy for hair is the highest, reaching 99.7%, because the shape features of hair are easily distinguished and stable to extract; fiber and glass chips follow at 96.9% and 94.1%, respectively. Although bubbles are sometimes misclassified, the IDS-ELM algorithm reaches a bubble recognition rate of 91.2%, exceeding EM-ELM and P-ELM by 12.9 and 9.4 percentage points respectively and ELM by 11.2 percentage points. On the whole, the bubble regions can be separated out, thereby achieving the classification and identification of visible foreign matter in infusions.
Table 3 Classification results of the different algorithms for the different sample types over 20 repeated trials
To illustrate the stability of the IDS-ELM classifier in classifying and identifying visible foreign matter in infusions, Table 4 gives the results of 20 classification runs of IDS-ELM on the overall experimental data set of experiment 1. The standard deviation of the experimental data in Table 4 is only 0.064, showing that IDS-ELM is highly stable in the classification, identification and detection of visible foreign matter.
Table 4 Results of 20 repeated IDS-ELM trials for each defect class
The classification and identification experiments on the data samples of the four defect classes in medical large-volume infusions, and the overall experimental results, show that the method of the present invention is practicable for the on-line identification and detection of visible foreign matter in the actual production of medical large-volume infusions.

Claims (6)

1. A method for the classification, identification and detection of visible foreign matter and bubbles in 250 ml medical large-volume infusions, characterized in that it comprises the following steps:
Step 1) continuously acquiring original images of the infusion under inspection;
Step 2) image pre-processing:
applying Top-Hat morphological filtering to each frame of the infusion image obtained in step 1) to obtain a filtered image;
Step 3) image segmentation:
segmenting the filtered image obtained in step 2) by a difference method to obtain a segmented image;
Step 4) defect edge extraction:
extracting the defect edges in the infusion image from the segmented image obtained in step 3);
the defects comprise visible foreign matter or bubbles; the visible foreign matter comprises glass chips, hair or floating matter; the floating matter comprises rubber scraps or fibers;
Step 5) defect feature vector extraction:
selecting, from the defect edges obtained in step 4), characteristic parameters for describing the defect to form the feature vector of the defect;
the characteristic parameters comprise shape feature parameters, gray-level feature parameters and motion feature parameters;
the shape feature parameters comprise the defect target area S, the defect target occupancy K and the 7 geometric invariant moments of the defect, wherein the defect target occupancy is the ratio of the number of pixels of the defect target region to the area of the minimum enclosing rectangle of the defect target region;
the gray-level feature parameters comprise the gray mean and the gray standard deviation of the defect target region;
the motion feature parameters comprise the abscissa and the ordinate of the center point of the defect target;
Step 6) classification and identification of visible foreign matter and bubbles:
using an ELM network model to classify and identify the defect target from the extracted defect feature vector; if the classification result of the defect target is visible foreign matter, the corresponding infusion is a defective product, and the defect classes contained in the infusion are obtained from the classification results;
the ELM network model is built as follows: first, the number of input nodes of the ELM network model is set to 13 and the number of output nodes to 4; the number of hidden-layer nodes ranges from 100 to 400, and the activation function of the hidden-layer nodes may be the hardlim, sine or sigmoid function;
secondly, the feature vectors of a training sample set with known defect classes are fed into the ELM network model and the model is trained, yielding the trained ELM network model.
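Step 5)'s 13-dimensional feature vector (area S, occupancy K, 7 Hu invariant moments, gray mean and standard deviation, and the centroid coordinates) can be sketched in NumPy as below. Two simplifications are assumptions of this sketch, not of the patent: an axis-aligned bounding box stands in for the minimum enclosing rectangle, and the Hu moments are computed directly from the binary defect mask.

```python
import numpy as np

def hu_moments(mask):
    """7 Hu invariant moments of a binary mask (pure NumPy)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    xc, yc = xs.mean(), ys.mean()
    def mu(p, q):                              # central moment mu_pq
        return ((xs - xc) ** p * (ys - yc) ** q).sum()
    def eta(p, q):                             # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        (e30 + e12) ** 2 + (e21 + e03) ** 2,
        (e30 - 3 * e12) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        + (3 * e21 - e03) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
        (e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
        + 4 * e11 * (e30 + e12) * (e21 + e03),
        (3 * e21 - e03) * (e30 + e12) * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
        - (e30 - 3 * e12) * (e21 + e03) * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2),
    ]

def defect_features(gray, mask):
    """13-dim feature vector per claim 1: area S, occupancy K, 7 Hu moments,
    gray mean/std over the defect region, and centroid (x, y)."""
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    rect = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    occupancy = area / rect                    # pixels / bounding-rect area
    vals = gray[ys, xs]
    return np.array([area, occupancy, *hu_moments(mask),
                     vals.mean(), vals.std(), xs.mean(), ys.mean()])
```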
2. The method according to claim 1, characterized in that the ELM network model used in step 6) is built with the IDS-ELM algorithm, the concrete steps being as follows:
Step1: given a sample data set N(x_i, t_i), select a training set from it, where x_i denotes the i-th sample and t_i the class label of the i-th sample;
Step2: establish the ELM network model f_L(x_i);
Select the initial number of hidden-layer nodes L = 400 and the sigmoid function as the hidden-layer activation function, and randomly draw the input-to-hidden connection weight vectors ω_j and offsets b_j from (0, 1):
$$f_L(x_i) = \sum_{j=1}^{L} \beta_j \, g(\omega_j \cdot x_i + b_j) = o_i, \qquad 1 \le i \le N$$
where ω_j = (ω_{j1}, ω_{j2}, ..., ω_{jn})^T is the connection weight vector between the j-th hidden-layer node and the input nodes, b_j is the offset of the j-th hidden-layer node, and β_j = (β_{j1}, β_{j2}, ..., β_{jm})^T is the connection weight vector between the j-th hidden-layer node and the output nodes; o_i = (o_{i1}, o_{i2}, ..., o_{im})^T is the network output corresponding to the i-th sample, g(x) is the sigmoid activation function, n = 13 and m = 4; o_i is the classification result output by the ELM network model for the i-th sample;
Step3: set o_i = t_i and compute the hidden-layer output matrix H of the ELM network model; obtain the connection weights between the hidden layer and the output layer from β = H⁺T, where H⁺ is the Moore–Penrose pseudoinverse of H and T is the target output matrix of the ELM network model; compute the training accuracy train0 and the training time time0 of the ELM network model;
Step4: compute the influence degree I_j of each hidden-layer node and sort the nodes by I_j in descending order to obtain the sorted hidden-layer nodes;
$$I_j = \frac{D_j}{S_D}$$
$$D_j = \frac{1}{L} \sum_{i=1}^{N} \lvert g_j(x_i) \rvert \, \lVert \beta_j \rVert + \frac{a}{N} \lVert \omega_j \rVert, \qquad S_D = \sum_{j=1}^{L} D_j$$
where g_j(x_i) = g(ω_j · x_i + b_j), 1 ≤ i ≤ N, 1 ≤ j ≤ L, and a ∈ (0, 1) is the influence factor of the input-layer weight vectors;
Step5: perform the first pruning of the ELM network model;
From the sorted hidden-layer nodes obtained in Step4, select the first λ nodes, where λ ∈ [1, 5] is a positive integer, and delete them from the ELM network model; meanwhile, compute the training accuracy train1 of the ELM network model after the first pruning, recompute the influence degrees of the remaining hidden-layer nodes according to Step4, and sort them in descending order again;
Compute the pruning coefficient η, where ⌈·⌉ denotes the round-up (ceiling) operator;
Step6: perform the second pruning of the ELM network model;
Taking ηλ as the number of hidden nodes for the second pruning, select the first ηλ hidden-layer nodes from the descending-order sequence obtained in Step5 and prune them from the once-pruned ELM network model of Step5, then compute the training accuracy train2 of the ELM network model after the second pruning;
Step7: retrieve the hidden-layer node with the largest influence degree among those deleted in the second pruning and add it back into the ELM network model obtained in Step6; meanwhile, compute the training accuracy train3 of the updated ELM network model;
Step8: cut the hidden-layer node with the smallest influence degree from the ELM network model obtained in Step6; the training accuracy of the updated ELM network model is train4;
Step9: determine the final number of hidden nodes L' and the training accuracy train of the ELM network model, taking train = max(train2, train3, train4); the training time is time:
$$L' = \begin{cases} L - (\eta + 1)\lambda - 1, & train = train4 \\ L - (\eta + 1)\lambda, & train = train2 \\ L - (\eta + 1)\lambda + 1, & train = train3 \end{cases}$$
Step10: obtain the connection weight matrix β' between the network hidden layer and the output layer from the minimum-norm least-squares (LS) solution of the inconsistent system of linear equations, β' = (H')⁺T, and update the input-to-hidden weight vectors ω_j and offsets b_j to obtain the final trained ELM network model;
where H' is the hidden-layer output matrix of the final ELM network model.
3. The method according to claim 2, characterized in that in the morphological filtering process, a 7 × 7 circular template is selected as the structuring element to perform top-hat morphological filtering on the original image.
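The pre-processing of claim 3 (white top-hat with a 7 × 7 circular structuring element) can be sketched as follows; `grey_morph` is a deliberately naive reference implementation (a production system would use an optimized morphology library), and the function names are illustrative:

```python
import numpy as np

def circular_element(size=7):
    """Circular (disk-shaped) structuring element, 7 x 7 per claim 3."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return (x ** 2 + y ** 2) <= r ** 2

def grey_morph(img, elem, op):
    """Naive grayscale erosion ('min') / dilation ('max') with a footprint."""
    r = elem.shape[0] // 2
    pad = np.pad(img, r, mode='edge')
    out = np.empty_like(img)
    fn = np.min if op == 'min' else np.max
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + elem.shape[0], j:j + elem.shape[1]]
            out[i, j] = fn(win[elem])
    return out

def top_hat(img, size=7):
    """White top-hat: original minus morphological opening. Highlights
    small bright details (foreign bodies) against the background."""
    img = np.asarray(img, dtype=float)
    elem = circular_element(size)
    opened = grey_morph(grey_morph(img, elem, 'min'), elem, 'max')
    return img - opened
```

Because the opening suppresses any bright structure smaller than the 7 × 7 disk, the difference image retains exactly those small bright details, which is the intended effect of the filtering step.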
4. The method according to claim 2, characterized in that in step 3) an inter-frame difference method based on maximum information entropy is used for image segmentation, the concrete steps being as follows:
First, a difference operation is performed on the continuously acquired image sequence to obtain the difference image;
Secondly, the binarization threshold T0 of the difference image is computed:
The total number of pixels N_2 in the target region of the image to be detected and the proportion p_i of pixels with gray level i are computed, and the gray-value distributions of the foreign body and of the background are obtained from the following two expressions:

$$\frac{p_{s+1}}{1 - Z_s}, \frac{p_{s+2}}{1 - Z_s}, \ldots, \frac{p_M}{1 - Z_s}$$

$$\frac{p_1}{Z_s}, \frac{p_2}{Z_s}, \ldots, \frac{p_s}{Z_s}$$

where $Z_s = \sum_{j=1}^{s} p_j$, s is the candidate threshold and M is the maximum gray level; the information entropies H(A) and H(B) of the two classes are then computed by the following two formulas:

$$H(A) = -\sum_{j=s+1}^{M} \frac{p_j}{1 - Z_s} \ln \frac{p_j}{1 - Z_s}$$

$$H(B) = -\sum_{j=1}^{s} \frac{p_j}{Z_s} \ln \frac{p_j}{Z_s}$$

The total information entropy of the image to be detected is Φ(s) = H(A) + H(B); the s that maximizes Φ(s) yields the binarization threshold T0 of the difference image;
Finally, the difference images are binarized with threshold T0 according to the following formula, and the resulting binary images are combined pixel by pixel with an AND operation to obtain the symmetric-difference binary image, completing the image segmentation:

$$B(x, y)_{k_{no}, k_{no}+1} = \begin{cases} 0 \ (\text{background}), & D(k_{no}, k_{no}+1) < T0 \\ 255, & D(k_{no}, k_{no}+1) \ge T0 \end{cases}$$
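The threshold search and binarization of claim 4 can be sketched as below, under two assumptions of this sketch: the entropy terms carry the conventional minus sign, and the search runs over a full 8-bit histogram. The toy frames and all names are illustrative:

```python
import numpy as np

def max_entropy_threshold(diff):
    """Maximum-entropy threshold T0 on a difference image: choose the
    gray level s maximizing H(A) + H(B) over the histogram, per claim 4."""
    hist = np.bincount(diff.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_s, best_phi = 0, -np.inf
    for s in range(1, 255):
        Zs = p[:s + 1].sum()
        if Zs <= 0 or Zs >= 1:
            continue
        pb = p[:s + 1] / Zs            # background distribution
        pf = p[s + 1:] / (1 - Zs)      # foreign-body distribution
        Hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
        Hf = -np.sum(pf[pf > 0] * np.log(pf[pf > 0]))
        if Hb + Hf > best_phi:
            best_phi, best_s = Hb + Hf, s
    return best_s

def frame_difference_segment(frame_a, frame_b):
    """Binarize the absolute frame difference at the max-entropy threshold:
    255 where D >= T0, 0 (background) otherwise."""
    D = np.abs(frame_a.astype(int) - frame_b.astype(int)).astype(np.uint8)
    T0 = max_entropy_threshold(D)
    return np.where(D >= T0, 255, 0).astype(np.uint8)
```

In the full method, the two binary images from consecutive frame pairs are then combined with a pixel-wise AND to form the symmetric-difference image.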
5. The method according to claim 4, characterized in that the SUSAN algorithm is used to extract the defect edges in the infusion image from the segmented image obtained in step 3), the concrete steps being as follows:
A mask is moved over every pixel of the target region in the symmetric-difference binary image, the gray value of each pixel within the mask area is compared with that of the mask center pixel, the pixels whose gray difference is smaller than a set gray-difference threshold are recorded, and the recorded pixels form the USAN region;
Within the mask, the comparison value for every pixel except the center is computed by the following formula:

$$C(r, r_0) = \begin{cases} 1, & \lvert I(r) - I(r_0) \rvert \le t \\ 0, & \lvert I(r) - I(r_0) \rvert > t \end{cases}$$

where r_0 is the position of the image nucleus (the mask center), r denotes the position of any other point in the template, I(r_0) is the pixel value of the nucleus, and I(r) is the pixel value of the other point in the template;
The USAN value of the mask area is then computed by the following formula:

$$n(x_0, y_0) = \sum_{(x, y) \ne (x_0, y_0)} c(x, y)$$

where (x_0, y_0) is the current mask center, (x, y) denotes a mask pixel other than the center, and n is the number of pixels in the USAN region; n is then compared with a preset USAN threshold to obtain candidate feature points by the following formula, and, taking each feature point as center, its response is compared with those of the 8 points in its neighborhood; only a point whose response is the maximum is retained as a final edge point:

$$R(x_0, y_0) = \begin{cases} g - n(x_0, y_0), & n(x_0, y_0) < g \\ 0, & n(x_0, y_0) \ge g \end{cases}$$

where g = n_max/2 is the USAN threshold and n_max, the maximum value of n, is taken as 3/4 of the mask size.
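Claim 5's USAN computation can be illustrated with a reduced 3 × 3 (8-neighbor) mask instead of the larger circular mask the patent assumes; the response follows the claim's rule R = g − n for n < g with g = n_max/2, and the final 8-neighborhood non-maximum suppression is omitted for brevity:

```python
import numpy as np

def susan_edges(img, t=10):
    """SUSAN edge response on a 3x3 (8-neighbor) mask, a simplification
    of the patent's circular mask. c = 1 where |I(r) - I(r0)| <= t,
    n = sum of c over the mask, R = g - n where n < g, g = n_max / 2."""
    img = img.astype(float)
    h, w = img.shape
    n_max = 8.0                  # 8 non-center pixels in the 3x3 mask
    g = n_max / 2.0
    R = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            n = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue           # skip the nucleus itself
                    if abs(img[i + di, j + dj] - centre) <= t:
                        n += 1
            R[i, j] = g - n if n < g else 0.0
    return R
```

Pixels in flat regions have a full USAN (n = n_max) and zero response, while pixels on thin structures have few similar neighbors and therefore a large response, which is the property the edge extraction step relies on.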
6. The method according to claim 5, characterized in that for images affected by noise, the lower SUSAN threshold takes values of 2 to 10 pixels.
CN201610176153.0A 2015-09-23 2016-03-28 A kind of Classification and Identification detection method of 250ml medical large transfusions visible foreign matters and bubble Expired - Fee Related CN105869154B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510611907 2015-09-23
CN2015106119076 2015-09-23

Publications (2)

Publication Number Publication Date
CN105869154A true CN105869154A (en) 2016-08-17
CN105869154B CN105869154B (en) 2018-10-16

Family

ID=56625845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610176153.0A Expired - Fee Related CN105869154B (en) 2015-09-23 2016-03-28 A kind of Classification and Identification detection method of 250ml medical large transfusions visible foreign matters and bubble

Country Status (1)

Country Link
CN (1) CN105869154B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101303316A (en) * 2008-06-30 2008-11-12 湖南大学 Method and apparatus for automatic detection on large infusion production line
CN101354359A (en) * 2008-09-04 2009-01-28 湖南大学 Method for detecting, tracking and recognizing movement visible exogenous impurity in medicine liquid
CN104835166A (en) * 2015-05-13 2015-08-12 山东大学 Liquid medicine bottle foreign matter detection method based on machine visual detection platform
CN104850858A (en) * 2015-05-15 2015-08-19 华中科技大学 Injection-molded product defect detection and recognition method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YOAN MICHE ET AL.: "OP-ELM: Optimally Pruned Extreme Learning Machine", IEEE Transactions on Neural Networks *
ZHANG HUI ET AL.: "Visual detection and recognition technology for large-volume infusions on high-speed automated pharmaceutical production lines", Control Theory & Applications *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107024480A (en) * 2017-04-12 2017-08-08 浙江硕和机器人科技有限公司 A kind of stereoscopic image acquisition device
CN108520260A (en) * 2018-04-11 2018-09-11 中南大学 The recognition methods of visible foreign matters in bottled oral solution
CN108760762A (en) * 2018-07-25 2018-11-06 德玛克(长兴)自动化系统有限公司 A kind of fully-automatic intelligent lamp inspection equipment
CN108760762B (en) * 2018-07-25 2024-04-26 德玛克(长兴)注塑系统有限公司 Full-automatic intelligent lamp inspection equipment
CN109143393A (en) * 2018-08-01 2019-01-04 哈尔滨工业大学 Bottled transparent medical fluid foreign bodies detection synchronized tracking visual compensation
CN109143393B (en) * 2018-08-01 2020-06-30 哈尔滨工业大学 Synchronous tracking visual compensation method for detecting foreign matters in bottled transparent liquid medicine
CN111062257A (en) * 2019-11-21 2020-04-24 四川极智朗润科技有限公司 Micro target identification method based on morphological and kinematic characteristics
US11295435B2 (en) 2019-12-30 2022-04-05 Goertek Inc. Product defect detection method, device and system
WO2021135331A1 (en) * 2019-12-30 2021-07-08 歌尔股份有限公司 Product defect detection method, apparatus and system
CN111415348B (en) * 2020-03-25 2023-05-26 中国计量大学 Method for extracting bubble characteristics in automobile brake pipeline
CN111415348A (en) * 2020-03-25 2020-07-14 中国计量大学 Method for extracting characteristics of bubbles in automobile brake pipeline
CN111805541A (en) * 2020-07-08 2020-10-23 南京航空航天大学 Deep learning-based traditional Chinese medicine decoction piece cleaning and selecting device and cleaning and selecting method
CN111805541B (en) * 2020-07-08 2022-08-30 南京航空航天大学 Deep learning-based traditional Chinese medicine decoction piece cleaning and selecting device and cleaning and selecting method
CN111709948A (en) * 2020-08-19 2020-09-25 深兰人工智能芯片研究院(江苏)有限公司 Method and device for detecting defects of container
CN112362674A (en) * 2020-11-13 2021-02-12 四川科伦药业股份有限公司 Infusion bag foreign matter identification method and detection method based on artificial intelligence vision technology
WO2022100145A1 (en) * 2020-11-13 2022-05-19 四川科伦药业股份有限公司 Infusion bag foreign matter identification method and detection method based on artificial intelligence visual technology
CN112991294A (en) * 2021-03-12 2021-06-18 梅特勒-托利多(常州)测量技术有限公司 Foreign matter detection method, apparatus and computer readable medium
CN115290697A (en) * 2022-09-26 2022-11-04 南通众盈材料科技有限公司 Polyurethane production abnormity identification method
CN116973311A (en) * 2023-09-22 2023-10-31 成都中嘉微视科技有限公司 Detection device and detection method for foreign matters on film and under film
CN116973311B (en) * 2023-09-22 2023-12-12 成都中嘉微视科技有限公司 Detection device and detection method for foreign matters on film and under film

Also Published As

Publication number Publication date
CN105869154B (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN105869154A (en) Visible foreign matter and bubble classification recognition detection method for medical (250ml)
CN110414368B (en) Unsupervised pedestrian re-identification method based on knowledge distillation
CN107451607B (en) A kind of personal identification method of the typical character based on deep learning
CN109523552B (en) Three-dimensional object detection method based on viewing cone point cloud
CN108230337A (en) A kind of method that semantic SLAM systems based on mobile terminal are realized
CN105251707B (en) Bad part eject sorting equipment based on medical large transfusion visible foreign matters detecting system
CN100587717C (en) Medical large transfusion machine vision on-line detection method
Wang et al. A study on long-close distance coordination control strategy for litchi picking
CN109784204A (en) A kind of main carpopodium identification of stacking string class fruit for parallel robot and extracting method
CN103080331A (en) Method for detecting microorganisms, device for detecting microorganisms and program
CN104021381B (en) Human movement recognition method based on multistage characteristics
CN108021903A (en) The error calibrating method and device of artificial mark leucocyte based on neutral net
CN110135271A (en) A kind of cell sorting method and device
CN108596038A (en) Erythrocyte Recognition method in the excrement with neural network is cut in a kind of combining form credit
JP2015137857A (en) detection control device, program and detection system
CN106529417A (en) Visual and laser data integrated road detection method
CN113052295B (en) Training method of neural network, object detection method, device and equipment
CN106446785A (en) Passable road detection method based on binocular vision
CN107832801A (en) A kind of cell image classification model building method
CN110097091A (en) It is trained be distributed with inference data it is inconsistent under the conditions of image fine granularity recognition methods
CN106645180A (en) Method for checking defects of substrate glass, field terminal and server
CN112560716A (en) High-resolution remote sensing image water body extraction method based on low-level feature fusion
Hu et al. Recognition and localization of strawberries from 3D binocular cameras for a strawberry picking robot using coupled YOLO/Mask R-CNN
CN103544473B (en) A kind of electronic connector detection method based on machine vision
CN110188592A (en) A kind of urinary formed element cell image disaggregated model construction method and classification method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181016