CN108846415A - Target recognition device and method for an industrial sorting robot - Google Patents
Target recognition device and method for an industrial sorting robot
- Publication number
- CN108846415A (application number CN201810496518.7A)
- Authority
- CN
- China
- Prior art keywords
- network
- layer
- training
- target
- sorting robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a target recognition device and method for a sorting robot. The device includes a camera, a processor and a memory, and further includes a program for executing the following steps: acquiring image information of the object to be measured through the camera; loading a convolutional neural network framework and its trained model; generating bounding boxes on the feature map using several candidate anchor boxes; predicting the corresponding category and class score for each bounding box; and obtaining the bounding box with the highest class score by non-maximum suppression. The trained model of the loaded convolutional neural network is obtained by the following steps: pre-training the network; acquiring a labelled data set of the target object; performing cluster analysis on the target boxes in the labelled data set; obtaining the output of each layer of the network by forward propagation and computing the error between the output and the label; and computing the gradients of the weights and biases of each layer backwards according to the error and adjusting the weights and biases of each layer. The detection speed of the present invention is fast, and real-time performance can be achieved while accuracy is maintained.
Description
Technical field
The present invention relates to a target recognition device and method for an industrial sorting robot.
Background art
Nowadays automated production is increasingly common, and robots are widely used on all kinds of industrial production lines; mechanical sorting of workpieces is a common task in industrial processes. Recognizing and localizing moving targets is the basis of the sorting operation. Early sorting robots mostly relied on teach-and-playback programming; although this allows some fixed operations to be completed, it cannot accomplish more intelligent tasks. General machine vision techniques, such as simple edge extraction, template matching and image enhancement, can detect the target object, but they have difficulty sorting it accurately in complex scenes.
A convolutional neural network (CNN) is mainly used to recognize two-dimensional patterns that are invariant to distortion; it is a mathematical model that imitates the animal brain. As long as the convolutional network is trained on known patterns, it can learn the mapping between a large number of inputs and outputs and acquire the mapping ability between input-output pairs. When a convolutional neural network algorithm is used for a sorting robot, it can meet the sorting-accuracy requirements of complex scenes; however, when there are many interfering objects in the scene and the density is high, multiple target objects must be handled at the same time, the result is easily affected by factors such as illumination, the speed of target recognition and localization is relatively slow, and missed and false detections occur easily, so the real-time requirement of grasping with the end-effector of the machine cannot be met.
These are the deficiencies of the prior art.
Summary of the invention
In view of the deficiencies of the prior art, the technical problem to be solved by the present invention is to provide a target recognition device and method for an industrial sorting robot that help achieve real-time detection of target objects.
The target recognition device for an industrial sorting robot of the invention includes a camera, a processor and a memory, and is characterized in that the device further includes a program for executing the following steps:
acquiring image information of the object to be measured through the camera;
loading a convolutional neural network framework and its trained model, and obtaining several candidate anchor boxes;
generating several bounding boxes on the feature map using the anchor boxes;
predicting the corresponding category and class score for each of the above bounding boxes;
obtaining the bounding box with the highest class score by non-maximum suppression, its corresponding category being taken as the category of the target.
The trained model of the loaded convolutional neural network is obtained by a program with the following steps:
first, pre-training the network on the ImageNet data set to obtain a pre-trained model of the network;
then acquiring a data set of the target object through the camera and labelling it manually to obtain a labelled data set;
performing cluster analysis with k-means on the widths and heights of the target boxes in the labelled data set to obtain several groups of candidate anchor boxes;
obtaining the output of each layer of the network by forward propagation, and computing the error between the output and the label;
computing the gradients of the weights and biases of each layer backwards, layer by layer, according to the error, and adjusting the weights and biases of each layer.
The value of k is 2.
In pre-training, multi-scale training is used, and the size of the input image is changed every 10 rounds.
In pre-training, 32 is used as the down-sampling factor, and the size of the input image is kept between 320 and 608.
The convolutional neural network is an improvement on the basis of Darknet-19: the last convolutional layer is removed, and three convolutional layers with 3x3 kernels and 1024 channels and one convolutional layer with a 1x1 kernel are added.
The target recognition method for an industrial sorting robot of the invention is characterized by including the following steps:
acquiring image information of the object to be measured through the camera;
loading a convolutional neural network framework and its trained model, and obtaining several candidate anchor boxes;
generating several bounding boxes on the feature map using the anchor boxes;
predicting the corresponding category and class score for each of the above bounding boxes;
obtaining the bounding box with the highest class score by non-maximum suppression, its corresponding category being taken as the category of the target.
The trained model of the loaded convolutional neural network is obtained by the following steps:
first, pre-training the network on the ImageNet data set to obtain a pre-trained model of the network;
then acquiring a data set of the target object through the camera and labelling it manually to obtain a labelled data set;
performing cluster analysis with k-means on the widths and heights of the target boxes in the labelled data set to obtain several groups of candidate anchor boxes;
obtaining the output of each layer of the network by forward propagation, and computing the error between the output and the label;
computing the gradients of the weights and biases of each layer backwards, layer by layer, according to the error, and adjusting the weights and biases of each layer.
The beneficial effects of the invention are as follows. First, the model trained with the convolutional neural network can recognize and localize the target in complex scenes; since the whole image is used to train the model, the entire target-detection pipeline is a single convolutional neural network whose detection performance can be optimized end to end, so the detection speed is fast and the real-time requirement can be met while accuracy is maintained. Second, the convolutional neural network of the invention is an improvement on the basis of the base network Darknet-19: three convolutional layers with 3x3 kernels and 1024 channels and one convolutional layer with a 1x1 kernel are added, which reduces the training parameters of the network while deepening it, enables the network to extract richer feature information and improves the accuracy with which the network recognizes the target object. Third, because the network is trained at multiple scales, it is robust to input images of different sizes. Finally, by clustering the manually labelled bounding boxes in the data set, the statistical law of the bounding boxes is found; it is found that k = 2 gives a good sorting effect.
Brief description of the drawings
Fig. 1 is a flow chart of the target detection of the industrial sorting robot of the invention.
Fig. 2 is a flow chart of the convolutional neural network training stage of the industrial sorting robot of the invention.
Fig. 3 is a table of the network structure of the convolutional neural network of the industrial sorting robot of the invention.
Fig. 4 is a graph of the number of training iterations of the convolutional neural network of the invention versus the loss function.
Fig. 5 is a plot of the k value versus the cost function for the industrial sorting robot of the invention.
Specific embodiment
The invention is now described in further detail with reference to the drawings and embodiments.
The present invention applies an improved YOLOv2 convolutional neural network algorithm to a sorting robot for machined parts, realizing a sorting operation based on machine vision. The system obtains image information through a camera above the conveyor belt, recognizes and localizes the target part in real time among parts of different models, grasps it with the end-effector of the manipulator and places it neatly.
Referring to Fig. 1 and Fig. 2, the YOLOv2 convolutional neural network algorithm treats target detection as a regression problem; it can predict the positions and categories of multiple target boxes at once in real time, completing the localization and classification of the target object in a single network, so the entire detection pipeline is one network.
Because generating bounding boxes from the grid alone gives a low recall rate, YOLOv2 applies the k-means method to cluster the manually labelled bounding boxes in the data set and find their statistical law; bounding boxes are then generated on the feature map from anchor boxes with these widths and heights, which improves the recall rate and the localization accuracy.
A target object may have multiple bounding boxes; each bounding box contains variables such as x, y, w, h, a confidence score and class scores, where x and y denote the center coordinates of the bounding box relative to the coordinate origin, w and h denote the width and height of the bounding box relative to the whole image, and the confidence score reflects the accuracy of the bounding-box prediction. Only the bounding box with the highest confidence score is kept, by non-maximum suppression, so as to localize the object accurately.
The feature information extracted by the convolutional layers is fed into a softmax classifier, which predicts the class probabilities of multiple objects for each bounding box; the prediction with the highest class-probability score is likewise kept by non-maximum suppression, so as to classify the object.
When the YOLOv2 convolutional neural network algorithm is used for sorting machined parts, the network is not trained with a sliding window or with extracted candidate regions; instead, the whole image is used directly to train the model, so the entire target-detection pipeline is a single convolutional neural network whose detection performance can be optimized end to end. The detection speed is therefore fast, and the real-time requirement can be met while accuracy is maintained.
Because the labelled data are few and their resolution is often low, directly training the network on the hand-made labelled data set usually gives low accuracy and poor localization, so pre-training is used in the actual training. The ImageNet data set contains more than 14 million images covering more than 20,000 categories; the images are clear, of high resolution, and most carry explicit category annotations. Pre-training is therefore carried out on the ImageNet data set to obtain a pre-trained model of the network and to give the network a general understanding of target objects.
A data set of the target object is then acquired through the camera and labelled manually to obtain a labelled data set; for example, a nut-and-shim data set of 1000 manually labelled images that we collected is used to retrain the network. The optimal value of k for clustering the widths and heights of the labelled boxes with k-means is obtained from the relationship between k and the cost function; as shown in Fig. 5, the sorting effect is good when k = 2. Cluster analysis is then carried out with the optimal k to obtain several groups of anchor boxes, as sketched below.
The training stage of the network consists mainly of forward propagation and back-propagation: forward propagation computes the output value of each layer in turn, while back-propagation computes the gradients of the weights and biases of each layer backwards, layer by layer, according to the error, and then adjusts the weights and biases of each layer. In the training process, the output of each layer is first obtained by forward propagation, the error between the output and the label is then computed, and the parameters are continuously updated by back-propagation to reduce the loss function until the network converges completely. By visualizing the loss function during training, one can analyze whether the model training converges and whether the accuracy reaches the preset requirement.
Referring to Fig. 4, when the number of training iterations reaches 40,000, the training has converged and the average loss function reaches 1.0. The trained model can accurately recognize and localize the target object in complex scenes.
The trained model reaches a speed of 20 frames per second on the conveyor belt, which meets the real-time requirement. Tests on individual images show that, for an arbitrarily input image containing the target object, the model trained with the convolutional neural network can detect the target object in complex scenes and accurately predict its category and location.
To make the network robust to images of different input sizes, a multi-scale training method is used: the size of the input image is changed every 10 training rounds. During training, 32 is used as the down-sampling factor, and the size of the input image is kept between 320 and 608.
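A minimal sketch of such a multi-scale schedule, assuming the new size is drawn at random from the multiples of the down-sampling factor 32 lying between 320 and 608 (the description only says that the size is changed every 10 rounds, not how the new size is chosen):

```python
import random

DOWNSAMPLE = 32
SCALES = list(range(320, 608 + 1, DOWNSAMPLE))  # 320, 352, ..., 608

def input_size_for_round(training_round: int, seed: int = 0) -> int:
    """Return the square input resolution for a given training round; it changes every 10 rounds."""
    block = training_round // 10          # all rounds in the same block of 10 share one size
    rng = random.Random(seed + block)
    return rng.choice(SCALES)

# e.g. rounds 0-9 use one size, rounds 10-19 another, and so on.
```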
Referring to Fig. 3, the network structure of the invention is realized with a convolutional neural network (CNN). A CNN discriminates features by convolution, reduces the order of magnitude of the network parameters through weight sharing and pooling, and finally completes tasks such as classification with a traditional neural network. The base model Darknet-19 of YOLO has 19 convolutional layers. The first layer is a convolutional layer with 32 channels and a 3x3 kernel; the second layer is a max-pooling layer with a 2x2 kernel and stride 2; the third layer is a convolutional layer with 64 channels and a 3x3 kernel; the fourth layer is a max-pooling layer with a 2x2 kernel and stride 2; the fifth, sixth and seventh layers are convolutional layers with kernel sizes 3x3, 1x1 and 3x3 and 128, 64 and 128 channels respectively; the eighth layer is a max-pooling layer with a 2x2 kernel and stride 2; the ninth, tenth and eleventh layers are convolutional layers with kernel sizes 3x3, 1x1 and 3x3 and 256, 128 and 256 channels respectively; the twelfth layer is a max-pooling layer with a 2x2 kernel and stride 2; the thirteenth to seventeenth layers are convolutional layers with kernel sizes 3x3, 1x1, 3x3, 1x1 and 3x3 and 512, 256, 512, 256 and 512 channels respectively; the eighteenth layer is a max-pooling layer with a 2x2 kernel and stride 2; the nineteenth to twenty-third layers are convolutional layers with kernel sizes 3x3, 1x1, 3x3, 1x1 and 3x3 and 1024, 512, 1024, 512 and 1024 channels respectively; the twenty-fourth layer is a convolutional layer with a 1x1 kernel and 1000 channels. Darknet-19 makes extensive use of cascaded convolutions, with kernels mainly of the two sizes 3x3 and 1x1; borrowing the idea of Network in Network, 1x1 convolution kernels are inserted between the 3x3 kernels. The convolutional layers are responsible for extracting the feature information of the target object, while the max-pooling layers extract the key information of the target object, reduce redundancy and reduce the parameters of network training.
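For reference, the layer sequence described above can be transcribed compactly as the following specification; it simply restates the table of Fig. 3 as reported here, and activation and batch-normalization details, which this description does not give, are omitted.

```python
# Each entry is ("conv", out_channels, kernel_size) or ("maxpool", kernel_size, stride).
DARKNET19 = [
    ("conv", 32, 3), ("maxpool", 2, 2),
    ("conv", 64, 3), ("maxpool", 2, 2),
    ("conv", 128, 3), ("conv", 64, 1), ("conv", 128, 3), ("maxpool", 2, 2),
    ("conv", 256, 3), ("conv", 128, 1), ("conv", 256, 3), ("maxpool", 2, 2),
    ("conv", 512, 3), ("conv", 256, 1), ("conv", 512, 3), ("conv", 256, 1), ("conv", 512, 3), ("maxpool", 2, 2),
    ("conv", 1024, 3), ("conv", 512, 1), ("conv", 1024, 3), ("conv", 512, 1), ("conv", 1024, 3),
    ("conv", 1000, 1),  # final 1x1 classifier layer, removed in the modified network described below
]
```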
In the present invention, an improvement is made on the basis of Darknet-19: the last convolutional layer is removed, and three convolutional layers with 3x3 kernels and 1024 channels and one convolutional layer with a 1x1 kernel are added. While deepening the network, this reduces its training parameters, enables it to extract richer feature information and improves the accuracy with which it recognizes the target object. Bounding boxes are predicted on the feature map using the anchor boxes.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Although the present invention has been disclosed by way of a preferred embodiment, it is not thereby limited. Any person skilled in the art who, without departing from the scope of the technical solution of the present invention, makes minor changes or modifications using the technical content disclosed above produces an equivalent embodiment of equivalent variation; any simple modification, equivalent change or modification made to the above embodiment according to the technical substance of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.
Claims (8)
1. A target recognition device for an industrial sorting robot, comprising a camera, a processor and a memory, characterized in that the device further comprises a program for executing the following steps:
acquiring image information of the object to be measured through the camera;
loading a convolutional neural network framework and its trained model, and obtaining several candidate anchor boxes;
generating several bounding boxes on the feature map using the anchor boxes;
predicting the corresponding category and class score for each of the above bounding boxes;
obtaining the bounding box with the highest class score by non-maximum suppression, its corresponding category being taken as the category of the target.
2. The target recognition device for an industrial sorting robot according to claim 1, characterized in that the trained model of the loaded convolutional neural network is obtained by a program with the following steps:
first, pre-training the network on the ImageNet data set to obtain a pre-trained model of the network;
then acquiring a data set of the target object through the camera and labelling it manually to obtain a labelled data set;
performing cluster analysis with k-means on the widths and heights of the target boxes in the labelled data set to obtain several groups of candidate anchor boxes;
obtaining the output of each layer of the network by forward propagation, and computing the error between the output and the label;
computing the gradients of the weights and biases of each layer backwards, layer by layer, according to the error, and adjusting the weights and biases of each layer.
3. The target recognition device for an industrial sorting robot according to claim 2, characterized in that k = 2.
4. The target recognition device for an industrial sorting robot according to claim 2, characterized in that, in pre-training, multi-scale training is used and the size of the input image is changed every 10 rounds.
5. The target recognition device for an industrial sorting robot according to claim 4, characterized in that, in pre-training, 32 is used as the down-sampling factor and the size of the input image is kept between 320 and 608.
6. The target recognition device for an industrial sorting robot according to any one of claims 1 to 5, characterized in that the convolutional neural network is an improvement on the basis of Darknet-19: the last convolutional layer is removed, and three convolutional layers with 3x3 kernels and 1024 channels and one convolutional layer with a 1x1 kernel are added.
7. A target recognition method for an industrial sorting robot, characterized by comprising the following steps:
acquiring image information of the object to be measured through the camera;
loading a convolutional neural network framework and its trained model, and obtaining several candidate anchor boxes;
generating several bounding boxes on the feature map using the anchor boxes;
predicting the corresponding category and class score for each of the above bounding boxes;
obtaining the bounding box with the highest class score by non-maximum suppression, its corresponding category being taken as the category of the target.
8. The target recognition method for an industrial sorting robot according to claim 7, characterized in that the trained model of the loaded convolutional neural network is obtained by the following steps:
first, pre-training the network on the ImageNet data set to obtain a pre-trained model of the network;
then acquiring a data set of the target object through the camera and labelling it manually to obtain a labelled data set;
performing cluster analysis with k-means on the widths and heights of the target boxes in the labelled data set to obtain several groups of candidate anchor boxes;
obtaining the output of each layer of the network by forward propagation, and computing the error between the output and the label;
computing the gradients of the weights and biases of each layer backwards, layer by layer, according to the error, and adjusting the weights and biases of each layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810496518.7A CN108846415A (en) | 2018-05-22 | 2018-05-22 | Target recognition device and method for an industrial sorting robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108846415A true CN108846415A (en) | 2018-11-20 |
Family
ID=64213210
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810496518.7A (CN108846415A, pending) | Target recognition device and method for an industrial sorting robot | 2018-05-22 | 2018-05-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108846415A (en) |
- 2018-05-22: CN application CN201810496518.7A filed; published as CN108846415A, status pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106022237A (en) * | 2016-05-13 | 2016-10-12 | 电子科技大学 | Pedestrian detection method based on end-to-end convolutional neural network |
CN107358149A (en) * | 2017-05-27 | 2017-11-17 | 深圳市深网视界科技有限公司 | A kind of human body attitude detection method and device |
CN107862694A (en) * | 2017-12-19 | 2018-03-30 | 济南大象信息技术有限公司 | A kind of hand-foot-and-mouth disease detecting system based on deep learning |
Non-Patent Citations (1)
Title |
---|
WEI Yongming et al.: "Research on UAV Aerial Image Positioning Based on YOLOv2", Laser & Optoelectronics Progress *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109740681A (en) * | 2019-01-08 | 2019-05-10 | 南方科技大学 | Fruit sorting method, device, system, terminal and storage medium |
CN110135486B (en) * | 2019-05-08 | 2023-01-24 | 西安电子科技大学 | Chopstick image classification method based on adaptive convolutional neural network |
CN110135486A (en) * | 2019-05-08 | 2019-08-16 | 西安电子科技大学 | Chopsticks image classification method based on adaptive convolutional neural networks |
CN110210568A (en) * | 2019-06-06 | 2019-09-06 | 中国民用航空飞行学院 | The recognition methods of aircraft trailing vortex and system based on convolutional neural networks |
CN111401215A (en) * | 2020-03-12 | 2020-07-10 | 杭州涂鸦信息技术有限公司 | Method and system for detecting multi-class targets |
CN111401215B (en) * | 2020-03-12 | 2023-10-31 | 杭州涂鸦信息技术有限公司 | Multi-class target detection method and system |
CN111429418A (en) * | 2020-03-19 | 2020-07-17 | 天津理工大学 | Industrial part detection method based on YO L O v3 neural network |
CN111639721A (en) * | 2020-06-12 | 2020-09-08 | 江苏斯诺物联科技有限公司 | Intelligent perception robot based on logistics transportation application |
CN111783797B (en) * | 2020-06-30 | 2023-08-18 | 杭州海康威视数字技术股份有限公司 | Target detection method, device and storage medium |
CN111783797A (en) * | 2020-06-30 | 2020-10-16 | 杭州海康威视数字技术股份有限公司 | Target detection method, device and storage medium |
CN113305848A (en) * | 2021-06-11 | 2021-08-27 | 哈尔滨工业大学 | Real-time capture detection method based on YOLO v2 network |
CN113673488A (en) * | 2021-10-21 | 2021-11-19 | 季华实验室 | Target detection method and device based on few samples and intelligent object sorting system |
TWI815318B (en) * | 2022-02-23 | 2023-09-11 | 國立臺北科技大學 | Warehousing automatic sorting system |
CN114871115A (en) * | 2022-04-28 | 2022-08-09 | 五邑大学 | Object sorting method, device, equipment and storage medium |
CN114871115B (en) * | 2022-04-28 | 2024-07-05 | 五邑大学 | Object sorting method, device, equipment and storage medium |
CN114887927A (en) * | 2022-05-10 | 2022-08-12 | 浙江工业大学 | Automatic conveying quality detection and sorting system based on industrial robot |
CN114887927B (en) * | 2022-05-10 | 2024-02-13 | 浙江工业大学 | Automatic conveying quality detection sorting system based on industrial robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108846415A (en) | Target recognition device and method for an industrial sorting robot | |
CN108876765A (en) | Target locating device and method for an industrial sorting robot | |
CN111062915B (en) | Real-time steel pipe defect detection method based on improved YOLOv3 model | |
CN105447473B (en) | A kind of any attitude facial expression recognizing method based on PCANet-CNN | |
Yu et al. | A vision-based robotic grasping system using deep learning for 3D object recognition and pose estimation | |
CN105005787B (en) | A kind of material sorting technique of the joint sparse coding based on Dextrous Hand tactile data | |
Tu et al. | An accurate and real-time surface defects detection method for sawn lumber | |
Liu et al. | Recognition methods for coal and coal gangue based on deep learning | |
Wu et al. | Solder joint recognition using mask R-CNN method | |
CN110314854A (en) | A kind of device and method of the workpiece sensing sorting of view-based access control model robot | |
CN108280856A (en) | The unknown object that network model is inputted based on mixed information captures position and orientation estimation method | |
CN107316058A (en) | Improve the method for target detection performance by improving target classification and positional accuracy | |
CN108520273A (en) | A kind of quick detection recognition method of dense small item based on target detection | |
CN109919934A (en) | A kind of liquid crystal display panel defect inspection method based on the study of multi-source domain depth migration | |
CN109544522A (en) | A kind of Surface Defects in Steel Plate detection method and system | |
CN110633738B (en) | Rapid classification method for industrial part images | |
CN110490842A (en) | A kind of steel strip surface defect detection method based on deep learning | |
CN114581782B (en) | Fine defect detection method based on coarse-to-fine detection strategy | |
CN110909660A (en) | Plastic bottle detection and positioning method based on target detection | |
CN110334594A (en) | A kind of object detection method based on batch again YOLO algorithm of standardization processing | |
CN113222982A (en) | Wafer surface defect detection method and system based on improved YOLO network | |
CN111275684A (en) | Strip steel surface defect detection method based on multi-scale feature extraction | |
CN108133235A (en) | A kind of pedestrian detection method based on neural network Analysis On Multi-scale Features figure | |
CN114596273B (en) | Intelligent detection method for multiple defects of ceramic substrate by using YOLOV4 network | |
CN117911350A (en) | PCB surface defect detection method based on improvement YOLOv-tiny |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181120 |