CN109684942A - A fully automatic tableware sorting method based on visual recognition - Google Patents
A fully automatic tableware sorting method based on visual recognition
- Publication number
- CN109684942A (application CN201811498303.5A)
- Authority
- CN
- China
- Prior art keywords
- tableware
- mechanical arm
- sorting
- identification
- full
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention relates to a fully automatic tableware sorting method based on visual recognition. A camera first feeds a real-time video stream into a computer, where it is converted into digital images. The images are passed to a trained YOLOv3 detector for classification and localization, and the detector's output is transmitted over a serial port to form control instructions that direct a mechanical arm to perform the corresponding grasping actions and complete the sorting. The present invention improves the degree of automation of tableware sorting.
Description
Technical field
The present invention relates to the field of tableware sorting, and in particular to a fully automatic tableware sorting method based on visual recognition.
Background technique
Tableware sorting in the prior art is still mostly manual: after use, bowls, plates, chopsticks, spoons, and other tableware are classified and recycled by hand and transferred by a conveying device. Automatic tableware sorting devices also exist, but they rely on magnetic mechanisms and therefore place requirements on the material of the tableware, which makes them too costly to popularize.
Clearly, existing tableware recycling methods are inefficient and easily damage the tableware, while the recognition accuracy and speed of target detection algorithms provide favorable conditions for automated tableware sorting.
Summary of the invention
In view of this, the purpose of the present invention is to propose a fully automatic tableware sorting method based on visual recognition that improves the degree of automation of tableware sorting.
The present invention is realized by the following scheme: a fully automatic tableware sorting method based on visual recognition, comprising the following steps:
Step S1: collect a number of historical pictures of tableware passing along the assembly line and label them manually to form a training dataset;
Step S2: train a YOLOv3 object detector with the training dataset of step S1;
Step S3: capture the real-time video stream on the assembly line, feed it into a host computer to form digital images, preprocess the digital images, and pass them to the YOLOv3 object detector trained in step S2 for classification, recognition, and region marking;
Step S4: form control instructions according to the output of the YOLOv3 object detector in step S3, and control the motion of the mechanical arm to sort the tableware.
Further, in step S1, the manually labeled content includes the tableware class information and location information of each picture.
Further, in step S1, 210 pictures are labeled with the LabelImg annotation tool to serve as the training dataset; xml files in VOC format are produced and the corresponding configuration files are set. The data comprise 5 classes: bowl, cup, plate, spoon, and saucer. Each picture's annotation is also converted into a txt file in YOLO format.
Further, in step S4 the mechanical arm is controlled as follows: according to the class information of the recognized tableware, the target placement location of the mechanical arm is determined; according to the size information of the recognized tableware, the opening width of the mechanical arm's gripper is determined; and according to the location information of the recognized tableware, the starting position of the mechanical arm is determined.
Further, according to the location information of the recognized tableware, the starting position of the mechanical arm is calculated with the following formula:
starting position of the mechanical arm = target placement location − (initial position of the recognized tableware + mechanical arm travel time × conveyor belt speed) × precision trim.
Compared with the prior art, the present invention has the following beneficial effects: it uses machine recognition, requires no manual sorting, and places no requirements on the material of the tableware; it features a high degree of automation, insensitivity to the environment, and low cost.
Detailed description of the invention
Fig. 1 is a schematic diagram of the principle of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the Darknet-53 network structure in the embodiment of the present invention.
Fig. 3 shows detection results in the embodiment of the present invention.
Specific embodiment
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should also be noted that the terms used herein are only for describing specific embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
As shown in Fig. 1, this embodiment provides a fully automatic tableware sorting method based on visual recognition, comprising the following steps:
Step S1: collect a number of historical pictures of tableware passing along the assembly line and label them manually to form a training dataset;
Step S2: train a YOLOv3 object detector with the training dataset of step S1;
Step S3: capture the real-time video stream on the assembly line, feed it into a host computer to form digital images, preprocess the digital images, and pass them to the YOLOv3 object detector trained in step S2 for classification, recognition, and region marking;
Step S4: form control instructions according to the output of the YOLOv3 object detector in step S3, and control the motion of the mechanical arm to sort the tableware.
Further, in step S1, the manually labeled content includes the tableware class information and location information of each picture.
Further, in step S1, 210 pictures are labeled with the LabelImg annotation tool to serve as the training dataset; xml files in VOC format are produced and the corresponding configuration files are set. The data comprise 5 classes: bowl, cup, plate, spoon, and saucer. Each picture's annotation is also converted into a txt file in YOLO format.
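The VOC-to-YOLO conversion described above can be sketched as follows. This is a minimal illustration, not the embodiment's actual script; the class order and the use of LabelImg's standard VOC fields (`size`, `object/name`, `object/bndbox`) are assumptions.

```python
import xml.etree.ElementTree as ET

# The 5 classes from the description; the ordering is an assumption.
CLASSES = ["bowl", "cup", "plate", "spoon", "saucer"]

def voc_to_yolo(xml_text):
    """Convert one VOC-format annotation (as produced by LabelImg) into
    YOLO txt lines: 'class_id x_center y_center width height', with all
    coordinates normalized to [0, 1] by the image size."""
    root = ET.fromstring(xml_text)
    iw = float(root.find("size/width").text)
    ih = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls_id = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        x1, y1 = float(box.find("xmin").text), float(box.find("ymin").text)
        x2, y2 = float(box.find("xmax").text), float(box.find("ymax").text)
        cx, cy = (x1 + x2) / 2 / iw, (y1 + y2) / 2 / ih
        w, h = (x2 - x1) / iw, (y2 - y1) / ih
        lines.append(f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines
```

In practice one such txt file would be written next to each of the 210 training pictures.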
Further, in step S4 the mechanical arm is controlled as follows: according to the class information of the recognized tableware, the target placement location of the mechanical arm is determined; according to the size information of the recognized tableware, the opening width of the mechanical arm's gripper is determined; and according to the location information of the recognized tableware, the starting position of the mechanical arm is determined.
Further, according to the location information of the recognized tableware, the starting position of the mechanical arm is calculated with the following formula:
starting position of the mechanical arm = target placement location − (initial position of the recognized tableware + mechanical arm travel time × conveyor belt speed) × precision trim.
Wherein, the precision trim is determined according to actual demand.
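The starting-position formula above, with the parenthesized term scaled by the precision trim exactly as written, can be expressed as a small helper. The units (positions in millimetres along the belt, speed in mm/s) are assumed for illustration:

```python
def arm_start_position(target_pos, initial_pos, travel_time, belt_speed,
                       trim=1.0):
    """Starting position of the mechanical arm, following the formula in
    the description:
        start = target placement location
                - (initial position of the recognized tableware
                   + arm travel time * conveyor belt speed) * precision trim
    The trim compensates residual error and is tuned to actual demand.
    """
    return target_pos - (initial_pos + travel_time * belt_speed) * trim

# Example: target bin at 600 mm, item first detected at 120 mm,
# 1.5 s arm travel, belt running at 200 mm/s, trim left at 1.0.
start = arm_start_position(600.0, 120.0, 1.5, 200.0)
```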
Preferably, in this embodiment the acquisition of images and video mainly consists of converting the image signal of the photographed object, captured by the camera, into a digital signal that the computer can recognize, with an auxiliary lighting device used as needed. Since illumination strongly affects the image information, guaranteeing a sufficiently lit environment is extremely important to the practical effect of this method. This embodiment uses a common 12-megapixel wired USB webcam of the kind used with notebook or desktop computers: driver-free and plug-and-play, with a single USB plug. The application scenario of this embodiment does not require long-distance shooting, so compared with a professional industrial camera this webcam offers good value and saves cost. The camera also has a small lamp on its back that can be flipped open for fill light. In this embodiment the camera is fixed on a tripod and connected to the computer by a USB data cable to transmit the real-time video stream.
In this embodiment, a program captures the real-time video stream transmitted by the video acquisition module and converts video frames into pictures using OpenCV. The embodiment uses the YOLOv3 object detection algorithm for recognition, which provides the class, position, and size information of every piece of tableware in the picture, and passes this information to the mechanical arm over a serial port. The mechanical arm may be any prior-art tooling with a gripper that provides three degrees of freedom and rotation.
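As a sketch of the serial hand-off from detector to arm: the message layout below (a `$`-prefixed, comma-separated ASCII line carrying class, position, and size) is an assumed convention, not the embodiment's actual protocol.

```python
def pack_command(cls_id, x, y, w, h):
    """Pack one detection (class id, box center, box size) into a
    fixed-layout ASCII command line for the arm controller. The frame
    format -- '$' start marker, comma-separated fields, newline
    terminator -- is an assumed convention for illustration."""
    return f"${cls_id},{x:.1f},{y:.1f},{w:.1f},{h:.1f}\n"

# With pyserial (assumed), the line would be written to the port roughly as:
#   ser = serial.Serial("/dev/ttyUSB0", 115200)
#   ser.write(pack_command(2, 310.0, 150.5, 88.0, 40.0).encode("ascii"))
msg = pack_command(2, 310.0, 150.5, 88.0, 40.0)
```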
In particular, in this embodiment YOLOv3 is an improved version of YOLO. YOLO, short for You Only Look Once: Unified, Real-Time Object Detection, is an object detection system based on a single neural network proposed by Joseph Redmon, Ali Farhadi, et al. in 2015. At CVPR 2017, Joseph Redmon and Ali Farhadi published YOLOv2, which further improved detection accuracy and speed. In March 2018, the well-known object detection model YOLO was released in a completely new YOLOv3 edition, which again improved accuracy and speed: at comparable accuracy, YOLOv3 is about 3 times faster than SSD and nearly 4 times faster than RetinaNet.
YOLO treats object detection as a regression problem: a single neural network predicts, directly from the whole image, the coordinates of the object boxes, the confidence that each box contains an object, and the class probabilities. Because the entire detection process runs inside one neural network, detection performance can be optimized end to end. With YOLO, a single look at an image suffices to obtain which objects it contains and where they are. The process of detecting objects with YOLO is as follows:
1. resize the image to the network's input size;
2. run the neural network to obtain box coordinates, object confidences, and class probabilities;
3. apply non-maximum suppression to filter the bounding boxes.
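Step 3 above, non-maximum suppression, can be sketched in a few lines of plain Python (greedy NMS over axis-aligned boxes; the 0.45 IoU threshold is a common default, not a value from the embodiment):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.45):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it by more than `thresh` IoU, and repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```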
In this embodiment, the improvements of YOLOv3 are mainly the following:
1. Multi-scale prediction: YOLOv3 adds prediction at multiple scales, addressing YOLO's coarse granularity and its weakness on small objects.
2. Different loss function: YOLOv3 replaces YOLOv2's softmax loss with a logistic loss. When composite (multi-label) labels are encountered, softmax has difficulty modeling the data well, and classifying with a logistic loss is more effective.
3. Deeper network: simplified residual blocks replace the original network structure. The backbone changes from YOLOv2's Darknet-19 to YOLOv3's Darknet-53, with upsampling layers added. The Darknet-53 network structure is shown in Fig. 2.
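The loss-function point (2) can be illustrated numerically: softmax forces class scores to compete and sum to 1, while independent logistic (sigmoid) outputs allow two overlapping labels to both score high, which is why logistic loss handles composite labels better. A minimal sketch in plain Python:

```python
import math

def softmax(logits):
    """Mutually exclusive class scores: forced to sum to 1."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    """Independent per-class score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Two plausible overlapping labels in one box (hypothetical example,
# e.g. "cup" and "saucer"):
logits = [2.0, 1.5]
soft = softmax(logits)               # competing: second class pushed below 0.5
logi = [sigmoid(z) for z in logits]  # independent: both can stay high
```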
In summary, YOLOv3 is a good detector: very fast and also very accurate.
Specifically, the computer used in this embodiment runs Ubuntu 16.04 with a 1050 Ti graphics card. The camera first feeds the real-time video stream into the computer to form digital images; the images are passed into the trained YOLOv3 detector for classification and localization; and the detector's output is transmitted over the serial port to form control instructions that make the mechanical arm perform the corresponding grasping actions and complete the sorting. This embodiment thus realizes a fully automatic tableware sorting method based on visual recognition. As shown in Fig. 3, the top of Fig. 3 is the original photograph and the bottom is the detection result. It can be seen that, after training on the tableware dataset, the YOLOv3 detector is able to detect tableware and performs well in terms of classification as well as size and position.
The foregoing are merely preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention are covered by the present invention.
Claims (5)
1. A fully automatic tableware sorting method based on visual recognition, characterized by comprising the following steps:
Step S1: collect a number of historical pictures of tableware passing along the assembly line and label them manually to form a training dataset;
Step S2: train a YOLOv3 object detector with the training dataset of step S1;
Step S3: capture the real-time video stream on the assembly line, feed it into a host computer to form digital images, preprocess the digital images, and pass them to the YOLOv3 object detector trained in step S2 for classification, recognition, and region marking;
Step S4: form control instructions according to the output of the YOLOv3 object detector in step S3, and control the motion of the mechanical arm to sort the tableware.
2. The fully automatic tableware sorting method based on visual recognition according to claim 1, characterized in that: in step S1, the manually labeled content includes the tableware class information and location information of each picture.
3. The fully automatic tableware sorting method based on visual recognition according to claim 1, characterized in that: in step S1, 210 pictures are labeled with the LabelImg annotation tool to serve as the training dataset, xml files in VOC format are produced and the corresponding configuration files are set; the data comprise 5 classes: bowl, cup, plate, spoon, and saucer; and each picture's annotation is also converted into a txt file in YOLO format.
4. The fully automatic tableware sorting method based on visual recognition according to claim 1, characterized in that: in step S4 the mechanical arm is controlled as follows: according to the class information of the recognized tableware, the target placement location of the mechanical arm is determined; according to the size information of the recognized tableware, the opening width of the mechanical arm's gripper is determined; and according to the location information of the recognized tableware, the starting position of the mechanical arm is determined.
5. The fully automatic tableware sorting method based on visual recognition according to claim 4, characterized in that: according to the location information of the recognized tableware, the starting position of the mechanical arm is calculated with the following formula:
starting position of the mechanical arm = target placement location − (initial position of the recognized tableware + mechanical arm travel time × conveyor belt speed) × precision trim.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811498303.5A CN109684942A (en) | 2018-12-08 | 2018-12-08 | A fully automatic tableware sorting method based on visual recognition
Publications (1)
Publication Number | Publication Date |
---|---|
CN109684942A true CN109684942A (en) | 2019-04-26 |
Family
ID=66187166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811498303.5A Pending CN109684942A (en) | 2018-12-08 | 2018-12-08 | A kind of Full-automatic tableware method for sorting of view-based access control model identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109684942A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111601087A (en) * | 2020-05-25 | 2020-08-28 | 广东智源机器人科技有限公司 | Visual inspection equipment and processing apparatus of tableware |
CN112170233A (en) * | 2020-09-01 | 2021-01-05 | 燕山大学 | Small part sorting method and system based on deep learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018076776A1 (en) * | 2016-10-25 | 2018-05-03 | 深圳光启合众科技有限公司 | Robot, robotic arm and control method and device thereof |
CN108509860A (en) * | 2018-03-09 | 2018-09-07 | 西安电子科技大学 | Hoh Xil Tibetan antelope detection method based on convolutional neural networks
CN108875669A (en) * | 2018-06-28 | 2018-11-23 | 武汉市哈哈便利科技有限公司 | A kind of commodity identification technology merged based on visible light with infrared image |
- 2018-12-08: CN CN201811498303.5A patent/CN109684942A/en, active, Pending
Non-Patent Citations (2)
Title |
---|
Zhang Wenyong et al., "Tableware sorting system based on LabVIEW machine vision" (基于LabVIEW机器视觉的餐具分拣系统), Computer Science (《计算机科学》) *
Yuan Lihao, Zan Yingfei, Zhong Shenghua, Zhu Haitao, "Autonomous recognition of small underwater targets based on YOLOv3" (基于YOLOv3的水下小目标自主识别), Ocean Engineering Equipment and Technology (《海洋工程装备与技术》) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xu et al. | Light-YOLOv3: fast method for detecting green mangoes in complex scenes using picking robots | |
CN110390691A (en) | A kind of ore scale measurement method and application system based on deep learning | |
CN109117836B (en) | Method and device for detecting and positioning characters in natural scene based on focus loss function | |
CN108564094B (en) | Material identification method based on combination of convolutional neural network and classifier | |
CN105574550A (en) | Vehicle identification method and device | |
CN111768365B (en) | Solar cell defect detection method based on convolution neural network multi-feature fusion | |
CN110460782A (en) | Information collecting device, method, crusing robot and storage medium | |
WO2019114380A1 (en) | Wood board identification method, machine learning method and device for wood board identification, and electronic device | |
CN112926405A (en) | Method, system, equipment and storage medium for detecting wearing of safety helmet | |
CN105654066A (en) | Vehicle identification method and device | |
CN102385592B (en) | Image concept detection method and device | |
CN113128335B (en) | Method, system and application for detecting, classifying and finding micro-living ancient fossil image | |
CN104881675A (en) | Video scene identification method and apparatus | |
CN110598693A (en) | Ship plate identification method based on fast-RCNN | |
CN101441721A (en) | Device and method for counting overlapped circular particulate matter | |
CN108764018A (en) | A kind of multitask vehicle based on convolutional neural networks recognition methods and device again | |
CN109684942A (en) | A fully automatic tableware sorting method based on visual recognition | |
CN111340022A (en) | Identity card information identification method and device, computer equipment and storage medium | |
CN110059539A (en) | A kind of natural scene text position detection method based on image segmentation | |
CN110688955A (en) | Building construction target detection method based on YOLO neural network | |
CN112232327A (en) | Anti-nuclear antibody karyotype interpretation method and device based on deep learning | |
CN109740656A (en) | A kind of ore method for separating based on convolutional neural networks | |
CN108133235A (en) | A kind of pedestrian detection method based on neural network Analysis On Multi-scale Features figure | |
CN114359552A (en) | Instrument image identification method based on inspection robot | |
CN109614994A (en) | A kind of tile typology recognition methods and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190426 |