CN113680695A - Robot-based machine vision garbage sorting system - Google Patents
Robot-based machine vision garbage sorting system
- Publication number
- CN113680695A (application CN202110976991.7A)
- Authority
- CN
- China
- Prior art keywords
- garbage
- image
- robot
- joint
- gray
- Prior art date
- Legal status
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
- B07C5/3412—Sorting according to other particular properties according to a code applied to the object which indicates a property of the object, e.g. quality class, contents or incorrect indication
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/36—Sorting apparatus characterised by the means used for distribution
- B07C5/361—Processing or control devices therefor, e.g. escort memory
- B07C5/362—Separating or distributor mechanisms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/36—Sorting apparatus characterised by the means used for distribution
- B07C5/38—Collecting or arranging articles in groups
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Abstract
The invention discloses a robot-based machine vision garbage sorting system comprising an IRB1410 robot workstation, a conveying device for garbage to be sorted, a vision system, an MCGS human-machine interface and three types of garbage collection boxes, namely a harmful garbage box, a recyclable garbage box and a dry garbage box. The sorting rate of the system exceeds 97.0%, showing that the system achieves high accuracy and feasibility and has practical value for wider engineering use.
Description
Technical Field
The invention belongs to the field of garbage classification.
Background
In recent years, robot technology and equipment built around machine vision have been applied ever more widely in intelligent manufacturing; working processes have become more automated and intelligent, working environments have improved, and product precision has increased. At the same time, garbage classification is a long-standing problem affecting daily life and environmental governance, and it needs to be addressed by scientific and technical means. Most garbage arises after circulation commodities are consumed, and their outer packages carry a bar code approved and issued by the China Article Numbering Center. By visually recognizing the bar code, obtaining the product name, matching it against a training set, outputting a garbage classification result and transmitting the data to the end of the robot execution unit to realize sorting, an important theoretical and application basis is provided for intelligent research on garbage classification.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, the invention provides a robot-based machine vision garbage sorting system with high precision.
The technical scheme is as follows: to achieve the above purpose, the robot-based machine vision garbage sorting system comprises an IRB1410 robot workstation, a conveying device for garbage to be sorted, a vision system, an MCGS human-machine interface and three types of garbage collection boxes, namely a harmful garbage box, a recyclable garbage box and a dry garbage box;
when the conveying device delivers the commodity garbage to be sorted to the acquisition area of the visual detection platform, a sensor sends an acquisition signal and the vision system acquires an outline image, a bar code image and position information; the identified bar code number is matched against a commodity barcode database and a garbage classification training set; the pose and classification result are then transmitted to the robot controller, which sends the image pose information to the robot end; the position and posture of each joint are solved from the joint transformation matrices, the joint parameters are obtained by an inverse kinematics method, the joint motors are driven, and a PROC program is called to grab the garbage to be sorted and place it into the corresponding garbage bin; the running speed of the robot is set to match the takt of the garbage to be sorted.
Further, the kinematics equation used by the inverse kinematics method is as follows:

T06 = A1·A2·A3·A4·A5·A6 = [n o a p; 0 0 0 1]

in the above formula: T06 represents the robot end pose matrix; Ai represents the pose transformation matrix from joint i to joint i+1; n represents the vector in the x-axis direction; o represents the vector in the y-axis direction; a represents the vector in the z-axis direction. With the T06 matrix and the link parameters known, the rotation angles θ1–θ6 of each joint link are solved from

Ai = Rot(Z, θi)·Trans(0, 0, di)·Trans(ai, 0, 0)·Rot(X, αi)

The joint angles θ1, θ2, θ3, θ4, θ5, θ6 are determined by an analytical method; after the pose of the garbage is obtained through visual positioning, sorting is realized by executing the parameters of each joint. In the above formulas: Ai represents the pose transformation matrix of joint i; Rot represents a rotation operator; Trans represents a translation operator; i represents the robot joint number; ai represents the link length; αi represents the link twist angle; di represents the link offset; θi represents the link angle.
Furthermore, the vision system consists of an image acquisition device, an image processing system, a robot sorting workstation and a control and communication transmission system; the image acquisition device consists of a light source, a lens and a camera, where the light source is annular white light and the camera has 13,000 pixels; the image processing system uses the Halcon machine vision processing platform, converts the processing result to a Winform window in VS2010 through C# hybrid programming, and transmits it to the MCGS interface over TCP/IP for display and counting; in operation, when circulation commodities of different types pass the photoelectric sensor, the conveyor belt triggers the sensor output, the camera shoots in real time and the image is sent to the Halcon processing platform; the appearance and bar code information on the package are extracted through RGB-to-gray conversion, HSV conversion, threshold segmentation, connected-domain closing operation, feature extraction, database matching and decoding, and display; the solving algorithm is exported to the VS2010 window in C# format, the category is displayed and counted synchronously on the human-machine interface, and the robot finishes the sorting;
before image acquisition, the light source must be adjusted to match the camera's target area so that interference from the conveying belt with the photographing effect is eliminated, and the photographing frame rate must be greater than the movement frequency of the conveying belt; the acquired RGB image is converted to a gray image and an HSV image for color discrimination and feature extraction; feature extraction selects a distinct gray image from the obtained HSV image, extracts the contour and bar code region after threshold segmentation and connected-domain operations, compares the obtained values with the known database and matches the optimal value in the training set;
the grayscale conversion mechanism is as follows:
g(x,y)=T[f(x,y)]
in the formula: f (x, y) represents a digital image to be processed; g (x, y) represents the processed digital image; t defines the operation of f;
threshold segmentation:
let ni be the number of pixels with gray value i ∈ [0, L-1] and N the total number of pixels; then:
the probability of each gray value occurring is pi = ni / N;
for pi we have Σ(i=0..L-1) pi = 1;
the image pixels are divided into two classes A and B by a threshold T, where A consists of the pixels with gray values in [0, T-1] and B of those in [T, L-1], with probabilities respectively PA = Σ(i=0..T-1) pi and PB = Σ(i=T..L-1) pi;
the average gray of the whole image is denoted u, and the average grays of regions A and B are uA = (Σ(i=0..T-1) i·pi)/PA and uB = (Σ(i=T..L-1) i·pi)/PB, with u = PA·uA + PB·uB;
the total between-region variance is σ² = PA·(uA − u)² + PB·(uB − u)²;
letting T take each value in the interval [0, L-1], the T that maximizes σ² is the optimal region threshold and gives the best image processing effect; the first-order differential property of the Gaussian function is used to convert the edge detection problem into finding the maxima of a detection function, obtaining a compromise between noise suppression and edge detection;
G(x,y)=f(x,y)*H(x,y)
in the formula: l-1 represents the maximum gray value; n represents a gray value; p represents a probability; u represents an average gray;representing the total variance of the region; f denotes image data, H denotes a gaussian function with coefficients omitted, G denotes a filtered smoothed image, (x, y) denotes pixel point position coordinate values, and σ denotes standard deviation.
Beneficial effects: the invention optimizes the technical process of vision system image processing, including image acquisition, gray conversion, threshold segmentation, connected-domain operation, and matching and decoding; adopts the Canny algorithm to improve edge information processing; designs a Winform window and an MCGS interface; and outputs the analysis result to the robot through upper-computer serial port communication so that the robot end executes accurate sorting. Tests show that the sorting rate of the system exceeds 97.0%, that it achieves high accuracy and feasibility, and that it has practical value for wider engineering use.
Drawings
FIG. 1 is a schematic diagram of a system architecture;
FIG. 2 is a system operating schematic;
FIG. 3 is a schematic diagram of the movement of an IRB1410 mechanism and coordinate systems of joints;
FIG. 4 is a visual processing flow;
FIG. 5 is an image processing flow;
FIG. 6 is a communication debug interface;
In FIG. 1: 1 is the IRB1410 robot controller; 2, the garbage to be processed; 3, the conveying line; 4, the robot body; 5, the clamping jaw; 6, the camera; 7, the light source; 8, the communication line; 9, the PC; 10, the harmful garbage bin; 11, the recyclable garbage bin; 12, the dry garbage bin.
Detailed Description
The design mainly comprises an IRB1410 robot workstation, a conveying device for garbage to be sorted, a vision acquisition-recognition-communication platform, an MCGS human-machine interface and three types of garbage collection boxes (recyclable garbage, dry garbage and harmful garbage); the robot workstation comprises a body, a controller and a teach pendant, as shown in FIG. 1.
When the conveying device delivers the circulation commodity garbage to be sorted to the acquisition area of the visual detection platform, the sensor sends an acquisition signal, the vision system acquires an outline image, a bar code image and position information, and the identified bar code number is matched against the circulation commodity bar code database and the garbage classification training set. The pose (position and attitude) and the classification result are transmitted to the robot controller, the parameters of each joint are calculated by inverse kinematics, and a PROC program is called to grab the garbage to be sorted and place it into the corresponding garbage bin; the robot running speed can be set so that it matches the takt of the garbage to be sorted without pausing. The working principle of the system is shown in FIG. 2.
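For orientation only, the control flow just described can be condensed into the following minimal Python sketch; the helper functions (acquire_image, classify_by_barcode, pick_and_place) and the bin numbering are illustrative assumptions standing in for the vision system and the robot PROC routine, not the actual implementation.

```python
# Illustrative dispatch loop; acquire_image(), classify_by_barcode() and
# pick_and_place() are hypothetical stand-ins for the vision system and the
# robot PROC routine described above.
BINS = {"harmful": 10, "recyclable": 11, "dry": 12}  # bin numbering follows FIG. 1

def sort_one_item(acquire_image, classify_by_barcode, pick_and_place):
    """Handle one item after the photoelectric sensor sends an acquisition signal."""
    contour, barcode, pose = acquire_image()       # outline image, bar code image, position
    category = classify_by_barcode(barcode)        # match against barcode DB and training set
    pick_and_place(pose, BINS[category])           # grab the item and drop it into its bin
    return category
```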
IRB1410 robot inverse kinematics solution
In this design, the pose information of the image is sent to the robot end through the controller, and the position and posture of each joint are solved from the joint transformation matrices; an inverse kinematics method is therefore adopted for the solution, and the motors of each joint are driven. The mechanism motion diagram and the coordinate system of each joint are shown in FIG. 3. The IRB1410 robot link parameters are given in the link parameter table, in which joint i is the robot joint number, ai the link length, αi the link twist angle, di the link offset and θi the link angle.
The kinematics equation used for the inverse solution is as follows:

T06 = A1·A2·A3·A4·A5·A6 = [n o a p; 0 0 0 1]

in the formula: T06 — robot end pose matrix; Ai — pose transformation matrix from joint i to joint i+1; n — vector in the X-axis direction; o — vector in the Y-axis direction; a — vector in the Z-axis direction.

With the T06 matrix and the link parameters known, the rotation angles θ1–θ6 of each joint are solved from

Ai = Rot(Z, θi)·Trans(0, 0, di)·Trans(ai, 0, 0)·Rot(X, αi)

in the formula: Ai — pose transformation matrix of joint i; Rot — rotation operator; Trans — translation operator; X — the X coordinate axis; Z — the Z coordinate axis.
The joint angles were determined by analytical methods:
θ2 = arctan(c1px + s1py)/px
θ3 = s2(c1px + s1py) + c2pz
θ6 = arctan(s6/c6)
in the formulas: px, py, pz represent the components of the end position p in the X, Y and Z directions respectively; ci represents the cosine function cos θi; si represents the sine function sin θi. In conclusion, the pose of the garbage is acquired through visual positioning and transmitted to R6, and sorting is realized through the execution parameters of each joint.
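As a purely illustrative aid to the joint transformation matrices above, the following Python/NumPy sketch builds the standard link transform Ai and chains it into the end pose T06; the values in DH_TABLE are placeholders, not the actual IRB1410 link parameters, and the analytical inverse solution itself is not reproduced here.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single-link transform Ai = Rot(Z, theta)·Trans(0,0,d)·Trans(a,0,0)·Rot(X, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(thetas, dh_table):
    """End pose T06 as the product A1·A2·...·A6 of the joint transforms."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(thetas, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # columns hold the n, o, a vectors and the position p

# Placeholder (d, a, alpha) per joint -- NOT the actual IRB1410 parameters.
DH_TABLE = [(0.475, 0.15, -np.pi / 2), (0.0, 0.6, 0.0), (0.0, 0.12, -np.pi / 2),
            (0.72, 0.0, np.pi / 2), (0.0, 0.0, -np.pi / 2), (0.085, 0.0, 0.0)]

# Example: pose = forward_kinematics(np.zeros(6), DH_TABLE)
```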
Visual system and working principle
The vision system is composed of an image acquisition device, an image processing system, a robot sorting workstation and a control and communication transmission system. The image acquisition device comprises a light source, a lens and a camera; the light source is annular white light and the camera has 13,000 pixels. The image processing system uses the Halcon machine vision processing platform, converts the processing result to a Winform window in VS2010 through C# hybrid programming, and communicates with the MCGS interface over TCP/IP for display and counting, as shown in FIG. 4.
In operation, when circulation commodities of different types pass the photoelectric sensor, the conveyor belt triggers the sensor output, the camera shoots in real time, and the image is sent to the Halcon processing platform. The appearance and bar code information on the package are extracted through RGB-to-gray conversion, HSV conversion, threshold segmentation, connected-domain closing operation, feature extraction, database matching and decoding, and display; the solving algorithm is exported to the VS2010 window in C# format, the category is displayed and counted synchronously on the human-machine interface, and the robot finishes the sorting.
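An OpenCV-based Python sketch of the same pipeline is given below for illustration; the deployed system uses Halcon with C#, so the operator names and parameters here are assumptions rather than the patented implementation.

```python
import cv2

def extract_contour_and_barcode(bgr_image):
    """Rough OpenCV equivalent of the described pipeline: gray/HSV conversion,
    Otsu thresholding, closing on the connected domain, contour extraction."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)        # kept for colour discrimination
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Bar code decoding would follow here, e.g. with a library such as pyzbar
    # (an assumption -- the patented system decodes EAN codes on the Halcon platform).
    return contours, hsv
```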
Image processing
In these links, the light source must be adjusted to match the camera's target area before image acquisition so that interference from the conveying belt with the photographing effect is eliminated, and the photographing frame rate must be greater than the movement frequency of the conveying belt. The acquired RGB image is converted to a gray image and an HSV image for color discrimination and feature extraction. The key point of feature extraction is to select a relatively distinct gray image from the obtained HSV image and to extract the contour and bar code region after threshold segmentation and connected-domain operations; the difficulty lies in fitting the contour and decoding the EAN bar code, after which the obtained values are compared with the known database to match the optimal value in the training set. The mechanism is as follows:
gray scale conversion
g(x,y)=T[f(x,y)]
In the formula: f(x, y) — the digital image to be processed; g(x, y) — the processed digital image; T — the operation defined on f.
Threshold segmentation (maximum between-class variance method)
Let ni denote the number of pixels with gray value i ∈ [0, L-1] and N the total number of pixels; then:

the probability of each gray value occurring is

pi = ni / N

and for pi we have

Σ(i=0..L-1) pi = 1

The image pixels are divided into two classes A and B by a threshold T, where A consists of the pixels with gray values in [0, T-1] and B of those in [T, L-1], with probabilities respectively

PA = Σ(i=0..T-1) pi,  PB = Σ(i=T..L-1) pi

The average gray of the whole image is denoted u, and the average grays of regions A and B are:

uA = (Σ(i=0..T-1) i·pi)/PA,  uB = (Σ(i=T..L-1) i·pi)/PB,  u = PA·uA + PB·uB

The total between-region variance is:

σ² = PA·(uA − u)² + PB·(uB − u)²

in the formula: L-1 — the maximum gray value; ni — the number of pixels with gray value i; p — probability; u — average gray; σ² — the total between-region variance.

Letting T take each value in the interval [0, L-1], the T that maximizes σ² is the optimal region threshold and gives the best image processing effect.
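A minimal NumPy sketch of this maximum between-class variance search (assuming 8-bit gray images, L = 256) might look as follows; it is illustrative only.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold T that maximises the between-class variance described above."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                           # p_i = n_i / N
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        pa, pb = p[:t].sum(), p[t:].sum()           # class probabilities P_A, P_B
        if pa == 0 or pb == 0:
            continue
        ua = (np.arange(t) * p[:t]).sum() / pa      # class means u_A, u_B
        ub = (np.arange(t, 256) * p[t:]).sum() / pb
        var = pa * pb * (ua - ub) ** 2              # equivalent form of the between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```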
Canny edge
The Canny edge detection operator has good edge detection performance; it uses the first-order differential property of the Gaussian function to convert the edge detection problem into finding the maxima of a detection function, and it achieves a good compromise between noise suppression and edge detection [8].
G(x,y)=f(x,y)*H(x,y)
In the formula: f — the image data; H — the Gaussian function with coefficients omitted, H(x, y) = exp(−(x² + y²)/(2σ²)) up to a constant factor; G — the filtered smoothed image; (x, y) — the pixel position coordinate values; σ — the standard deviation.
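For illustration, a short OpenCV sketch of this smoothing-then-detection step is given below; the kernel size and hysteresis thresholds are assumptions, not values from the patent.

```python
import cv2

def canny_edges(gray, sigma=1.0, low=50, high=150):
    """Smooth with the Gaussian H (coefficients omitted in the text), then detect edges.
    The threshold values are illustrative assumptions."""
    ksize = int(2 * round(3 * sigma) + 1)                       # odd kernel wide enough for 3*sigma
    smoothed = cv2.GaussianBlur(gray, (ksize, ksize), sigma)    # G = f * H
    return cv2.Canny(smoothed, low, high)
```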
The bar code image processing procedure for the garbage outer package is shown in FIG. 5; the result text can be correctly read and displayed.
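Since the packages carry EAN codes, a small self-contained sketch of the EAN-13 check-digit verification that would precede the database matching is shown below; the database lookup itself is not part of this sketch.

```python
def ean13_is_valid(code):
    """Verify the EAN-13 check digit before matching the code against the
    commodity bar code database (the lookup itself is not shown here)."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3, 1, 3, ... over the first 12 digits.
    weighted = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - weighted % 10) % 10 == digits[12]

# Example: ean13_is_valid("4006381333931") -> True
```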
MCGS interface design
To facilitate visualization of the results, the image analysis results are displayed in real time on the MCGS human-machine interface [9]. The algorithm on the Halcon platform is exported in C# format, synchronized to the Winform window of VS2010, and downloaded to the MCGS human-machine interface through the TCP/IP communication protocol to visualize the garbage categories and quantities, as shown in FIG. 6.
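A minimal Python sketch of pushing such a result over TCP/IP is shown below; the host address, port and payload format are assumptions for illustration, since the patent only states that the Winform window and the MCGS interface communicate via TCP/IP.

```python
import json
import socket

def send_result(category, count, host="192.168.0.10", port=3000):
    """Push one classification result to the HMI over TCP/IP.
    Host, port and the JSON payload are illustrative assumptions."""
    payload = json.dumps({"category": category, "count": count}).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(payload)
```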
test materials and environments
The test samples came from circulation commodities sold in supermarkets, each with a bar code on the outer package. Based on the garbage classification standard, 4765 items of garbage information were obtained from the circulation commodity database in the tests. Because the base numbers of the garbage types differ greatly, 50 kinds of recyclable garbage, 30 kinds of dry garbage and 20 kinds of harmful garbage were selected, each with length, width and height (or diameter) smaller than 100 mm, and they were randomly mixed and loaded into the test tray.
robot system initialization
Before debugging, the robot is initialized. The calibration position of the robot is set so that the Hall sensor signal is 0 and other positions are 1; a new tool coordinate system Tool 1 is built according to the right-hand rule, with the origin located at the center of the clamping jaw and the Z axis perpendicular to the end face of the flange; the scales of each robot axis are calibrated as listed in the table below.
Test results
The conveyor belt speed was set to 100 mm/s, and the results of randomly sorting the 100 kinds of garbage in the 3 categories are shown in Table 3, indicating a robot sorting rate of 97.0% or more in the test. The average matching time between the collected bar code and the database was 0.002 s, showing that the design has high accuracy and stability. The test results are given in the table below.
An ABB robot motion control platform built around Halcon machine vision sorts circulation commodity outer-package garbage in real time. The system composition and working principle were introduced, the inverse kinematics equations of the IRB1410 robot body were solved, and the technical process of vision system image processing was optimized, including image acquisition, gray conversion, threshold segmentation, connected-domain operation, and matching and decoding; the Canny algorithm was adopted to improve edge information processing, a Winform window and an MCGS interface were designed, the analysis result is output to the robot through upper-computer serial port communication, and the robot end executes accurate sorting. Tests show that the sorting rate of the system exceeds 97.0%, that it achieves high accuracy and feasibility, and that it has practical value for wider engineering use.
The above description covers only the preferred embodiments of the present invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.
Claims (3)
1. A robot-based machine vision garbage sorting system, comprising an IRB1410 robot workstation, a conveying device for garbage to be sorted, a vision system and three types of garbage collection boxes, the three types being a harmful garbage box, a recyclable garbage box and a dry garbage box;
characterized in that: when the conveying device delivers the commodity garbage to be sorted to the acquisition area of the visual detection platform, a sensor sends an acquisition signal and the vision system acquires an outline image, a bar code image and position information; the identified bar code number is matched against a commodity barcode database and a garbage classification training set; the pose and classification result are then transmitted to the robot controller, which sends the image pose information to the robot end; the position and posture of each joint are solved from the joint transformation matrices, the joint parameters are obtained by an inverse kinematics method, the joint motors are driven, and a PROC program is called to grab the garbage to be sorted and place it into the corresponding garbage bin; the running speed of the robot is set to match the takt of the garbage to be sorted.
2. The robot-based machine vision garbage sorting system of claim 1, wherein the kinematics equation used by the inverse kinematics method is as follows:

T06 = A1·A2·A3·A4·A5·A6 = [n o a p; 0 0 0 1]

in the above formula: T06 represents the robot end pose matrix; Ai represents the pose transformation matrix from joint i to joint i+1; n represents the vector in the x-axis direction; o represents the vector in the y-axis direction; a represents the vector in the z-axis direction; with the T06 matrix and the link parameters known, the rotation angles θ1–θ6 of each joint link are solved from

Ai = Rot(Z, θi)·Trans(0, 0, di)·Trans(ai, 0, 0)·Rot(X, αi)

the joint angles θ1, θ2, θ3, θ4, θ5, θ6 are determined by an analytical method; after the pose of the garbage is obtained through visual positioning, sorting is realized by executing the parameters of each joint; in the above formulas: Ai represents the pose transformation matrix of joint i; Rot represents a rotation operator; Trans represents a translation operator; i represents the robot joint number; ai represents the link length; αi represents the link twist angle; di represents the link offset; θi represents the link angle.
3. The robot-based machine vision garbage sorting system of claim 1, wherein: the vision system consists of an image acquisition device, an image processing system, a robot sorting workstation and a control and communication transmission system; the image acquisition device consists of a light source, a lens and a camera, where the light source is annular white light and the camera has 13,000 pixels; the image processing system uses the Halcon machine vision processing platform, converts the processing result to a Winform window in VS2010 through C# hybrid programming, and transmits it to the MCGS interface over TCP/IP for display and counting; in operation, when circulation commodities of different types pass the photoelectric sensor, the conveyor belt triggers the sensor output, the camera shoots in real time and the image is sent to the Halcon processing platform; the appearance and bar code information on the package are extracted through RGB-to-gray conversion, HSV conversion, threshold segmentation, connected-domain closing operation, feature extraction, database matching and decoding, and display; the solving algorithm is exported to the VS2010 window in C# format, the category is displayed and counted synchronously on the human-machine interface, and the robot finishes the sorting;
before image acquisition, the light source must be adjusted to match the camera's target area so that interference from the conveying belt with the photographing effect is eliminated, and the photographing frame rate must be greater than the movement frequency of the conveying belt; the acquired RGB image is converted to a gray image and an HSV image for color discrimination and feature extraction; feature extraction selects a distinct gray image from the obtained HSV image, extracts the contour and bar code region after threshold segmentation and connected-domain operations, compares the obtained values with the known database and matches the optimal value in the training set;
the grayscale conversion mechanism is as follows:
g(x,y)=T[f(x,y)]
in the formula: f (x, y) represents a digital image to be processed; g (x, y) represents the processed digital image; t defines the operation of f;
threshold segmentation:
let ni be the number of pixels with gray value i ∈ [0, L-1] and N the total number of pixels, then:
the probability of each gray value occurring is pi = ni / N;
for pi we have Σ(i=0..L-1) pi = 1;
the image pixels are divided into two classes A and B by a threshold T, wherein A consists of the pixels with gray values in [0, T-1] and B of those in [T, L-1], with probabilities respectively PA = Σ(i=0..T-1) pi and PB = Σ(i=T..L-1) pi;
the average gray of the whole image is denoted u, and the average grays of regions A and B are uA = (Σ(i=0..T-1) i·pi)/PA and uB = (Σ(i=T..L-1) i·pi)/PB, with u = PA·uA + PB·uB;
the total between-region variance is σ² = PA·(uA − u)² + PB·(uB − u)²;
letting T take each value in the interval [0, L-1], the T that maximizes σ² is the optimal region threshold and gives the best image processing effect; the first-order differential property of the Gaussian function is used to convert the edge detection problem into finding the maxima of a detection function, obtaining a compromise between noise suppression and edge detection;
G(x,y)=f(x,y)*H(x,y)
in the formula: l-1 represents the maximum gray value; n represents a gray value; p represents a probability; u represents an average gray;representing the total variance of the region; f denotes image data, H denotes a gaussian function with coefficients omitted, G denotes a filtered smoothed image, (x, y) denotes pixel point position coordinate values, and σ denotes standard deviation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110976991.7A CN113680695A (en) | 2021-08-24 | 2021-08-24 | Robot-based machine vision garbage sorting system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110976991.7A CN113680695A (en) | 2021-08-24 | 2021-08-24 | Robot-based machine vision garbage sorting system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113680695A true CN113680695A (en) | 2021-11-23 |
Family
ID=78582228
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110976991.7A Pending CN113680695A (en) | 2021-08-24 | 2021-08-24 | Robot-based machine vision garbage sorting system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113680695A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115488876A (en) * | 2022-06-22 | 2022-12-20 | 湖北商贸学院 | Robot sorting method and device based on machine vision |
-
2021
- 2021-08-24 CN CN202110976991.7A patent/CN113680695A/en active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103347661A (en) * | 2010-10-21 | 2013-10-09 | 泽恩机器人技术有限公司 | Method for filtering of target object image in robot system |
CN105772403A (en) * | 2016-01-26 | 2016-07-20 | 耿春茂 | Waste classification detecting system |
US20210178432A1 (en) * | 2017-06-30 | 2021-06-17 | Boe Technology Group Co., Ltd. | Trash sorting and recycling method, trash sorting device, and trash sorting and recycling system |
CN109344894A (en) * | 2018-09-28 | 2019-02-15 | 广州大学 | Garbage classification recognition methods and device based on Multi-sensor Fusion and deep learning |
CN110125036A (en) * | 2019-04-25 | 2019-08-16 | 广东工业大学 | A kind of self-identifying sorting system and method based on template matching |
CN110976338A (en) * | 2019-11-11 | 2020-04-10 | 浙江大学 | Test paper sorting system and method based on machine vision |
CN111077150A (en) * | 2019-12-30 | 2020-04-28 | 重庆医科大学附属第一医院 | Intelligent excrement analysis method based on computer vision and neural network |
CN111260616A (en) * | 2020-01-13 | 2020-06-09 | 三峡大学 | Insulator crack detection method based on Canny operator two-dimensional threshold segmentation optimization |
CN112149573A (en) * | 2020-09-24 | 2020-12-29 | 湖南大学 | Garbage classification and picking robot based on deep learning |
CN112278636A (en) * | 2020-10-15 | 2021-01-29 | 张良虎 | Garbage classification recycling method, device and system and storage medium |
Non-Patent Citations (1)
Title |
---|
冷玉珊等: "基于MATLAB六自由度串联机器人运动学分析", 《制造业自动化》, vol. 42, no. 9, 30 September 2020 (2020-09-30), pages 56 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110314854B (en) | Workpiece detecting and sorting device and method based on visual robot | |
CN109785317B (en) | Automatic pile up neatly truss robot's vision system | |
CN110580725A (en) | Box sorting method and system based on RGB-D camera | |
CN105930854A (en) | Manipulator visual system | |
CN111368852A (en) | Article identification and pre-sorting system and method based on deep learning and robot | |
CN111604909A (en) | Visual system of four-axis industrial stacking robot | |
CN105701476A (en) | Machine vision-based automatic identification system and method for production line products | |
CN111266304A (en) | Coal briquette identification, detection and sorting system | |
CN208574964U (en) | A kind of automatic identification and sorting cigarette case device based on machine vision | |
CN203196911U (en) | Industrial CCD (Charge Coupled Device) based visual identification sorting system | |
CN103308524A (en) | PCB automatic optical inspection system | |
WO2022100145A1 (en) | Infusion bag foreign matter identification method and detection method based on artificial intelligence visual technology | |
CN110640741A (en) | Grabbing industrial robot with regular-shaped workpiece matching function | |
CN112497219A (en) | Columnar workpiece classification positioning method based on target detection and machine vision | |
CN113680695A (en) | Robot-based machine vision garbage sorting system | |
CN105154988A (en) | Apparatus automatically extracting down feather and extracting method | |
CN208092786U (en) | A kind of the System of Sorting Components based on convolutional neural networks by depth | |
CN114758236A (en) | Non-specific shape object identification, positioning and manipulator grabbing system and method | |
CN111487192A (en) | Machine vision surface defect detection device and method based on artificial intelligence | |
CN111060518A (en) | Stamping part defect identification method based on instance segmentation | |
CN210233036U (en) | Binocular artificial intelligence arm teaching device | |
Yang et al. | Research on an Automatic Sorting System Based on Machine Vision | |
CN102200780A (en) | Method for realizing 3H charge coupled device (CCD) visual industrial robot | |
CN112156992A (en) | Machine vision teaching innovation platform | |
Patil et al. | Blob detection technique using image processing for identification of machine printed characters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||