CN115872018B - Electronic tray labeling correction system and method based on 3D visual sensing - Google Patents
Abstract
The invention discloses an electronic material tray labeling correction system and method based on 3D vision sensing. The system comprises a control system together with a first feeding mechanism, a 3D vision sensor, a spider hand robot, a second feeding mechanism and a plurality of labeling devices, all of which are in communication connection with the control system. The first feeding mechanism conveys a tray box filled with electronic trays to a preset position. The 3D vision sensor is arranged above a fixed position of the first feeding mechanism; it photographs the tray box and the electronic trays inside it and acquires point cloud data of the electronic tray surfaces from the images. The control system receives the images and point cloud data uploaded by the 3D vision sensor and controls the operation of the first feeding mechanism, the spider hand robot, the second feeding mechanism and the labeling devices. The invention improves the production efficiency of the production line, improves the correction accuracy and broadens the range of application.
Description
Technical Field
The invention belongs to the technical field of vision-based deviation correction and detection, and relates to an electronic material tray labeling correction system and method based on 3D vision sensing.
Background
Electronic materials are the most basic elements of electronic products, and surface mount technology (SMT) is generally used to mount electronic components on a PCBA substrate. Electronic materials come in many varieties and are usually wound onto a plastic disc for storage; in the electronics field this disc is called an electronic material tray. Each tray carries labels so that users can distinguish and use the materials. There are generally two labeling positions, each with a specified location: typically the incoming tray already carries one label, and the second label must be applied at the other position, with correction processing required to guarantee the accuracy of that position. A traditional 2D algorithm only needs to identify the existing label, since the second position is known relative to it, and a servo correction mechanism is added so that the labeling machine can label accurately. However, a traditional 2D vision algorithm must capture a static image of the whole tray before calculating the labeling position. It therefore requires a large field of view, is easily disturbed by ambient light, and needs the material to be held down manually; the workflow is cumbersome, wastes labor and slows the overall line cycle, so it cannot meet the demands of rapid production, has difficulty distinguishing trays of different sizes, and may even require additional auxiliary mechanisms.
Aiming at these problems, Chinese patent CN110002070B discloses full-automatic labeling equipment for electronic elements based on visual deviation correction. It comprises a first, second and third transmission mechanism connected in sequence and a labeling device positioned on the second transmission mechanism; the first transmission mechanism carries a scanning assembly and the third carries a detection assembly, the scanning assembly being in communication connection with the labeling device. The scanning assembly acquires a mother code on a circuit board, the labeling device applies the character-code label corresponding to that mother code to the board, and the detection assembly then performs visual deviation-correction detection on the position and polarity of the applied label. Automatic labeling of circuit boards is thereby realized, with advantages such as high efficiency and high labeling precision.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an electronic material tray labeling correction system and method based on 3D vision sensing, which improve the production efficiency of the production line, improve the correction accuracy and broaden the application range.
The technical scheme adopted by the invention is as follows:
an electronic tray labeling deviation correcting system based on 3D visual sensing comprises a control system, a first feeding mechanism, a 3D visual sensor, a spider hand robot, a second feeding mechanism and a plurality of labeling devices, wherein the first feeding mechanism, the 3D visual sensor, the spider hand robot, the second feeding mechanism and the labeling devices are in communication connection with the control system;
The first feeding mechanism conveys a tray box filled with the electronic tray to a preset position;
The 3D vision sensor is arranged above a fixed position of the first feeding mechanism and is used for shooting the tray box and the electronic tray images in the tray box and acquiring point cloud data of the surface of the electronic tray in the electronic tray images;
the control system acquires the image and the point cloud data uploaded by the 3D vision sensor and controls the operation of the first feeding mechanism, the spider hand robot, the second feeding mechanism and the labeling equipment;
The spider hand robot is arranged above the first feeding mechanism corresponding to the position of the 3D vision sensor, and the spider hand robot grabs the electronic material tray onto the second feeding mechanism according to a control instruction sent by the control system and places the electronic material tray at a preset position according to a specified placing mode;
the second feeding mechanism is arranged on one side of the first feeding mechanism corresponding to the position of the 3D vision sensor, and is used for conveying the electronic material trays which are grabbed by the spider hand robot from the material tray box;
The labeling equipment is arranged at one side of the second feeding mechanism, and labels the electronic material tray according to a control instruction sent by the control system;
The control system comprises a processor module and a storage module, wherein the storage module is used for storing data materials of the electronic trays with various sizes, data of correct labeling positions of the electronic trays with various sizes and a control program, and the processor module is used for executing the control program so as to rectify the electronic trays.
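As an illustration of the storage module's role, the per-size data it holds can be modeled as a lookup keyed by nominal tray diameter. This is a hedged sketch: the record layout, diameters, tolerance and label vectors below are hypothetical, not taken from the patent.

```python
# Hypothetical template records for the storage module: one entry per tray
# size, keyed by nominal diameter in millimetres. The label vector is the
# reference direction from the tray centre to the correct label position.
# All values below are illustrative assumptions, not data from the patent.
TRAY_TEMPLATES = {
    178.0: {"label_vector": (1.0, 0.0, 0.0)},   # 7-inch tray (assumed)
    330.0: {"label_vector": (0.0, 1.0, 0.0)},   # 13-inch tray (assumed)
}

def classify_tray(measured_diameter, tolerance=5.0):
    """Match a fitted diameter against the stored templates (cf. step S3)."""
    for nominal, template in TRAY_TEMPLATES.items():
        if abs(measured_diameter - nominal) <= tolerance:
            return nominal, template
    return None, None
```

A fitted diameter of, say, 331.2 mm would match the 330 mm template within the assumed tolerance, while an unknown diameter returns no match.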
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides a storage module in the control system. Coordinate information is obtained by scanning with the 3D vision sensor; the vector from the center of the tray to be corrected to its existing label is compared with the template vector in the storage module to determine the correction angle, and the Y coordinate of the label center is compared with the Y coordinate of the template label center to determine the direction of that angle. The control system then integrates the correction data and sends a signal to the spider hand robot over the TCP/IP communication protocol for grabbing and correction. The correction procedure is therefore fast and accurate, which helps improve production efficiency, and avoids the problem of systems that merely classify and identify labeled electronic elements without providing any correction procedure, which affects efficiency on the production line;
2. After the 3D vision sensor scans the trays to be corrected and obtains surface point cloud data, the control system extracts the center coordinates and diameter and compares them with the storage module, identifying trays of different sizes. The system can therefore distinguish electronic trays of different sizes and types, avoiding the narrow application range of systems that can only identify a single electronic element.
Further, a support frame is arranged above the first feeding mechanism; the support frame comprises a four-legged support frame and a top plate fixed on top of it, and the spider hand robot and the 3D vision sensor are both mounted on the underside of the top plate.
Further, the second feeding mechanism is perpendicular to the first feeding mechanism.
Further, the first feeding mechanism conveys the electronic material trays to be rectified, which are placed in the material tray box, and the second feeding mechanism conveys the electronic material trays which are rectified and grabbed by the spider hand robot.
An electronic tray labeling correction method based on 3D vision sensing, wherein the correction method executes a control program of the storage module based on the 3D vision sensing-based electronic tray labeling correction system of claim 1, the control program comprising a main thread and a sub-thread;
the control program executed by the processor in the main thread is as follows: the control system sends an instruction to the first feeding mechanism so that it conveys the tray box into the scanning range of the 3D vision sensor, and the point cloud data of the electronic tray surface is obtained after scanning;
extracting boundary point clouds of the electronic material trays from the point cloud data, fitting, calculating center coordinates and diameters of the electronic material trays, and comparing the center coordinates and the diameters with data stored in a storage module to identify the electronic material trays with different sizes;
the control program executed by the processor in the sub-thread is as follows: according to the image obtained after the 3D vision sensor scans, the control system controls the left camera of the 3D vision sensor to expose and obtain an image with proper brightness;
Positioning the labeled position according to the image, identifying the labeled mass center through the point cloud data and calculating a three-dimensional coordinate point of the mass center;
determining a correction angle by calculating the vector from the tray center coordinate to the label center coordinate and the included angle between that vector and the template vector in the storage module, and determining the sign of the correction angle from the direction of the Y coordinate of the label center relative to the Y coordinate of the template label center, the sign representing the forward or reverse direction of correction;
The main thread sends instructions to the spider hand robot in real time over the TCP/IP communication protocol to grab the tray and correct it by the deviation angle calculated by the sub-thread.
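The main-thread/sub-thread hand-off described above can be sketched as follows. This is a minimal sketch: the queue-based hand-off, the JSON message format, and the robot's host and port are assumptions, since the patent only specifies that instructions are sent over TCP/IP.

```python
import json
import queue
import socket

# Hand-off channel between the sub-thread (angle calculation) and the
# main thread (robot communication). Assumed structure, not from the patent.
results = queue.Queue()

def sub_thread_publish(angle_deg, direction):
    # Sub-thread side: publish the computed correction angle and direction.
    results.put({"angle": angle_deg, "direction": direction})

def main_thread_dispatch(robot_host="192.168.0.10", robot_port=9000):
    # Main-thread side: forward one correction result to the spider hand
    # robot over TCP/IP (host and port are hypothetical placeholders).
    msg = results.get()
    with socket.create_connection((robot_host, robot_port), timeout=2.0) as s:
        s.sendall(json.dumps(msg).encode("utf-8"))
```

In the patent's flow, the main thread would call `main_thread_dispatch` once per tray, after the sub-thread has published its result.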
Further, the specific method of the main thread comprises the following steps: scanning, identification, classification and transmission of the correction angle;
Step S1, scanning: the electronic tray is scanned by the 3D vision sensor to obtain the point cloud data of its surface;
Step S2, identification: boundary data of the electronic tray is extracted through point cloud denoising and point cloud segmentation and fitted, and the center coordinates and diameter of the tray are calculated from the fit;
the specific steps of the identification in step S2 are as follows:
Step S21, discrete point cloud values of the electronic tray boundary are identified from the point cloud data through point cloud denoising;
Step S22, point clouds outside the discrete boundary values are eliminated through point cloud segmentation;
Step S23, circle fitting is performed on the discrete boundary values to form the point cloud data of the electronic tray;
Step S24, the center coordinates and diameter of the electronic tray are calculated from the fitted point cloud data;
Step S3, classification: the calculated center coordinates and diameter are compared with the data in the storage module to identify the electronic tray of the corresponding size.
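The patent does not name a specific denoising algorithm for steps S21/S22. As one plausible sketch, statistical outlier removal on the boundary points could look like this; the neighbour count and threshold are assumptions.

```python
import numpy as np

def denoise_boundary(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: a common stand-in for the patent's
    unspecified point cloud denoising (steps S21/S22). Drops points whose
    mean distance to their k nearest neighbours is abnormally large."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distances (fine for the small boundary sets considered here).
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)   # skip self-distance 0
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[keep]
```

A point far from the boundary cluster is discarded while the cluster itself survives, leaving cleaner input for the circle fit of step S23.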
Further, the specific method of the sub-thread comprises the following steps: coordinate calculation, vector comparison, correction angle determination and correction angle transmission;
Step S4, coordinate calculation: after the 3D vision sensor scans and obtains the point cloud, the exposure of its left camera is adjusted automatically to obtain an image of suitable brightness; the approximate position of the existing label is roughly located in the image by AI, then precisely located by image processing, and the points of the label area are calculated; through ordered point cloud mapping, the label's point cloud data range is obtained from the point cloud data, the label centroid is identified, and the three-dimensional coordinate point of the label centroid is obtained;
Step S5, vector comparison: the vector from the center of the electronic tray to be corrected to the center of the existing label is calculated and compared with the vector in the storage module to obtain the included angle, determining the correction angle;
Step S6, determining the correction direction: the Y coordinate of the label center is compared with the Y coordinate of the template label center in the storage module to determine the direction of the correction angle; if subtracting the template Y coordinate from the measured Y coordinate gives a negative value the correction angle is negative, and if it gives a positive value the correction angle is positive;
Step S7, transmitting the correction angle: based on steps S4, S5 and S6, the control system transmits the correction angle and direction calculated by the sub-thread to the spider hand robot over the TCP/IP communication protocol.
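Steps S5 and S6 together yield a signed correction angle. Here is a minimal sketch; the 2D coordinate convention, degree output and clamp before `acos` are assumptions added for numerical robustness.

```python
import math

def signed_correction_angle(tray_center, label_center, tpl_label_center):
    """Sketch of steps S5-S6: angle between the measured centre-to-label
    vector and the template centre-to-label vector, signed by the
    Y-coordinate difference (measured minus template). Returns degrees."""
    # S5: measured vector (tray centre -> label centre) vs template vector.
    v = (label_center[0] - tray_center[0], label_center[1] - tray_center[1])
    t = (tpl_label_center[0] - tray_center[0], tpl_label_center[1] - tray_center[1])
    dot = v[0] * t[0] + v[1] * t[1]
    norm = math.hypot(*v) * math.hypot(*t)
    theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    # S6: direction from the Y difference; negative difference -> negative angle.
    return -theta if (label_center[1] - tpl_label_center[1]) < 0 else theta
```

For example, with the tray centre at the origin, a label found at (1, 0) against a template label at (0, 1) gives a correction of -90 degrees.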
Further, in another processing branch of the coordinate calculation in step S4, if no point cloud data or image of an electronic tray surface is obtained after the 3D vision sensor scans, the processor module sends an empty-box signal to the control system. The control system then instructs the first feeding mechanism to convey forward, moving the empty tray box out of the scanning range of the 3D vision sensor and bringing the next tray box full of trays to be corrected into range, after which the main thread and sub-thread of the control program are repeated to perform labeling correction. The empty tray box is conveyed forward until it leaves the conveyor belt and is collected by a worker.
In summary, the invention improves the production efficiency of the production line, improves the correction accuracy and broadens the application range.
Drawings
FIG. 1 is a flow chart of a deviation correcting system according to the present invention.
Fig. 2 is a schematic perspective view of a deviation correcting device according to the present invention.
Fig. 3 is a rear view of fig. 2.
Fig. 4 is a left side view of fig. 2.
In the figure, 1, spider hand robot, 2, 3D vision sensor, 3, electron charging tray, 4, second feed mechanism, 5, first feed mechanism, 6, four-legged braced frame, 7, roof, 8, charging tray case.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1 and fig. 2, an electronic tray labeling correction system based on 3D vision sensing comprises a control system, a first feeding mechanism 5, a 3D vision sensor 2, a spider hand robot 1, a second feeding mechanism 4 and a plurality of labeling devices, wherein the first feeding mechanism 5, the 3D vision sensor 2, the spider hand robot 1 and the second feeding mechanism 4 are in communication connection with the control system;
the first feeding mechanism 5 conveys a tray box 8 filled with the electronic tray 3 to a preset position;
the 3D vision sensor 2 is disposed above a fixed position of the first feeding mechanism 5, and is configured to capture images of the tray box 8 and the electronic tray 3 in the tray box 8, and obtain point cloud data of the surface of the electronic tray 3 in the images of the electronic tray 3;
The control system acquires the image and the point cloud data uploaded by the 3D vision sensor 2 and controls the operation of the first feeding mechanism 5, the spider hand robot 1, the second feeding mechanism 4 and the labeling equipment;
The spider hand robot 1 is arranged above the first feeding mechanism corresponding to the position of the 3D vision sensor 2; according to control instructions sent by the control system, the spider hand robot 1 grabs the electronic tray 3 onto the second feeding mechanism 4 and places it at a preset position in the specified orientation;
The second feeding mechanism 4 is arranged at one side of the first feeding mechanism 5 corresponding to the position of the 3D vision sensor 2, and conveys the electronic trays 3 grabbed by the spider hand robot 1 from the tray box 8;
The labeling equipment is arranged at one side of the second feeding mechanism 4 and labels the electronic tray 3 according to a control instruction sent by the control system;
The control system comprises a processor module and a storage module, wherein the storage module is used for storing data materials of the electronic trays 3 with various sizes, data of correct labeling positions of the electronic trays 3 with various sizes and a control program, and the processor module is used for executing the control program so as to rectify the deviation of the electronic trays 3.
As shown in fig. 2, fig. 3 and fig. 4, a support frame is arranged above the first feeding mechanism 5; it comprises a four-legged support frame 6 and a top plate 7 fixed at the top of the four-legged support frame 6, and the spider hand robot 1 and the 3D vision sensor 2 are both mounted on the underside of the top plate 7. The second feeding mechanism 4 is perpendicular to the first feeding mechanism 5; the first feeding mechanism 5 conveys the electronic trays 3 to be corrected in the tray box 8, and the second feeding mechanism 4 conveys the corrected electronic trays 3 grabbed by the spider hand robot 1. The spider hand robot 1 is arranged in the center of the top plate 7, with the 3D vision sensor 2 on its left side. The spider hand robot 1, the 3D vision sensor 2, the first feeding mechanism 5 and the second feeding mechanism 4 are all in communication connection with the control system: the 3D vision sensor 2 scans the electronic trays 3 placed on the first feeding mechanism 5, generates point cloud data and transmits it to the control system, and the control system processes the data and issues the corresponding control instructions.
As shown in fig. 1 and fig. 2, in an electronic tray labeling correction method based on 3D vision sensing, the correction method executes the control program of the storage module based on the 3D vision sensing-based electronic tray labeling correction system of claim 1, the control program comprising a main thread and a sub-thread;
the control program executed by the processor in the main thread is as follows: the control system sends an instruction to the first feeding mechanism 5 so that it conveys the tray box 8 into the scanning range of the 3D vision sensor 2, and the point cloud data of the surface of the electronic tray 3 is obtained after scanning;
Extracting boundary point clouds of the electronic material trays 3 from the point cloud data, fitting, calculating center coordinates and diameters of the electronic material trays 3, and comparing the center coordinates and the diameters with data stored in a storage module to identify the electronic material trays 3 with different sizes;
the control program executed by the processor in the sub-thread is as follows: according to the image obtained after the 3D vision sensor 2 scans, the control system controls the left camera of the 3D vision sensor 2 to expose and obtain an image with proper brightness;
Positioning the labeled position according to the image, identifying the labeled mass center through the point cloud data and calculating a three-dimensional coordinate point of the mass center;
Calculating the centroid three-dimensional coordinate point uses the operator `pcl::compute3DCentroid()` from the third-party open-source point cloud processing library PCL, which computes the point cloud centroid as $\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i$ over the $n$ points $p_i$ of the label region;
Determining a correction angle by calculating a vector from the central coordinate of the electronic material tray 3 to the labeled central coordinate, and determining the positive and negative of the correction angle according to the direction of the Y axis of the labeled central position coordinate relative to the Y axis of the central position coordinate of the electronic material tray 3, wherein the positive and negative represent the forward and reverse directions of the correction angle;
the calculation formula of the vector is as follows: given points $A(x_1, y_1, z_1)$ and $B(x_2, y_2, z_2)$, the direction vector is $\overrightarrow{AB} = (x_2-x_1,\; y_2-y_1,\; z_2-z_1)$, and its unit vector $(x_m, y_m, z_m)$ is:
$$(x_m, y_m, z_m) = \frac{(x_2-x_1,\; y_2-y_1,\; z_2-z_1)}{\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}}$$
The included angle $\theta$ between $\overrightarrow{AB}$ and the template vector $\vec{v}$ is calculated as:
$$\theta = \arccos\frac{\overrightarrow{AB}\cdot\vec{v}}{|\overrightarrow{AB}|\,|\vec{v}|}$$
wherein the value range of $\theta$ is $[0, \pi]$;
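The vector and included-angle formulas above translate directly to code. A small sketch follows; the clamp before `acos` is an added numerical safeguard, not part of the patent's formulas.

```python
import math

def unit_vector(a, b):
    # Direction vector AB and its unit vector, per the formulas above.
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    norm = math.sqrt(sum(c * c for c in ab))
    return tuple(c / norm for c in ab)

def angle_between(u, v):
    # theta = arccos(u.v / (|u||v|)); clamped for floating-point safety.
    # The result lies in [0, pi], matching the stated value range.
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
```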
the main thread sends instructions to the spider hand robot 1 in real time over the TCP/IP communication protocol to grab the tray and correct it by the deviation angle calculated by the sub-thread.
As shown in fig. 1, the specific method of the main thread includes: scanning, identifying, classifying and transmitting deviation correcting angles;
Step S1, scanning, namely scanning the electronic material tray 3 through the 3D vision sensor 2 to obtain the point cloud data on the surface of the electronic material tray 3;
S2, identifying, namely extracting boundary data of the electronic material tray 3 through point cloud denoising and point cloud segmentation, fitting, and calculating the center coordinates and the diameter of the electronic material tray 3 after fitting;
In step S2, the method for calculating the center coordinates of the electronic tray 3 uses iteratively reweighted least squares (IRLS), which to a certain extent solves the inaccuracy of an ordinary least-squares fit in the presence of excessive outliers. Two weight functions are used, a Huber weight function and a Tukey weight function.
The Huber weight function is
$$w(e_i) = \begin{cases} 1, & |e_i| \le k \\ k/|e_i|, & |e_i| > k \end{cases} \tag{1}$$
The Tukey weight function is
$$w(e_i) = \begin{cases} \left(1 - (e_i/k)^2\right)^2, & |e_i| \le k \\ 0, & |e_i| > k \end{cases} \tag{2}$$
Fitting a circle based on the IRLS method:
The equation of a circle is
$$(x-a)^2 + (y-b)^2 = c^2 \tag{3}$$
which expands to $x^2 + y^2 + Ax + By + C = 0$, wherein $A = -2a$, $B = -2b$, $C = a^2 + b^2 - c^2$.
Introducing a distance weight $w_i$, the minimized error function $E$ is established as
$$E = \sum_i w_i \left(x_i^2 + y_i^2 + A x_i + B y_i + C\right)^2 \tag{4}$$
Taking the derivatives of $E$ with respect to $A$, $B$, $C$ and setting them to zero yields the linear system in matrix form
$$\begin{pmatrix} \sum w_i x_i^2 & \sum w_i x_i y_i & \sum w_i x_i \\ \sum w_i x_i y_i & \sum w_i y_i^2 & \sum w_i y_i \\ \sum w_i x_i & \sum w_i y_i & \sum w_i \end{pmatrix} \begin{pmatrix} A \\ B \\ C \end{pmatrix} = -\begin{pmatrix} \sum w_i x_i (x_i^2 + y_i^2) \\ \sum w_i y_i (x_i^2 + y_i^2) \\ \sum w_i (x_i^2 + y_i^2) \end{pmatrix} \tag{5}$$
$A$, $B$, $C$ can be solved from the above, and then $a = -A/2$, $b = -B/2$, $c = \sqrt{a^2 + b^2 - C}$.
In the first iteration, a standard least-squares circle fit is used, i.e. $w_i = 1$;
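The IRLS circle fit above can be sketched as follows. The robust scale estimate and the default tuning constant `k` are assumptions; the patent specifies only the weight functions and the linear system.

```python
import numpy as np

def fit_circle_irls(points, iterations=10, k=2.0, weight="huber"):
    """Sketch of the IRLS circle fit derived above (step S2).

    Solves x^2 + y^2 + A*x + B*y + C = 0 by weighted least squares and
    reweights each pass by the residual; w_i = 1 on the first iteration.
    Returns the circle centre (a, b) and radius c.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    w = np.ones(len(pts))
    for _ in range(iterations):
        sw = np.sqrt(w)  # row-scale by sqrt(w_i) for weighted least squares
        M = np.column_stack([x, y, np.ones(len(pts))]) * sw[:, None]
        rhs = -(x**2 + y**2) * sw
        A, B, C = np.linalg.lstsq(M, rhs, rcond=None)[0]
        a, b = -A / 2.0, -B / 2.0
        c = np.sqrt(a * a + b * b - C)
        # Residual: distance of each point from the fitted circle.
        e = np.abs(np.hypot(x - a, y - b) - c)
        s = max(float(np.median(e)) * 1.4826, 1e-12)  # robust scale (assumed)
        u = e / (k * s)
        if weight == "huber":
            w = np.where(u <= 1.0, 1.0, 1.0 / np.maximum(u, 1e-12))
        else:  # Tukey biweight
            w = np.where(u <= 1.0, (1.0 - u**2) ** 2, 0.0)
    return a, b, c
```

Fitting four exact points of the circle centred at (2, 3) with radius 5 recovers those parameters; with outliers present, the down-weighting pulls the fit back toward the inlier boundary points.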
the specific steps of the identification in step S2 are as follows:
Step S21, discrete point cloud values of the boundary of the electronic tray 3 are identified from the point cloud data through point cloud denoising;
Step S22, point clouds outside the discrete boundary values are eliminated through point cloud segmentation;
Step S23, circle fitting is performed on the discrete boundary values to form the point cloud data of the electronic tray 3;
Step S24, the center coordinates and diameter of the electronic tray 3 are calculated from the fitted point cloud data;
Step S3, classification: the calculated center coordinates and diameter of the electronic tray 3 are compared with the data in the storage module to identify the electronic tray 3 of the corresponding size;
the specific method of the sub-thread comprises the following steps: coordinate calculation, vector comparison and deviation correction angle determination;
Step S4, coordinate calculation: after the 3D vision sensor 2 scans and obtains the point cloud, the exposure of its left camera is adjusted automatically to obtain an image of suitable brightness; the approximate position of the existing label is roughly located in the image by AI, then precisely located by image processing, and the points of the label area are calculated; through ordered point cloud mapping, the label's point cloud data range is obtained from the point cloud data, the label centroid is identified, and the three-dimensional coordinate point of the label centroid is obtained;
Step S5, vector comparison: the vector from the center of the electronic tray 3 to be corrected to the center of the existing label is calculated and compared with the vector in the storage module to obtain the included angle, determining the correction angle;
Step S6, determining the correction direction: the Y coordinate of the label center is compared with the Y coordinate of the template label center in the storage module to determine the direction of the correction angle; if subtracting the template Y coordinate from the measured Y coordinate gives a negative value the correction angle is negative, and if positive the correction angle is positive;
step S7, transmitting the deviation rectifying angle, and transmitting the deviation rectifying angle and the deviation rectifying direction calculated by the sub-thread to the spider hand robot 1 by the control system through a TCP/IP communication protocol based on the steps S4, S5 and S6;
In another processing branch of the coordinate calculation in step S4, if no point cloud data or image of the surface of an electronic tray 3 is obtained after the 3D vision sensor 2 scans, the processor module sends an empty-box signal to the control system; the control system signals the first feeding mechanism 5 to convey forward, moving the empty tray box 8 out of the scanning range of the 3D vision sensor 2 and bringing the next tray box 8 full of electronic trays 3 to be corrected into the scanning range, after which the main thread and sub-thread of the control program are repeated to perform labeling correction; the empty tray box 8 is conveyed forward until it leaves the conveyor belt and is collected by a worker.
In this embodiment, the 3D vision sensor 2 may be a commercially available binocular line-scanning laser 3D vision sensor (model AT-S1000-01A-S1), and the spider hand robot 1 may be a commercially available spider hand robot (model BX4-650/800/1100/1300); the control system controls the spider hand robot 1 by sending signals over the TCP/IP communication protocol.
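A minimal sketch of the TCP/IP hand-off to the robot follows. The host, port, and ASCII message format are assumptions for illustration only, since the document does not specify the robot's wire protocol.

```python
import socket

def send_correction(angle_deg, direction, host="192.168.0.10", port=9000):
    """Send the correction angle and direction to the spider hand robot
    as a simple ASCII line over TCP (illustrative message format)."""
    msg = f"CORRECT,{direction},{angle_deg:.3f}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(msg)
```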
In operation, before the system is started, the dimension information of the different electronic trays 3 to be corrected is entered into the storage module of the control system. After start-up, the 3D vision sensor 2 begins scanning; point cloud data are generated from the electronic trays 3 to be corrected in the tray box 8 placed on the first feeding mechanism 5 and are transmitted to the control system, where they are matched against the storage module to identify the size of the electronic tray 3 to be corrected. Meanwhile, the control system identifies, from the scanned image, the center position coordinate of the label on the electronic tray 3 to be corrected and the center coordinate of the tray itself, calculates the vector between the two coordinates, and compares it with the stored coordinate vector of the electronic tray 3 of the corresponding size to obtain the correction angle. The Y coordinate of the label center on the electronic tray 3 is compared with the reference label-center Y coordinate set in the storage module to determine the direction of the correction angle. Finally, once the correction angle and direction are determined, the control system sends them to the spider hand robot 1 over the TCP/IP communication protocol, and the spider hand robot 1 grabs the electronic tray 3 to be corrected from the tray box 8, applies the correction, and places it on the second feeding mechanism 4;
If all electronic trays 3 to be corrected in the tray box 8 have been grabbed, the 3D vision sensor 2 identifies the box as empty and transmits a no-point-cloud signal to the control system. The control system then signals the first feeding mechanism 5 to advance, conveying the empty tray box 8 out of the scanning range of the 3D vision sensor 2 and conveying the next tray box 8 full of electronic trays 3 to be corrected into the scanning range. The empty tray box 8 continues forward until it leaves the conveyor belt and is collected by a worker.
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements and changes may be made without departing from the spirit and principles of the present invention.
Claims (8)
1. An electronic tray labeling correction system based on 3D vision sensing, characterized by comprising:
a control system, a first feeding mechanism (5), a 3D vision sensor (2), a spider hand robot (1), a second feeding mechanism (4) and labeling equipment, wherein the first feeding mechanism (5), the 3D vision sensor (2), the spider hand robot (1) and the second feeding mechanism (4) are in communication connection with the control system;
The first feeding mechanism (5) conveys a tray box (8) filled with the electronic tray (3) to a preset position;
The 3D vision sensor (2) is arranged above a fixed position of the first feeding mechanism (5) and is used for shooting the tray box (8) and the electronic tray (3) in the tray box (8) to form images and acquiring point cloud data of the surface of the electronic tray (3) in the images of the electronic tray (3);
The control system acquires the image and the point cloud data uploaded by the 3D vision sensor (2) and controls the operation of the first feeding mechanism (5), the spider hand robot (1), the second feeding mechanism (4) and the labeling equipment;
The spider hand robot (1) is arranged above the first feeding mechanism (5) at a position corresponding to the 3D vision sensor (2), and the spider hand robot (1) grabs the electronic tray (3) onto the second feeding mechanism (4) according to a control instruction sent by the control system and places the electronic tray (3) at a preset position in a specified placing mode;
The second feeding mechanism (4) is arranged on one side of the first feeding mechanism (5) at a position corresponding to the 3D vision sensor (2), and the second feeding mechanism (4) conveys the electronic tray (3) after the spider hand robot (1) has grabbed it from the tray box (8);
the labeling equipment is arranged at one side of the second feeding mechanism (4) and labels the electronic material tray (3) according to a control instruction sent by the control system;
The control system comprises a processor module and a storage module, wherein the storage module is used for storing data of the electronic trays (3) with various sizes, data of correct labeling positions of the electronic trays (3) with various sizes and a control program, and the processor module is used for executing the control program so as to rectify the deviation of the electronic trays (3).
2. The electronic tray labeling deviation correcting system based on 3D vision sensing according to claim 1, wherein a supporting frame is arranged above the first feeding mechanism (5), the supporting frame comprises a four-foot supporting frame (6) and a top plate (7) fixed at the top of the four-foot supporting frame (6), and the spider hand robot (1) and the 3D vision sensor (2) are both arranged on the lower side surface of the top plate (7).
3. The electronic tray labeling deviation correcting system based on 3D visual sensing according to claim 2, wherein the second feeding mechanism (4) is perpendicular to the first feeding mechanism (5).
4. The electronic tray labeling deviation correcting system based on 3D vision sensing according to claim 3, wherein the first feeding mechanism (5) conveys the electronic tray (3) to be corrected placed in the tray box (8), and the second feeding mechanism (4) conveys the electronic tray (3) which has been gripped and corrected by the spider hand robot (1).
5. An electronic tray labeling deviation correcting method based on 3D vision sensing, which executes the control program of the storage module based on the electronic tray labeling correction system based on 3D vision sensing of claim 1, characterized in that the control program comprises a main thread and a sub-thread;
the control program executed by the processor in the main thread is as follows: the control system sends an instruction to the first feeding mechanism (5), so that the first feeding mechanism (5) conveys the tray box (8) with the electronic trays (3) placed therein into the scanning range of the 3D vision sensor (2), and the point cloud data of the surface of the electronic tray (3) are obtained after scanning;
extracting boundary point clouds of the electronic material trays (3) from the point cloud data, fitting, calculating center coordinates and diameters of the electronic material trays (3), and comparing the center coordinates and diameters with data stored in a storage module to identify the electronic material trays (3) with different sizes;
The control program executed by the processor in the sub-thread is as follows: based on the image obtained after scanning by the 3D vision sensor (2), the control system adjusts the exposure of the left camera of the 3D vision sensor (2) to obtain an image with suitable brightness;
locating the label position from the image, identifying the label centroid from the point cloud data and calculating the three-dimensional coordinate point of the centroid;
determining the correction angle by calculating the vector from the center coordinate of the electronic tray (3) to the label center coordinate, and determining the sign of the correction angle from the position of the label center's Y coordinate relative to the tray center's Y coordinate, the sign representing the forward or reverse direction of the correction;
The main thread sends instructions to the spider hand robot (1) in real time over the TCP/IP communication protocol so that it grabs the tray and applies the correction angle calculated by the sub-thread.
6. The electronic tray labeling deviation correcting method based on 3D vision sensing of claim 5, wherein,
The specific method of the main thread comprises the following steps: scanning, identification, classification and transmission of the correction angle;
S1, scanning: the electronic tray (3) is scanned by the 3D vision sensor (2) to obtain the surface point cloud data of the electronic tray (3);
S2, identification: the boundary data of the electronic tray (3) are extracted through point cloud denoising and point cloud segmentation and fitted, and the center coordinates and diameter of the electronic tray (3) are calculated after fitting;
the specific steps of the identification in step S2 are as follows:
S21, identifying the discrete point cloud values of the boundary of the electronic tray (3) from the point cloud data through point cloud denoising;
S22, eliminating the point clouds outside the discrete point cloud values through point cloud segmentation;
S23, performing circle fitting on the discrete point cloud values of the boundary to form the point cloud data of the electronic tray (3);
S24, calculating the center coordinates and diameter of the electronic tray (3) from the fitted point cloud data of the electronic tray (3);
S3, classification: the calculated center coordinates and diameter of the electronic tray (3) are compared with the data in the storage module to identify the electronic tray (3) of the corresponding size.
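Steps S21 to S24 and S3 can be sketched with a standard algebraic least-squares circle fit (the Kasa method) on the denoised boundary points. NumPy is assumed, and the size table, tolerance, and function names are illustrative, not taken from the patent.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit for steps S23-S24.

    points: (N, 2) array of boundary point cloud XY values.
    Returns (cx, cy, diameter).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, 2.0 * r

def classify_tray(diameter, known_sizes, tol=2.0):
    """Step S3: match the fitted diameter against stored tray sizes."""
    for name, d in known_sizes.items():
        if abs(diameter - d) <= tol:
            return name
    return None  # no stored tray size matches
```

The Kasa fit is linear and therefore fast and deterministic, which suits an inline production step; a robust variant (e.g. RANSAC around this fit) would be the usual guard against residual outliers after denoising.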
7. The electronic tray labeling deviation correcting method based on 3D vision sensing of claim 5, wherein,
The specific method of the sub-thread comprises the following steps: coordinate calculation, vector comparison and deviation correction angle determination;
S4, coordinate calculation: after the 3D vision sensor (2) scans and obtains the point cloud, the exposure of the left camera of the 3D vision sensor (2) is automatically adjusted to obtain an image with suitable brightness; the approximate label position is roughly located on the image by AI and then precisely located by image processing; the points of the label area are calculated, the point cloud data range of the label is obtained from the point cloud data through ordered point cloud mapping, the label centroid is identified, and the three-dimensional coordinate point of the label centroid is obtained;
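Because step S4 relies on ordered point cloud mapping (one 3D point per image pixel), the centroid step can be sketched as: index the ordered cloud with the label's pixel mask and average the valid points. NumPy is assumed and the function name is invented for this sketch; the mask itself would come from the AI rough localization plus image processing described above.

```python
import numpy as np

def label_centroid_3d(ordered_cloud, label_mask):
    """Step S4 centroid: map label pixels to 3D points and average them.

    ordered_cloud: (H, W, 3) array, one XYZ point per pixel (NaN = no return).
    label_mask: (H, W) boolean mask of the label region from image processing.
    Returns the 3D centroid, or None when no valid points exist (empty box).
    """
    pts = ordered_cloud[label_mask]        # (K, 3) points inside the label
    pts = pts[~np.isnan(pts).any(axis=1)]  # drop pixels with no 3D return
    if len(pts) == 0:
        return None
    return pts.mean(axis=0)
```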
S5, vector comparison: a vector from the center of the electronic tray (3) to be corrected to the label center is calculated and compared with the reference vector in the storage module to obtain the included angle, which gives the magnitude of the correction angle;
S6, determining the correction direction: the Y coordinate of the applied label's center position is compared with the Y coordinate of the reference label center in the storage module; if the difference between the two Y coordinates is negative, the correction angle is negative, and if the difference is positive, the correction angle is positive;
and step S7, transmitting the correction angle: based on steps S4, S5 and S6, the control system transmits the correction angle and direction calculated by the sub-thread to the spider hand robot (1) over the TCP/IP communication protocol.
8. The 3D vision sensing-based electronic tray labeling correction method as claimed in claim 7, wherein,
In the other processing branch of the coordinate calculation in step S4, if no point cloud data or image can be acquired from the surface of an electronic tray (3) after scanning by the 3D vision sensor (2), the processor module sends an empty-box signal to the control system, and the control system sends a forward-transmission signal to the first feeding mechanism (5); the first feeding mechanism (5) conveys the empty tray box (8) out of the scanning range of the 3D vision sensor (2) and conveys the next tray box (8) full of electronic trays (3) to be corrected into the scanning range of the 3D vision sensor (2), after which the main thread and sub-thread of the control program are repeated to carry out label correction; the empty tray box (8) continues to be conveyed forward until it leaves the conveyor belt and is collected by a worker.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211637781.6A CN115872018B (en) | 2022-12-16 | 2022-12-16 | Electronic tray labeling correction system and method based on 3D visual sensing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115872018A CN115872018A (en) | 2023-03-31 |
CN115872018B true CN115872018B (en) | 2024-06-25 |
Family
ID=85755233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211637781.6A Active CN115872018B (en) | 2022-12-16 | 2022-12-16 | Electronic tray labeling correction system and method based on 3D visual sensing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115872018B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835173A (en) * | 2015-05-21 | 2015-08-12 | 东南大学 | Positioning method based on machine vision |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2849760B2 (en) * | 1989-12-26 | 1999-01-27 | 三菱レイヨン株式会社 | Label displacement inspection device |
DE202017101768U1 (en) * | 2017-03-28 | 2018-06-29 | Krones Ag | Rotary machine for handling and in particular labeling of containers |
CN207712454U (en) * | 2017-12-13 | 2018-08-10 | 深圳易航联创科技有限公司 | Labelling machine |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835173A (en) * | 2015-05-21 | 2015-08-12 | 东南大学 | Positioning method based on machine vision |
Non-Patent Citations (1)
Title |
---|
Design of a real-time round steel labeling system based on machine vision; Yan Yan; Liu Jianpei; Wang Huiqing; Xue Ze; Internal Combustion Engine & Parts; 2018-07-25 (No. 14); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110497187B (en) | Sun flower pattern assembly system based on visual guidance | |
CN108827154B (en) | Robot non-teaching grabbing method and device and computer readable storage medium | |
CN109483554B (en) | Robot dynamic grabbing method and system based on global and local visual semantics | |
CN109230580B (en) | Unstacking robot system and unstacking robot method based on mixed material information acquisition | |
CN110580725A (en) | Box sorting method and system based on RGB-D camera | |
CN108717715B (en) | Automatic calibration method for linear structured light vision system of arc welding robot | |
CN110125926B (en) | Automatic workpiece picking and placing method and system | |
US10290118B2 (en) | System and method for tying together machine vision coordinate spaces in a guided assembly environment | |
US8295588B2 (en) | Three-dimensional vision sensor | |
US11972589B2 (en) | Image processing device, work robot, substrate inspection device, and specimen inspection device | |
CN107192331A (en) | A kind of workpiece grabbing method based on binocular vision | |
CN108480239B (en) | Workpiece quick sorting method and device based on stereoscopic vision | |
CN110666805A (en) | Industrial robot sorting method based on active vision | |
CN114758236B (en) | Non-specific shape object identification, positioning and manipulator grabbing system and method | |
CN108748149B (en) | Non-calibration mechanical arm grabbing method based on deep learning in complex environment | |
CN108177150A (en) | Door of elevator positioning and grabbing device and the method for view-based access control model | |
CN113508012A (en) | Vision system for a robotic machine | |
CN113689509A (en) | Binocular vision-based disordered grabbing method and system and storage medium | |
CN117021084A (en) | Workpiece grabbing method, device, system, electronic equipment and storage medium | |
CN115872018B (en) | Electronic tray labeling correction system and method based on 3D visual sensing | |
CN109895086A (en) | A kind of door of elevator snatch device and method of machine vision | |
CN116276938B (en) | Mechanical arm positioning error compensation method and device based on multi-zero visual guidance | |
CN110533717A (en) | A kind of target grasping means and device based on binocular vision | |
CN111563935B (en) | Visual positioning method for honeycomb holes of honeycomb sectional material | |
CN114918723A (en) | Workpiece positioning control system and method based on surface detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||