CN113483664A - Screen plate automatic feeding system and method based on line structured light vision - Google Patents


Info

Publication number
CN113483664A
CN113483664A (application CN202110817326.3A)
Authority
CN
China
Prior art keywords
point cloud
screen
structured light
light vision
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110817326.3A
Other languages
Chinese (zh)
Other versions
CN113483664B (en)
Inventor
王志远
邰凤阳
康庆
朱远鹏
王化明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Kepai Fali Intelligent System Co.,Ltd.
Original Assignee
Cubespace Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cubespace Co ltd filed Critical Cubespace Co ltd
Priority to CN202110817326.3A
Publication of CN113483664A
Application granted
Publication of CN113483664B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B65 CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 35/00 Mechanical conveyors not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B65 CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 47/00 Article or material-handling devices associated with conveyors; Methods employing such devices
    • B65G 47/74 Feeding, transfer, or discharging devices of particular kinds or types
    • B65G 47/90 Devices for picking-up and depositing articles or materials
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a screen panel automatic feeding system and method based on line structured light vision, the method comprising the following steps: calibrating the system; when the AGV transports the screen panel into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer; the upper computer processes the screen panel data scanned by the line structured light vision module and sends the processing result to the robot; the robot grasps and feeds the screen panel according to the processing result of the upper computer. The application realizes automatic feeding, greatly reducing the labor intensity of workers while improving production efficiency and ensuring production quality.

Description

Screen plate automatic feeding system and method based on line structured light vision
Technical Field
The invention belongs to the field of screen manufacturing, and particularly relates to an automatic screen plate feeding system and method based on line structured light vision.
Background
The screen has long been an important component of traditional Chinese furniture. Screens are generally placed at prominent positions in a room and serve functions such as partitioning, decoration, wind shielding, and coordination. At present, screens are mostly manufactured by manual operation, which requires a great deal of time and labor, and the operation precision cannot be effectively guaranteed. Lacking the assistance of sensors, traditional machines can only act according to a pre-programmed sequence; material positions must be set in advance, and tasks involving randomness, such as grasping randomly placed objects, cannot be completed.
Disclosure of Invention
The embodiments of the present application provide a screen panel automatic feeding system and method based on line structured light vision, which can perform automatic feeding, greatly reduce the labor intensity of workers, improve production efficiency, and ensure production quality.
In a first aspect, an embodiment of the present application provides a screen panel automatic feeding system based on line structured light vision, including:
the system comprises a line structure light vision module, a rotary platform, a PLC (programmable logic controller), an upper computer, a robot and an AGV;
the AGV is used for transporting the screen plate to the scanning area of the linear structured light vision module;
the PLC is used for controlling the rotation of the rotating platform;
the line structured light vision module is fixed on the rotary platform and used for scanning the screen board and sending the scanned data to the upper computer;
the upper computer is used for processing the screen panel data scanned by the line structured light vision module and sending the processed result to the robot;
the robot is used for grabbing and feeding the screen plate according to the processing result of the upper computer.
The line structured light vision module comprises a CCD industrial camera, a line red laser, and an optical filter; the CCD industrial camera and the line red laser form a preset included angle, and the optical filter is installed in front of the lens of the CCD industrial camera.
In a second aspect, the present application provides a screen panel automatic feeding method based on line structured light vision, which uses the above screen panel automatic feeding system and includes:
calibrating the system;
when the AGV transports the screen panel into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer;
the upper computer processes the screen panel data scanned by the line structured light vision module and sends the processed result to the robot;
and the robot grabs and feeds the screen plate according to the processing result of the upper computer.
Processing, by the upper computer, the screen panel data scanned by the line structured light vision module includes:
the upper computer processes the screen plate data scanned by the line structured light vision module to obtain the pose of the screen plate under a camera coordinate system;
and converting the pose of the screen plate under the camera coordinate system into the pose under the robot coordinate system.
Processing the screen panel data scanned by the line structured light vision module to obtain the pose of the screen panel in the camera coordinate system includes:
assuming that (a, b, c) is a point on the rotation axis of the rotary platform, (u, v, w) is the unit direction vector of the rotation axis, and θ is the rotation angle, the coordinate transformation matrix for stitching single-frame line-scan point clouds in the camera coordinate system is:

$$
T=\begin{bmatrix}
u^{2}+(v^{2}+w^{2})\cos\theta & uv(1-\cos\theta)-w\sin\theta & uw(1-\cos\theta)+v\sin\theta & \bigl(a(v^{2}+w^{2})-u(bv+cw)\bigr)(1-\cos\theta)+(bw-cv)\sin\theta \\
uv(1-\cos\theta)+w\sin\theta & v^{2}+(u^{2}+w^{2})\cos\theta & vw(1-\cos\theta)-u\sin\theta & \bigl(b(u^{2}+w^{2})-v(au+cw)\bigr)(1-\cos\theta)+(cu-aw)\sin\theta \\
uw(1-\cos\theta)-v\sin\theta & vw(1-\cos\theta)+u\sin\theta & w^{2}+(u^{2}+v^{2})\cos\theta & \bigl(c(u^{2}+v^{2})-w(au+bv)\bigr)(1-\cos\theta)+(av-bu)\sin\theta \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
performing point cloud down-sampling with an improved voxel filtering algorithm: a three-dimensional voxel grid is created for the input point cloud data, and all points within each voxel are represented by the point of the original point cloud closest to the voxel centroid;
carrying out primary plane model segmentation on the point cloud data through a sampling consistency algorithm, and then continuously segmenting through a region growing method to remove vertical cluster points and noise points to finally obtain a skeleton of the point cloud of the screen plate;
and carrying out registration of the screen plate skeleton point cloud.
Registration of the screen panel skeleton point cloud comprises:
extracting the boundary of the screen panel skeleton point cloud by a latitude-longitude scanning method, fitting straight lines to the four edges of the skeleton with the RANSAC algorithm, and then computing the spatial coordinate values of the four corner points of the skeleton;
performing coarse registration based on a Euclidean distance constraint: the extracted corner points are coarsely registered to the model corner points with the ICP (Iterative Closest Point) algorithm. Let $p_1$ and $p_2$ be a corresponding point pair on the coarse registration result and the target point cloud, let $c_1$ and $c_2$ be the geometric centers of the coarsely registered point cloud and the target point cloud, respectively, and let δ be a distance constraint threshold; if the constraint

$$\bigl|\,\|p_1-c_1\|-\|p_2-c_2\|\,\bigr|\le\delta$$

is satisfied, $p_1$ and $p_2$ are considered a valid match; otherwise the match is considered unsatisfactory and the corresponding point pair is removed;
and performing fine registration with introduced weight coefficients and iteration factors.
Extracting the boundary of the screen panel skeleton point cloud by the latitude-longitude scanning method, fitting straight lines to the four edges of the skeleton with the RANSAC algorithm, and computing the spatial coordinate values of the four corner points comprises the following steps:
finding the maximum x_max and minimum x_min of the x coordinates of the point cloud data; given a resolution r, computing the scan step Δx = (x_max - x_min)/r; scanning the point cloud and, for each interval [x_min + (i-1)Δx, x_min + iΔx) (i = 1, 2, ..., r), recording the points at which the y coordinate attains its minimum and maximum values; similarly scanning the point cloud again along the y direction, the results of the two scans together forming the point cloud boundary;
giving a distance threshold d and fitting straight lines to the four edges of the screen panel skeleton with the RANSAC algorithm;
and computing the spatial coordinates of the four vertices of the screen panel from the equations of the lines on which the four edges of the skeleton lie.
Fine registration with introduced weight coefficients and iteration factors comprises the following steps:
S3.4.3.1: given the original point cloud P and the target point cloud Q, initialize the transformation matrix $H_0=H^{*}$, where $H^{*}$ is the coarse registration result, the weight coefficient α > 1, the dynamic iteration factor m = 0, and the iteration count k = 0;
S3.4.3.2: update the original point cloud P by the pose matrix increment $\Delta H_k$;
S3.4.3.3: for each point of the original point cloud P, search for its closest point in the target point cloud Q, and reorder the target point set accordingly;
S3.4.3.4: solve for the pose matrix increment $\Delta H_{k+1}$ through

$$\Delta H_{k+1}=\arg\min_{H}\;\frac{1}{n_p+\alpha n'_p}\left(\sum_{p_i\notin \mathrm{ROI}}\|Hp_i-q_i\|^{2}+\alpha\sum_{p_i\in \mathrm{ROI}}\|Hp_i-q_i\|^{2}\right)$$

where $p_i$, $q_i$ are points on the point clouds P and Q, and $n_p$, $n'_p$ are the numbers of points of P in the non-region-of-interest and the region-of-interest, respectively, with weight coefficient α > 1;
S3.4.3.5: if the root-mean-square distance error err increases, set m = m + 1; otherwise set m = 0;
S3.4.3.6: if m > 0, apply $H_{k+1}=\Delta H_{k+1}\cdot H_k$ m times to solve the pose transformation matrix;
S3.4.3.7: repeat steps S3.4.3.2-S3.4.3.6 until the root-mean-square distance error err is less than a given value or the iteration count k reaches its maximum.
Converting the pose of the screen panel from the camera coordinate system to the robot coordinate system involves the following quantities, where ${}^{x}T_{y}$ denotes the pose of frame y expressed in frame x:

the pose matrix ${}^{b}T_{e}$ of the robot end effector in the robot base coordinate system in the grasping state;

the pose transformation ${}^{e}T_{g}$ between the robot end flange and the gripper, obtained by tool coordinate system calibration;

the pose transformation ${}^{g}T_{o}$ between the target part and the gripper in the grasping posture, defined according to the dimensions of the part and the gripper;

the pose matrix ${}^{c}T_{o}$ of the grasped screen panel in the camera coordinate system, obtained by point cloud registration;

the pose transformation ${}^{b}T_{c}$ between the camera coordinate system and the robot base coordinate system, obtained by hand-eye calibration.

The pose matrix of the robot end effector in the robot base coordinate system in the grasping state is then computed as:

$${}^{b}T_{e}={}^{b}T_{c}\,{}^{c}T_{o}\,\bigl({}^{g}T_{o}\bigr)^{-1}\,\bigl({}^{e}T_{g}\bigr)^{-1}$$
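Since the original equation images are unavailable, the composition above follows the standard frame-chaining convention. A minimal numpy sketch of this composition (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def grasp_pose_in_base(T_b_c, T_c_o, T_g_o, T_e_g):
    """Compose the end-effector grasp pose in the robot base frame.

    Notation: T_x_y is the 4x4 homogeneous pose of frame y expressed in
    frame x (b = base, c = camera, o = object, g = gripper, e = end link).

    T_b_c : camera in base frame      (hand-eye calibration)
    T_c_o : object in camera frame    (point cloud registration)
    T_g_o : object in gripper frame   (defined grasp pose)
    T_e_g : gripper in end-link frame (tool calibration)

    From T_b_o = T_b_c @ T_c_o and T_b_o = T_b_e @ T_e_g @ T_g_o it follows:
        T_b_e = T_b_c @ T_c_o @ inv(T_g_o) @ inv(T_e_g)
    """
    return T_b_c @ T_c_o @ np.linalg.inv(T_g_o) @ np.linalg.inv(T_e_g)
```

With all calibration matrices set to the identity, the end-effector pose reduces to the object pose reported by the camera, which is a quick sanity check for the chain.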
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program is used for implementing the steps of any one of the above methods when executed by a processor.
The screen plate automatic feeding system and method based on line structured light vision have the following beneficial effects:
The screen panel automatic feeding method based on line structured light vision of the present application comprises: calibrating the system; when the AGV transports the screen panel into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer; the upper computer processes the screen panel data scanned by the line structured light vision module and sends the processing result to the robot; the robot grasps and feeds the screen panel according to the processing result of the upper computer. The application realizes automatic feeding, greatly reducing the labor intensity of workers while improving production efficiency and ensuring production quality.
Drawings
Fig. 1 is a schematic structural diagram of a screen panel automatic feeding system based on line structured light vision according to the present application;
FIG. 2 is a schematic structural diagram of another screen panel automatic feeding system based on line structured light vision according to the present application;
fig. 3 is a schematic flow chart of a screen plate automatic feeding method based on line structured light vision in the embodiment of the present application;
FIG. 4 is a flow chart of the visual positioning software of the present application;
FIG. 5 is a first flowchart of a point cloud registration algorithm in the present application;
FIG. 6 is a second flowchart of a point cloud registration algorithm in the present application;
FIG. 7 is a screen plate skeleton result diagram obtained by point cloud segmentation in the present application;
FIG. 8.1 is a first diagram of the result of point cloud registration in the present application;
fig. 8.2 is a second diagram illustrating the result of point cloud registration in the present application.
Detailed Description
The present application is further described with reference to the following figures and examples.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. The following description provides embodiments of the invention, which may be combined with or substituted for one another, and this application is therefore intended to cover all possible combinations of the embodiments described. Thus, if one embodiment includes features A, B, and C and another embodiment includes features B and D, this application should also be considered to include an embodiment containing any other possible combination of A, B, C, and D, even though that embodiment may not be explicitly recited in the text below.
The following description provides examples, and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in an order different than the order described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
The screen has long been an important component of traditional Chinese furniture. Screens are generally placed at prominent positions in a room and serve functions such as partitioning, decoration, wind shielding, and coordination. At present, screens are mostly manufactured by manual operation, which requires a great deal of time and labor, and the operation precision cannot be effectively guaranteed. Lacking the assistance of sensors, traditional machines can only act according to a pre-programmed sequence; material positions must be set in advance, and tasks involving randomness, such as grasping randomly placed objects, cannot be completed. Therefore, there is an urgent need to develop equipment capable of automatic loading and unloading, so as to greatly reduce the labor intensity of workers while improving production efficiency and ensuring production quality.
The invention relates to a screen panel automatic feeding system and method based on line structured light vision. The system consists of a line structured light vision module, a rotary platform, a PLC controller, an upper computer, a robot, and an AGV. The method comprises the following steps: calibration of the measuring system (including camera calibration, structured light plane calibration, rotary platform axis calibration, and robot hand-eye calibration), structured light center line extraction, three-dimensional point cloud generation, point cloud data processing, and pose estimation. The invention selects a high-resolution industrial CCD camera and drives the line structured light vision measuring system with the rotary platform, enabling large-range scanning and high-precision non-contact measurement. After the three-dimensional point cloud of the scanned object (the screen panel) is obtained, the pose of the screen panel in the camera coordinate system is obtained through down-sampling, segmentation, registration, and related processing, and the pose in the robot coordinate system is then obtained through calibration, so that grasping and feeding can be performed. The invention can be applied to a production line: by scanning the screen panel to generate a point cloud and judging its pose with machine vision technology, automatic loading and unloading can be carried out, which greatly reduces the labor intensity of workers while improving production efficiency and ensuring production quality.
As shown in fig. 1-2, the screen panel automatic feeding system based on line structured light vision of the present application includes: line structured light vision module 12, rotary platform 11, PLC (Programmable Logic Controller) Controller 10, upper computer 13, robot 14, and AGV 15.
An AGV (Automated Guided Vehicle) is a transport vehicle equipped with an electromagnetic or optical automatic navigation device, capable of traveling along a prescribed navigation route, with safety protection and various transfer functions. In industrial applications it is a driverless transport vehicle that uses a rechargeable battery as its power source. Its traveling path and behavior are generally controlled by a computer, or its traveling path is established by electromagnetic tracks fixed on the floor, which the unmanned transport vehicle follows.
The AGV15 is used to transport the screen panels 16 to the line structured light vision module 12 scanning area; the PLC 10 is used for controlling the rotation of the rotary platform 11; the line structured light vision module 12 is fixed on the rotary platform 11 and used for scanning the screen board and sending the scanned data to the upper computer 13; the upper computer 13 is used for processing the screen panel data scanned by the linear structured light vision module 12 and sending the processed result to the robot 14; and the robot 14 is used for grabbing and feeding the screen plate according to the processing result of the upper computer 13.
The line structured light vision module 12 includes a CCD (charge-coupled device) industrial camera, a line red laser, and an optical filter. The CCD industrial camera and the line red laser form a fixed included angle, and their relative position is fixed. The filter is a narrow-band red filter installed in front of the lens of the CCD industrial camera. The upper computer 13 is, for example, a computer. As shown in FIG. 2, the vision sensor 121 scans the screen panels being transported on the AGV 15.
In some embodiments, the visual positioning software runs in the upper computer, and comprises a system calibration module, an image processing module, a point cloud processing module, a PLC control module and a robot control module. The system calibration module comprises camera calibration, structured light plane calibration, rotary platform axis calibration and robot eye calibration. The image processing module mainly separates the linear structure light stripe from the background and extracts the central line of the linear structure light stripe. The point cloud processing module is used for splicing, down-sampling, segmenting and registering the point cloud data of the screen plate generated by scanning.
The screen panel automatic feeding system based on line structured light vision of the present application can perform automatic feeding, which greatly reduces the labor intensity of workers while improving production efficiency and ensuring production quality.
As shown in figs. 3-8.2, the present application provides a screen panel automatic feeding method based on line structured light vision, using the system described above, comprising: S101, calibrating the system; S103, when the AGV transports the screen panel into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer; S105, the upper computer processes the screen panel data scanned by the line structured light vision module and sends the processing result to the robot; S107, the robot grasps and feeds the screen panel according to the processing result of the upper computer. These steps are described in detail below.
S101, calibrating the system (this step may be skipped if the system has already been calibrated).
The calibration comprises the following steps: camera calibration, in which the camera is calibrated by Zhang's calibration method to obtain the intrinsic and extrinsic parameters of the camera;
structured light plane calibration, in which the structured light plane is calibrated by the direct method and the equation of the structured light plane is obtained by least-squares fitting;
rotary platform axis calibration, in which extrinsic parameters obtained from camera calibration and spatial circle fitting yield the coordinates of several points on the rotation axis of the rotary platform, after which the equation of the rotation axis is fitted by least squares;
robot hand-eye calibration, in which eye-to-hand calibration is realized by the Tsai-Lenz algorithm;
and tool coordinate system calibration, in which the position of the tool coordinate system is determined by TCP calibration and its orientation by TCF calibration.
S103, when the AGV transports the screen panel into the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer.
When the AGV transports the screen panel beneath the line structured light scanning system, the scanning system is started, and the PLC controls the rotary platform to rotate, completing a full scan of the screen panel.
And S105, processing the screen panel data scanned by the linear structured light vision module by the upper computer, and sending the processed result to the robot.
Processing, by the upper computer, the screen panel data scanned by the line structured light vision module includes: S1051, the upper computer processes the screen panel data scanned by the line structured light vision module to obtain the pose of the screen panel in the camera coordinate system; and S1052, the pose of the screen panel in the camera coordinate system is converted into the pose in the robot coordinate system.
And S1051, processing the screen plate data scanned by the line structured light vision module by the upper computer to obtain the pose of the screen plate in the camera coordinate system.
In the step, after the vision module finishes scanning, a point cloud processing and pose estimation module of software is operated to obtain the pose of the screen plate in a camera coordinate system.
Assuming that (a, b, c) is a point on the rotation axis of the rotary platform, (u, v, w) is the unit direction vector of the rotation axis, and θ is the rotation angle, the coordinate transformation matrix for stitching single-frame line-scan point clouds in the camera coordinate system is:

$$
T=\begin{bmatrix}
u^{2}+(v^{2}+w^{2})\cos\theta & uv(1-\cos\theta)-w\sin\theta & uw(1-\cos\theta)+v\sin\theta & \bigl(a(v^{2}+w^{2})-u(bv+cw)\bigr)(1-\cos\theta)+(bw-cv)\sin\theta \\
uv(1-\cos\theta)+w\sin\theta & v^{2}+(u^{2}+w^{2})\cos\theta & vw(1-\cos\theta)-u\sin\theta & \bigl(b(u^{2}+w^{2})-v(au+cw)\bigr)(1-\cos\theta)+(cu-aw)\sin\theta \\
uw(1-\cos\theta)-v\sin\theta & vw(1-\cos\theta)+u\sin\theta & w^{2}+(u^{2}+v^{2})\cos\theta & \bigl(c(u^{2}+v^{2})-w(au+bv)\bigr)(1-\cos\theta)+(av-bu)\sin\theta \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
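The transformation about an arbitrary axis can equivalently be built by translating the axis point to the origin, rotating with the Rodrigues formula, and translating back. A minimal numpy sketch (function name is illustrative):

```python
import numpy as np

def axis_rotation_matrix(a, b, c, u, v, w, theta):
    """4x4 homogeneous transform rotating by theta about the axis that
    passes through point (a, b, c) with direction (u, v, w).
    Built as: Rodrigues rotation R plus translation t = p - R @ p,
    where p is the point on the axis."""
    d = np.array([u, v, w], dtype=float)
    d /= np.linalg.norm(d)                       # ensure unit direction
    ux, uy, uz = d
    K = np.array([[0, -uz, uy],                  # cross-product matrix of the axis
                  [uz, 0, -ux],
                  [-uy, ux, 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    p = np.array([a, b, c], dtype=float)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p - R @ p                         # translation component
    return T
```

A point lying on the rotation axis is left fixed by the transform, which is a convenient check of the translation component.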
point cloud down-sampling using an improved voxel filtering algorithm. Voxel filtering is performed by creating a three-dimensional voxel grid of the input point cloud data, with the center of gravity of all points in each voxel approximately representing all points within the voxel. Since the point is not necessarily a point in the original point cloud, the loss of fine features in the original point cloud is caused. Therefore, the point closest to the voxel gravity center point in the original point cloud data can be used for replacing the voxel gravity center point, so that the expression accuracy of the point cloud data is improved. And performing point cloud down-sampling by using an improved voxel filtering algorithm, creating a three-dimensional voxel grid for the input point cloud data, and representing all points in the voxel by using the point closest to the center of gravity of the voxel in the original point cloud data.
Point cloud segmentation: primary plane-model segmentation is performed on the point cloud data with a sample-consensus algorithm, followed by further segmentation with a region-growing method to remove vertical cluster points and noise points, finally yielding the skeleton of the screen panel point cloud.
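The sample-consensus plane segmentation step can be illustrated with a basic RANSAC plane fit (a sketch under the usual RANSAC assumptions, not the exact implementation of the patent; in practice a library such as PCL provides this):

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, iters=200, rng=None):
    """Sample-consensus plane fit: repeatedly fit a plane to 3 random
    points and keep the plane with the most inliers.
    Returns (normal, d, inlier_mask) for the model n . x + d = 0."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.abs(pts @ n + d) < dist_thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```

Removing the inliers of the dominant plane and clustering what remains mirrors the plane-segmentation-then-region-growing pipeline described above.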
Point cloud registration: the screen panel skeleton point cloud is registered with an improved ICP (Iterative Closest Point) algorithm.
The step of point cloud registration comprises:
extracting point cloud features: the boundary of the screen panel skeleton point cloud is extracted by a latitude-longitude scanning method, straight lines are fitted to the four edges of the skeleton with the RANSAC algorithm, and the spatial coordinate values of the four corner points of the skeleton are computed;
performing coarse registration based on a Euclidean distance constraint: the extracted corner points are coarsely registered to the model corner points with the ICP algorithm. Let $p_1$ and $p_2$ be a corresponding point pair on the coarse registration result and the target point cloud, let $c_1$ and $c_2$ be the geometric centers of the coarsely registered point cloud and the target point cloud, respectively, and let δ be a distance constraint threshold; if the constraint

$$\bigl|\,\|p_1-c_1\|-\|p_2-c_2\|\,\bigr|\le\delta$$

is satisfied, $p_1$ and $p_2$ are considered a valid match; otherwise the match is considered unsatisfactory and the corresponding point pair is removed;
performing point cloud fine registration: fine registration with a weight coefficient and an iteration factor is introduced, and the pose obtained by coarse registration is finely adjusted so that the registration result is more accurate. To improve registration accuracy and increase the robustness of the algorithm, a weight coefficient α and a dynamic iteration factor m are introduced.
The point cloud feature extraction method comprises the following steps:
finding the maximum x_max and minimum x_min of the x coordinates of the point cloud data; given a resolution r, computing the scan step Δx = (x_max - x_min)/r; scanning the point cloud and, for each interval [x_min + (i-1)Δx, x_min + iΔx) (i = 1, 2, ..., r), recording the points at which the y coordinate attains its minimum and maximum values; scanning the point cloud once more in the same way along the y direction, the results of the two scans together forming the boundary of the point cloud;
giving a distance threshold d and fitting straight lines to the four edges of the screen panel skeleton with the RANSAC algorithm;
and computing the spatial coordinates of the four vertices of the screen panel from the equations of the lines on which the four edges of the skeleton lie.
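The latitude-longitude scan above can be sketched as follows (a minimal numpy version; names are illustrative):

```python
import numpy as np

def scanline_boundary(points, r):
    """Boundary extraction by latitude-longitude scanning: split the x
    range into r bins of width dx = (x_max - x_min) / r and, per bin,
    keep the points with minimal and maximal y; repeat with x and y
    swapped and merge the two passes."""
    pts = np.asarray(points, dtype=float)

    def one_pass(a, b):                  # a: binning axis, b: extreme axis
        lo, hi = pts[:, a].min(), pts[:, a].max()
        dx = (hi - lo) / r
        picked = set()
        for i in range(r):
            left = lo + i * dx
            # make the last bin right-closed so the max point is included
            right = hi + 1e-12 if i == r - 1 else left + dx
            in_bin = np.nonzero((pts[:, a] >= left) & (pts[:, a] < right))[0]
            if in_bin.size:
                picked.add(int(in_bin[np.argmin(pts[in_bin, b])]))
                picked.add(int(in_bin[np.argmax(pts[in_bin, b])]))
        return picked

    idx = sorted(one_pass(0, 1) | one_pass(1, 0))
    return pts[idx]
```

On a planar skeleton the two passes together recover the outer contour while strictly interior points are never the y- or x-extreme of any bin.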
The intersection point of two spatial lines is computed as the midpoint of the two feet of their common perpendicular (the shortest segment joining the lines). Let L1 and L2 be two non-coplanar lines, let P0 and P1 be two points on L1, let Q0 and Q1 be two points on L2, and let a and m be arbitrary scalars.
Points on the lines L1 and L2 can then be expressed as
P = aP0 + (1 − a)P1
Q = mQ0 + (1 − m)Q1
where P and Q are points on L1 and L2, respectively. The shortest distance between L1 and L2 is found by solving
min ‖P − Q‖²
which reduces to solving the overdetermined linear system
Ax=b
where
A = (P0 − P1, Q0 − Q1), x = (a, −m)^T, b = Q1 − P1.
Its least-squares solution is
x = (A^T A)^(−1) A^T b
This gives the feet of the common perpendicular, P = (x_P, y_P, z_P) and Q = (x_Q, y_Q, z_Q).
The coordinates of the intersection point of L1 and L2 are finally obtained as
( (x_P + x_Q)/2, (y_P + y_Q)/2, (z_P + z_Q)/2 ).
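The normal-equation solution above can be written directly in NumPy. The helper below (our own naming) forms A and b exactly as in the text and returns the midpoint of the two perpendicular feet:

```python
import numpy as np

def line_pseudo_intersection(P0, P1, Q0, Q1):
    """Midpoint of the common perpendicular of two spatial lines.

    Lines are given by point pairs (P0, P1) and (Q0, Q1); points are
    parameterized as P = a*P0 + (1-a)*P1 and Q = m*Q0 + (1-m)*Q1, and
    min |P - Q|^2 is solved via the normal equations x = (A^T A)^-1 A^T b
    with A = (P0-P1, Q0-Q1), x = (a, -m)^T, b = Q1 - P1.
    """
    P0, P1, Q0, Q1 = (np.asarray(v, float) for v in (P0, P1, Q0, Q1))
    A = np.column_stack([P0 - P1, Q0 - Q1])   # 3x2 coefficient matrix
    b = Q1 - P1
    x = np.linalg.solve(A.T @ A, A.T @ b)     # least-squares solution
    a, m = x[0], -x[1]
    P = a * P0 + (1 - a) * P1                 # foot on the first line
    Q = m * Q0 + (1 - m) * Q1                 # foot on the second line
    return (P + Q) / 2
```

For two lines that actually intersect, the two feet coincide and the midpoint is the true intersection; for skew lines it is the midpoint of the shortest connecting segment.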
The method for accurately registering the point cloud comprises the following steps:
S3.4.3.1: given the original point cloud P and the target point cloud Q, initialize the transformation matrix H_0 = H*, where H* is the coarse registration result; set the weight coefficient α > 1, the dynamic iteration factor m = 0, and the iteration count k = 0;
S3.4.3.2: update the original point cloud P with the pose matrix increment ΔH_k;
S3.4.3.3: for each point in the original point cloud P, search for the closest point in the target point cloud Q, and reorder the target point set accordingly;
S3.4.3.4: solve for the pose matrix increment ΔH_{k+1} via
ΔH_{k+1} = argmin over ΔH of [ Σ(p_i ∉ ROI) ‖ΔH·p_i − q_i‖² + α · Σ(p_i ∈ ROI) ‖ΔH·p_i − q_i‖² ] / (n_p + α·n'_p)
where p_i and q_i are corresponding points on the point clouds P and Q, n_p and n'_p are respectively the numbers of points in the non-interest and interest regions of P, and the weight coefficient α is greater than 1;
S3.4.3.5: if the root mean square distance error err increases, set m = m + 1; otherwise set m = 0;
S3.4.3.6: if m > 0, apply H_{k+1} = ΔH_{k+1}·H_k, re-solving the pose transformation matrix m times;
s3.4.3.7 repeat steps S3.4.3.2-S3.4.3.6 until the root mean square distance error err is less than a given value or the number of iterations k reaches a maximum value.
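A runnable sketch of this fine-registration loop is given below. The patent supplies only the step outline, so the concrete choices here (brute-force nearest neighbours, a weighted Kabsch solve for each increment, and repeated application of the increment while the dynamic factor m is positive) are one plausible reading, not the definitive implementation; all names are our own.

```python
import numpy as np

def weighted_icp(P, Q, roi_mask, H0, alpha=2.0, max_iter=50, tol=1e-6):
    """Fine registration sketch: ICP with weight coefficient alpha for
    region-of-interest points and a dynamic iteration factor m.

    P, Q: (N,3)/(M,3) clouds; roi_mask: bool (N,) marking interest points
    of P; H0: 4x4 coarse-registration result. Returns the refined 4x4 pose.
    """
    w = np.where(roi_mask, alpha, 1.0)          # alpha-weighted ROI points
    H, m, prev_err = H0.copy(), 0, np.inf
    for _ in range(max_iter):
        Pt = P @ H[:3, :3].T + H[:3, 3]         # current pose of P
        nn = np.argmin(((Pt[:, None] - Q[None]) ** 2).sum(-1), axis=1)
        Qn = Q[nn]                              # reordered target points
        # weighted rigid transform (Kabsch) giving the increment dH
        pc = (w[:, None] * Pt).sum(0) / w.sum()
        qc = (w[:, None] * Qn).sum(0) / w.sum()
        U, _, Vt = np.linalg.svd((w[:, None] * (Pt - pc)).T @ (Qn - qc))
        R = Vt.T @ np.diag([1, 1, np.linalg.det(Vt.T @ U.T)]) @ U.T
        dH = np.eye(4); dH[:3, :3] = R; dH[:3, 3] = qc - R @ pc
        err = np.sqrt(((Pt - Qn) ** 2).sum(-1).mean())  # RMS distance error
        m = m + 1 if err > prev_err else 0              # dynamic factor
        H = np.linalg.matrix_power(dH, max(m, 1)) @ H   # apply dH (m times if m>0)
        if err < tol:
            break
        prev_err = err
    return H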
S1052: convert the pose of the screen panel in the camera coordinate system into the pose in the robot coordinate system.
Let T_E^B be the pose matrix of the robot end effector in the robot base coordinate system in the grasping state.
The pose transformation T_G^E between the robot end and the gripper is obtained by tool coordinate system calibration.
The pose transformation T_P^G between the target part and the gripper in the grasping posture is defined from the dimensions of the part and the manipulator.
The pose matrix T_P^C of the grasped screen panel in the camera coordinate system is obtained through point cloud registration.
The pose transformation T_C^B between the camera coordinate system and the robot base coordinate system is obtained through hand-eye calibration.
The grasp pose then follows from the chain T_E^B · T_G^E · T_P^G = T_C^B · T_P^C, i.e. by calculating
T_E^B = T_C^B · T_P^C · (T_P^G)^(−1) · (T_G^E)^(−1),
which gives the pose matrix of the robot end effector in the robot base coordinate system in the grasping state.
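Assuming homogeneous 4×4 transforms, the coordinate chain described above is a one-liner; the argument names below are our own labels for the calibrated quantities:

```python
import numpy as np

def grasp_pose_in_base(T_B_C, T_C_P, T_G_P, T_E_G):
    """Chain the calibrated transforms to get the end-effector grasp pose.

    T_B_C: camera frame in the robot base frame (hand-eye calibration)
    T_C_P: screen panel in the camera frame (point cloud registration)
    T_G_P: panel relative to the gripper in the grasp posture (by design)
    T_E_G: gripper relative to the robot flange (tool calibration)
    Returns T_B_E, the end-effector pose in the base frame, derived from
    T_B_E @ T_E_G @ T_G_P = T_B_C @ T_C_P.
    """
    return T_B_C @ T_C_P @ np.linalg.inv(T_G_P) @ np.linalg.inv(T_E_G)
```

The defining chain identity can be checked numerically with arbitrary rigid transforms.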
S107, the robot grabs and feeds the screen plate according to the processing result of the upper computer.
The robot controller obtains the control signal from the upper computer and controls the robot to grasp and feed the screen panel.
The invention has the following beneficial effects. First, compared with the manual feeding currently in use, the method needs only a simple vision system consisting of an industrial camera, a line laser, and similar components, which in practical applications saves production cost and improves production efficiency. Second, the invention performs point cloud segmentation using sample consensus and region growing algorithms and estimates pose by point cloud template matching, giving it a degree of adaptability to deformation of the screen panel.
In the present application, the embodiments of the line structured light vision based screen panel automatic feeding method are substantially similar to the embodiments of the line structured light vision based screen panel automatic feeding system, and the two descriptions may be cross-referenced.
It is clear to a person skilled in the art that the solution according to the embodiments of the invention can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array), an IC (Integrated Circuit), or the like.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the line structured light vision based screen panel automatic feeding method. The computer-readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
All functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A screen panel automatic feeding system based on line structured light vision, characterized by comprising: a line structured light vision module, a rotary platform, a PLC (programmable logic controller), an upper computer, a robot, and an AGV;
the AGV is used for transporting the screen plate to the scanning area of the linear structured light vision module;
the PLC is used for controlling the rotation of the rotating platform;
the line structured light vision module is fixed on the rotary platform and used for scanning the screen board and sending the scanned data to the upper computer;
the upper computer is used for processing the screen panel data scanned by the line structured light vision module and sending the processed result to the robot;
the robot is used for grabbing and feeding the screen plate according to the processing result of the upper computer.
2. The screen panel automatic feeding system based on line structured light vision of claim 1, wherein the line structured light vision module comprises a CCD industrial camera, a line red laser and a filter, a preset included angle is formed between the CCD industrial camera and the line red laser, and the filter is installed in front of a lens of the CCD industrial camera.
3. A screen panel automatic feeding method based on line structured light vision, applied to the screen panel automatic feeding system based on line structured light vision of claim 1 or 2, characterized by comprising the following steps:
calibrating the system;
when the AGV transports the screen panel to the scanning area of the line structured light vision module, the PLC controls the rotary platform to rotate, the line structured light vision module on the rotary platform scans the screen panel, and the scanned data is sent to the upper computer;
the upper computer processes the screen panel data scanned by the line structured light vision module and sends the processed result to the robot;
and the robot grabs and feeds the screen plate according to the processing result of the upper computer.
4. The line structured light vision based screen panel automatic feeding method according to claim 3, wherein the upper computer processes the screen panel data scanned by the line structured light vision module, and the method comprises the following steps:
the upper computer processes the screen plate data scanned by the line structured light vision module to obtain the pose of the screen plate under a camera coordinate system;
and converting the pose of the screen plate under the camera coordinate system into the pose under the robot coordinate system.
5. The automatic screen board feeding method based on the line structured light vision of claim 4, wherein the upper computer processes the screen board data scanned by the line structured light vision module to obtain the pose of the screen board in a camera coordinate system, and the method comprises the following steps:
assuming that (a, b, c) is a point on the rotation axis of the rotary platform, (u, v, w) is the unit direction vector of the rotation axis, and θ is the rotation angle, the coordinate transformation matrix for splicing single-frame line scan point clouds in the camera coordinate system is
T(θ) = [ R(θ)   (I − R(θ))·(a, b, c)^T ; 0 0 0 1 ],
where R(θ) = cosθ·I + (1 − cosθ)·nn^T + sinθ·[n]_×, with n = (u, v, w)^T and [n]_× the skew-symmetric cross-product matrix of n;
performing point cloud down-sampling by using an improved voxel filtering algorithm, creating a three-dimensional voxel grid for input point cloud data, and representing all points in a voxel by using a point closest to a voxel gravity center point in original point cloud data;
carrying out primary plane model segmentation on the point cloud data through a sampling consistency algorithm, and then continuously segmenting through a region growing method to remove vertical cluster points and noise points to finally obtain a skeleton of the point cloud of the screen plate;
and carrying out registration of the screen plate skeleton point cloud.
6. The line structured light vision-based screen panel automatic feeding method of claim 5, wherein the registering of the screen panel skeleton point cloud comprises:
extracting the boundary of the screen panel framework point cloud by a latitude and longitude scanning method, then performing straight line fitting on four edges of the screen panel framework by using an RANSAC algorithm, and then calculating the spatial coordinate values of four angular points of the screen panel framework;
performing coarse registration based on a Euclidean distance constraint, and performing point cloud coarse registration on the extracted corner points and the model corner points by using the ICP (Iterative Closest Point) algorithm; letting p1 and p2 be a corresponding point pair on the coarse registration result and the target point cloud, c1 and c2 the geometric centers of the coarsely registered point cloud and the target point cloud respectively, and δ the distance constraint threshold, if the constraint
| ‖p1 − c1‖ − ‖p2 − c2‖ | < δ
is satisfied, considering p1 and p2 a valid match; otherwise considering the match unsatisfactory and removing the corresponding point pair;
and introducing fine registration of the weight coefficients and the iteration factors.
7. The screen panel automatic feeding method based on line structured light vision according to claim 6, wherein extracting the boundary of the screen panel skeleton point cloud by the longitude and latitude scanning method, fitting straight lines to the four edges of the screen panel skeleton by the RANSAC algorithm, and then calculating the spatial coordinate values of the four corner points of the screen panel skeleton comprises:
finding the maximum value x_max and minimum value x_min of the x coordinates of the point cloud data; given a resolution r, computing the division step Δx = (x_max − x_min)/r; scanning the point cloud and, for each interval [x_min + (i−1)Δx, x_min + iΔx) with i = 1, 2, …, r, recording the points whose y coordinate attains its minimum and maximum; similarly scanning the point cloud again along the y direction, and forming the point cloud boundary from the results of the two scans;
giving a distance threshold value d, and performing straight line fitting on four edges of the screen plate skeleton by using a RANSAC algorithm;
and calculating the space coordinates of four vertexes of the screen plate according to an equation of a straight line where four edges of the framework of the screen plate are located.
8. The line structured light vision-based screen panel automatic feeding method of claim 6, wherein the introducing of the fine registration of the weight coefficients and the iteration factors comprises:
S3.4.3.1: given an original point cloud P and a target point cloud Q, initializing the transformation matrix H_0 = H*, where H* is the coarse registration result; setting the weight coefficient α > 1, the dynamic iteration factor m = 0, and the iteration count k = 0;
S3.4.3.2: updating the original point cloud P with the pose matrix increment ΔH_k;
s3.4.3.3: searching each point in the original point cloud P for the closest point in the target point cloud Q, and reordering the target point set according to the closest point;
S3.4.3.4: solving for the pose matrix increment ΔH_{k+1} via
ΔH_{k+1} = argmin over ΔH of [ Σ(p_i ∉ ROI) ‖ΔH·p_i − q_i‖² + α · Σ(p_i ∈ ROI) ‖ΔH·p_i − q_i‖² ] / (n_p + α·n'_p)
where p_i and q_i are corresponding points on the point clouds P and Q, n_p and n'_p are respectively the numbers of points in the non-interest and interest regions of P, and the weight coefficient α is greater than 1;
S3.4.3.5: if the root mean square distance error err increases, setting m = m + 1; otherwise setting m = 0;
S3.4.3.6: if m > 0, applying H_{k+1} = ΔH_{k+1}·H_k, re-solving the pose transformation matrix m times;
s3.4.3.7: steps S3.4.3.2-S3.4.3.6 are repeated until the root mean square distance error err is less than a given value or the number of iterations k reaches a maximum value.
9. The method for automatically feeding the screen plate based on the line structured light vision according to any one of claims 4 to 8, wherein the step of converting the pose of the screen plate under a camera coordinate system into the pose under a robot coordinate system comprises the following steps:
letting T_E^B be the pose matrix of the robot end effector in the robot base coordinate system in the grasping state;
obtaining the pose transformation T_G^E between the robot end and the gripper by tool coordinate system calibration;
defining the pose transformation T_P^G between the target part and the gripper in the grasping posture from the dimensions of the part and the manipulator;
obtaining the pose matrix T_P^C of the grasped screen panel in the camera coordinate system through point cloud registration;
obtaining the pose transformation T_C^B between the camera coordinate system and the robot base coordinate system through hand-eye calibration;
and then calculating
T_E^B = T_C^B · T_P^C · (T_P^G)^(−1) · (T_G^E)^(−1)
to obtain the pose matrix of the robot end effector in the robot base coordinate system in the grasping state.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 3 to 9.
CN202110817326.3A 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision Active CN113483664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110817326.3A CN113483664B (en) 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision


Publications (2)

Publication Number Publication Date
CN113483664A true CN113483664A (en) 2021-10-08
CN113483664B CN113483664B (en) 2022-10-21

Family

ID=77942321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110817326.3A Active CN113483664B (en) 2021-07-20 2021-07-20 Screen plate automatic feeding system and method based on line structured light vision

Country Status (1)

Country Link
CN (1) CN113483664B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387347A (en) * 2021-10-26 2022-04-22 浙江智慧视频安防创新中心有限公司 Method and device for determining external parameter calibration, electronic equipment and medium
CN117140627A (en) * 2023-10-30 2023-12-01 诺梵(上海)系统科技股份有限公司 Screen production line

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093543A (en) * 2007-06-13 2007-12-26 中兴通讯股份有限公司 Method for correcting image in 2D code of quick response matrix
CN101334267A (en) * 2008-07-25 2008-12-31 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
CN103424086A (en) * 2013-06-30 2013-12-04 北京工业大学 Image collection device for internal surface of long straight pipe
CN105067023A (en) * 2015-08-31 2015-11-18 中国科学院沈阳自动化研究所 Panorama three-dimensional laser sensor data calibration method and apparatus
CN105115560A (en) * 2015-09-16 2015-12-02 北京理工大学 Non-contact measurement method for cabin capacity
CN108180825A (en) * 2016-12-08 2018-06-19 中国科学院沈阳自动化研究所 A kind of identification of cuboid object dimensional and localization method based on line-structured light
CN109249392A (en) * 2018-08-31 2019-01-22 先临三维科技股份有限公司 Calibration method, calibration element, device, equipment and the medium of workpiece grabbing system
CN109272537A (en) * 2018-08-16 2019-01-25 清华大学 A kind of panorama point cloud registration method based on structure light
CN109489548A (en) * 2018-11-15 2019-03-19 河海大学 A kind of part processing precision automatic testing method using three-dimensional point cloud
CN109559338A (en) * 2018-11-20 2019-04-02 西安交通大学 A kind of three-dimensional point cloud method for registering estimated based on Weighted principal component analysis and M
CN109900204A (en) * 2019-01-22 2019-06-18 河北科技大学 Large forgings size vision measurement device and method based on line-structured light scanning
CN109927036A (en) * 2019-04-08 2019-06-25 青岛小优智能科技有限公司 A kind of method and system of 3D vision guidance manipulator crawl
CN109934859A (en) * 2019-03-18 2019-06-25 湖南大学 It is a kind of to retrace the ICP method for registering for stating son based on feature enhancing multi-dimension Weight
KR20190073244A (en) * 2017-12-18 2019-06-26 삼성전자주식회사 Image processing method based on iterative closest point (icp) technique
CN110335297A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on feature extraction
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning grasping system and method based on cloud template matching technique
CN110455189A (en) * 2019-08-26 2019-11-15 广东博智林机器人有限公司 A kind of vision positioning method and transfer robot of large scale material
CN110728623A (en) * 2019-08-27 2020-01-24 深圳市华讯方舟太赫兹科技有限公司 Cloud point splicing method, terminal equipment and computer storage medium
CN111062938A (en) * 2019-12-30 2020-04-24 科派股份有限公司 Plate expansion plug detection system and method based on machine learning
CN111553938A (en) * 2020-04-29 2020-08-18 南京航空航天大学 Multi-station scanning point cloud global registration method based on graph optimization
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method
CN111820545A (en) * 2020-06-22 2020-10-27 浙江理工大学 Method for automatically generating sole glue spraying track by combining offline and online scanning
CN112053432A (en) * 2020-09-15 2020-12-08 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Liu Xiaohui et al.: "Plant point cloud registration using improved IRLS-ICP", Computer Engineering and Design *
Xu De et al.: "Robot Vision Measurement and Control", National Defense Industry Press, 31 May 2011 *
Xu Shengrun: "Research on feature extraction and 3D reconstruction algorithms for panel furniture based on laser sensors", China Master's Theses Full-text Database, Engineering Science and Technology I *
Xue Lianjie: "Research on object size and orientation recognition based on 3D point clouds for mobile robots", China Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114387347A (en) * 2021-10-26 2022-04-22 浙江智慧视频安防创新中心有限公司 Method and device for determining external parameter calibration, electronic equipment and medium
CN114387347B (en) * 2021-10-26 2023-09-19 浙江视觉智能创新中心有限公司 Method, device, electronic equipment and medium for determining external parameter calibration
CN117140627A (en) * 2023-10-30 2023-12-01 诺梵(上海)系统科技股份有限公司 Screen production line
CN117140627B (en) * 2023-10-30 2024-01-26 诺梵(上海)系统科技股份有限公司 Screen production line

Also Published As

Publication number Publication date
CN113483664B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN112417591B (en) Vehicle modeling method, system, medium and equipment based on holder and scanner
US9707682B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
JP5788460B2 (en) Apparatus and method for picking up loosely stacked articles by robot
CN112060087B (en) Point cloud collision detection method for robot to grab scene
CN113483664B (en) Screen plate automatic feeding system and method based on line structured light vision
JP5469216B2 (en) A device for picking up bulk items by robot
US10102629B1 (en) Defining and/or applying a planar model for object detection and/or pose estimation
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN113096094B (en) Three-dimensional object surface defect detection method
CN109559341B (en) Method and device for generating mechanical arm grabbing scheme
CN110980276B (en) Method for implementing automatic casting blanking by three-dimensional vision in cooperation with robot
JP2010207989A (en) Holding system of object and method of detecting interference in the same system
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
CN114474056B (en) Monocular vision high-precision target positioning method for grabbing operation
JPWO2020144784A1 (en) Image processing equipment, work robots, substrate inspection equipment and sample inspection equipment
TW201714695A (en) Flying laser marking system with real-time 3D modeling and method thereof
CN113532277A (en) Method and system for detecting plate-shaped irregular curved surface workpiece
EP4023398A1 (en) Information processing device, configuration device, image recognition system, robot system, configuration method, learning device, and learned model generation method
Premachandra et al. A study on hovering control of small aerial robot by sensing existing floor features
JP5544464B2 (en) 3D position / posture recognition apparatus and method for an object
CN110232710B (en) Article positioning method, system and equipment based on three-dimensional camera
CN110363801B (en) Method for matching corresponding points of workpiece real object and three-dimensional CAD (computer-aided design) model of workpiece
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN115131208A (en) Structured light 3D scanning measurement method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230512

Address after: 226000 No.9 Longyou Road, Wuyao Town, Rugao City, Nantong City, Jiangsu Province

Patentee after: Jiangsu Kepai Fali Intelligent System Co.,Ltd.

Address before: 225000 KEPAI Co., Ltd., No. 11, Jingang Road, Yangzhou City, Jiangsu Province

Patentee before: CUBESPACE CO.,LTD.