CN111091528A - Visual positioning system and method for automatic detection of filter screen of dust-free room - Google Patents
- Publication number: CN111091528A (application CN201811241887.8A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/0002—Inspection of images, e.g. flaw detection
          - G06T7/0004—Industrial image inspection
        - G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention discloses a visual positioning system for automatic detection of a clean room filter screen, comprising an image acquisition unit, an automatic detection unit, and a detection execution unit. The image acquisition unit photographs the clean room filter screen and, through an image recognition algorithm, obtains the coordinate values of the screen's four corner points in the robot base coordinate system. It transmits the calculated coordinate values to the automatic detection unit, which plans a zigzag scanning path from them; the detection execution unit then completes the corresponding path planning and executes the scanning detection action according to the obtained corner coordinates of the clean room filter screen.
Description
[ technical field ]
The invention relates to a building robot system and its control method, and in particular to a visual positioning system for automatic detection of a clean room filter screen and a detection method thereof.
[ background of the invention ]
During wind-speed and leakage detection of a clean room FFU (Fan Filter Unit), the detection equipment must be held at a distance of about 50 mm from the FFU, and the scan must fully cover the FFU surface without crossing its boundary. Traditional manual detection cannot control the position to this precision; it is strongly subject to accidental human factors and is therefore unreliable.
In view of the above-mentioned drawbacks of the prior art, there is a need for an improved vision positioning system for automatic inspection of a clean room filter screen and an inspection method thereof.
[ summary of the invention ]
The technical problem to be solved by the invention is as follows: a visual positioning system for automatic detection of a filter screen in a clean room is provided.
In order to solve the above problems, the present invention can adopt the following technical scheme:
a visual positioning system for automatic detection of cleanroom screens, comprising: the robot comprises an image acquisition unit, an automatic detection unit and a detection execution unit, wherein the image acquisition unit is used for shooting an image of a dust-free room filter screen and then acquiring coordinate values of four corner points of the dust-free room filter screen under a robot base coordinate system through an image recognition algorithm; the image acquisition unit transmits the calculated coordinate values into the automatic detection unit, and the automatic detection unit plans a zigzag scanning path according to the coordinate values; and the detection execution unit finishes corresponding path planning and executes scanning detection actions according to the obtained coordinates of the corner points of the filter screen of the dust-free room.
Another technical problem to be solved by the present invention is: a visual positioning method for automatic detection of a filter screen in a clean room is provided.
In order to solve the above problems, the present invention can adopt the following technical scheme:
a visual positioning method for automatic detection of a filter screen of a dust free room comprises the following steps:
(1) providing an image acquisition module, wherein the image acquisition module acquires a picture of a filter screen of the dust-free room and calculates to obtain pixel coordinates of an angular point of the filter screen of the dust-free room through an image processing algorithm;
(2) providing a coordinate conversion module, wherein the coordinate conversion module calculates to obtain a numerical value of the filter screen corner point of the dust-free room under a robot base coordinate through a coordinate conversion algorithm;
(3) and providing an automatic detection module, wherein the automatic detection module automatically plans a track and automatically detects the track through a path planning algorithm.
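Step (1) can be illustrated with a deliberately simplified corner extractor. A real implementation would use an OpenCV pipeline (edge detection, contour approximation); the numpy-only sketch below assumes the filter screen appears as a bright, axis-aligned rectangle against a darker background, which is an assumption made here for illustration:

```python
import numpy as np

def screen_corners(image, threshold=128):
    """Pixel coordinates (col, row) of the four corners of the bright
    axis-aligned region in a grayscale image; a simplified stand-in for
    the patent's image processing algorithm."""
    rows, cols = np.nonzero(image >= threshold)
    top, bottom = int(rows.min()), int(rows.max())
    left, right = int(cols.min()), int(cols.max())
    return [(left, top), (right, top), (right, bottom), (left, bottom)]

# Synthetic frame: a bright 40 x 60 patch inside a 100 x 100 dark image.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[30:70, 20:80] = 255
corners = screen_corners(frame)
```

The four pixel coordinates returned here are what step (2) would feed into the coordinate conversion module.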
Compared with the prior art, the invention has the following beneficial effects: the invention converts the pixel coordinates of the corner points obtained by image processing into the base coordinates of the automatic detection equipment, and then controls the detection probe to automatically complete the detection action according to the obtained coordinates.
[ description of the drawings ]
Fig. 1 is a schematic view of an automatic inspection robot for a clean room filter screen.
FIG. 2 is a schematic diagram of a visual alignment system for automatic inspection of cleanroom screens.
Fig. 3 is a schematic view of a visual algorithm process flow.
Fig. 4 is a schematic diagram of a coordinate transformation algorithm.
[ detailed description of embodiments ]
As shown in fig. 1, an automatic inspection robot 100 for a clean room filter screen is provided below a clean room filter screen 90, and includes an inspection probe 91, a Z-direction lifting motor 92, an X-direction horizontal movement motor 93, a Y-direction horizontal movement motor 94, an XYZ-movement platform support 95, a camera 96, an electric cabinet 97, and a distance sensor 98. For a detailed description of such an automatic inspection robot for a filter screen of a clean room, refer to patent application No. CN 201810471214.5.
As shown in fig. 2, the visual positioning system 200 for automatic inspection of a clean room filter screen, belonging to the automatic inspection robot 100, includes: an image acquisition unit 11, an automatic detection unit 12, and a detection execution unit 13.
The image acquisition unit 11 is configured to capture an image of the clean room filter screen 90 and then obtain, through an image recognition algorithm, the coordinate values of the four corner points of the clean room filter screen 90 in the robot base coordinate system. The image acquisition unit 11 transmits the calculated coordinate values to the automatic detection unit 12, which plans a zigzag scanning path according to them, and the detection execution unit 13 completes the corresponding path planning and executes the scanning detection action according to the obtained corner coordinates of the clean room filter screen 90. The automatic detection unit 12 comprises a detection probe 91 and a distance sensor 98 located below the detection probe 91. The detection execution unit 13 comprises a Z-direction (vertical) lifting motor 92 and, according to information from the distance sensor 98, controls the Z-direction lifting motor 92 to raise the detection probe 91 until it is within a predetermined distance of the clean room filter screen 90 to be detected. Preferably, the predetermined distance is 5 cm.
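The distance-controlled probe lift can be sketched as a simple feedback loop. The `read_distance_mm` and `step_up_mm` callbacks below are hypothetical hardware interfaces, not part of the patent; the 50 mm target matches the embodiment's 5 cm predetermined distance:

```python
def raise_probe_to_distance(read_distance_mm, step_up_mm, target_mm=50.0,
                            tolerance_mm=1.0, max_steps=1000):
    """Raise the probe until the distance sensor reads the predetermined
    gap. Both callbacks stand in for unspecified hardware interfaces."""
    for _ in range(max_steps):
        gap = read_distance_mm()
        if gap <= target_mm + tolerance_mm:
            return gap
        step_up_mm(min(5.0, gap - target_mm))  # small, conservative steps
    raise RuntimeError("probe failed to reach the target distance")

# Simulated hardware: the probe starts 300 mm from the filter screen.
_sim = {"gap": 300.0}
final_gap = raise_probe_to_distance(
    lambda: _sim["gap"],
    lambda d: _sim.__setitem__("gap", _sim["gap"] - d))
```

A real controller would also clamp travel limits and debounce the sensor; those details are omitted from this sketch.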
The detection execution unit 13 also includes an X-direction horizontal movement motor 93, which it controls to drive the detection probe 91 through the horizontal-direction scanning detection action, and a Y-direction horizontal movement motor 94, which it controls to drive the detection probe 91 through the transverse-direction scanning detection action. Together, the X-direction horizontal movement motor 93 and the Y-direction horizontal movement motor 94 execute the scanning detection action along the planned zigzag trajectory.
As shown in fig. 3, a visual positioning method for automatic detection of a clean room filter screen includes the following steps: (1) providing an image acquisition module 21, which captures a picture of the clean room filter screen 90 and computes the pixel coordinates of its corner points through an image processing algorithm; (2) providing a coordinate conversion module 22, which converts the corner points of the clean room filter screen 90 into values in the robot base coordinate system through a coordinate conversion algorithm; (3) providing an automatic detection module 23, which plans a trajectory and performs detection automatically through a path planning algorithm.
As shown in fig. 4, the coordinate conversion algorithm includes the following steps: (1) providing a calibration plate and a detection probe, with the calibration plate fixed near the detection probe; (2) providing an automatic detection platform and a camera, and controlling the automatic detection platform to move the calibration plate until the camera sees it sharply; (3) capturing a sharp image of the calibration plate and recording the X, Y, Z axis travel corresponding to that picture; (4) repeating steps (1) to (3) to capture calibration plate images at a number of different positions (at least 20) and recording the X, Y, Z axis travel for each; (5) providing a toolbox computer and loading all the pictures into it; (6) judging whether the calibration error meets the requirement: if so, performing the coordinate conversion; if not, repeating steps (2) to (5). The robot here is the clean room filter screen automatic detection robot.
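In practice the toolbox computation would be a camera calibration routine such as OpenCV's or Matlab's. What the resulting intrinsic matrix M_in does can be shown with a small numpy example; the focal lengths and principal point below are invented values, not taken from the patent:

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy are invented values).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
M_in = np.array([[fx, 0.0, cx],
                 [0.0, fy, cy],
                 [0.0, 0.0, 1.0]])

# Project a camera-frame point (mm) to pixels, then undo the projection
# with the inverse intrinsics, as in the P_cam = M_in^-1 * P_pixel step.
P_cam = np.array([100.0, 50.0, 1000.0])
p = M_in @ P_cam
P_pixel = p / p[2]                    # homogeneous -> pixel coordinates
ray = np.linalg.inv(M_in) @ P_pixel   # P_cam direction, up to depth
```

Because a single pixel only fixes a ray, the inverse mapping recovers P_cam up to the (here known) depth of the point.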
After the image processing module 21 obtains the pixel coordinates of the corner points of the clean room filter screen 90 in the image frame, the coordinate conversion module 22 converts these pixel coordinates into values in the robot base coordinate system, which facilitates the subsequent trajectory planning and automatic detection.
The coordinate systems of the robot system involved in the coordinate transformation module 22 are defined as shown in fig. 4: {B}: robot base coordinate system; {E}: detection probe end coordinate system; {K}: calibration plate coordinate system; {C}: camera coordinate system.
The principle of the coordinate transformation algorithm shown in fig. 4 has two parts: camera calibration and hand-eye calibration. Camera calibration determines the camera intrinsic matrix M_in and the extrinsic matrix of each image (the pose of the calibration plate frame {K} relative to the camera frame {C}); hand-eye calibration determines the transformation B_T_C of the camera coordinate system {C} relative to the robot base coordinate system {B}. After the image processing module obtains the pixel coordinate P_pixel of a corner point of the clean room filter screen 90 in the image, multiplying it by the inverse of the intrinsic matrix, P_cam = M_in^-1 · P_pixel, gives the coordinate of the FFU corner point in the camera coordinate system; multiplying P_cam by the hand-eye transformation, P_base = B_T_C · P_cam, gives the coordinate value of the FFU corner point in the robot base coordinate system. Once this coordinate value is obtained, the subsequent path planning and automatic detection can proceed. The detailed process is as follows:
(1) Camera calibration first. Fix a calibration plate near the detection probe, adjust the plate and camera settings, and capture an image of the calibration plate.
(2) Control the automatic detection moving platform to move the calibration plate, capture 20 sharp calibration images at different positions, and record the X, Y, Z three-axis travel corresponding to each image. Import all captured images into the Matlab/OpenCV calibration toolbox to compute the camera intrinsic matrix M_in and the extrinsic matrix C_T_K of each image. The intrinsic parameters represent the transformation between the image coordinate system and the camera coordinate system; the extrinsic parameters represent the transformation between each calibration plate position and the camera coordinate system.
(3) Hand-eye calibration after camera calibration is completed. From the 20 recorded groups of X, Y, Z three-axis travel, compute the corresponding transformations B_T_E between the end coordinate system {E} and the robot base coordinate system {B} by the forward-kinematics solution of the robot.
(4) Extract the 20 extrinsic matrices obtained during camera calibration, in one-to-one correspondence with the B_T_E transformations.
(5) Combine the two sets through the dual quaternion algorithm to obtain E_T_K, the transformation of the calibration plate coordinate system {K} relative to the robot end coordinate system {E}.
(6) Compute the transformation B_T_C between the camera coordinate system {C} and the base coordinate system {B}.
(7) After the image processing module obtains the pixel coordinate P_pixel of an FFU corner point in the image, multiply it by the inverse of the intrinsic matrix determined by camera calibration: P_cam = M_in^-1 · P_pixel, the coordinate of the FFU corner point in the camera coordinate system.
(8) Multiply P_cam by B_T_C to obtain the coordinate value P_base of the FFU corner point in the robot base coordinate system.
(9) Pass P_base to the automatic detection unit, which starts path planning and performs the detection scan actions.
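The two multiplications of the conversion (inverse intrinsics, then the hand-eye transform, written `T_bc` in the code) can be sketched end to end. The intrinsic and hand-eye values below are invented example numbers, and a known corner depth is assumed because a single pixel only fixes a ray:

```python
import numpy as np

M_in = np.array([[800.0, 0.0, 320.0],     # invented intrinsics
                 [0.0, 800.0, 240.0],
                 [0.0, 0.0, 1.0]])
T_bc = np.array([[1.0, 0.0, 0.0, 150.0],  # invented hand-eye transform
                 [0.0, 1.0, 0.0, -40.0],  # (camera frame -> base frame)
                 [0.0, 0.0, 1.0, 500.0],
                 [0.0, 0.0, 0.0, 1.0]])

def pixel_to_base(p_pixel, depth_mm):
    """(u, v) pixel plus known depth -> point in the robot base frame."""
    ray = np.linalg.inv(M_in) @ np.array([p_pixel[0], p_pixel[1], 1.0])
    p_cam = ray * depth_mm                     # inverse-intrinsics step, scaled
    return (T_bc @ np.append(p_cam, 1.0))[:3]  # hand-eye step

corner_base = pixel_to_base((320.0, 240.0), 1000.0)
```

In the patent's setup the depth is effectively fixed by the geometry of the detection platform, so each corner pixel maps to a single base-frame point.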
The transformation relationships are as follows:
A: pose of the detector end in the base coordinate system, A = B_T_E. From the X, Y, Z travel recorded when the 20 calibration plate images were captured, the 20 corresponding transformations follow from the robot forward-kinematics solution.
B: pose of the calibration plate in the robot end coordinate system, B = E_T_K, obtained from the dual quaternion hand-eye calibration.
C: pose of the camera in the calibration plate coordinate system, C = K_T_C. Camera calibration yields 20 such transformations, in one-to-one correspondence with the captured calibration plate images (each is the inverse of that image's extrinsic matrix).
D: pose of the camera in the robot base coordinate system. By chaining the transformations: D = A · B · C.
E: pose of the FFU in the camera coordinate system. With the camera intrinsic matrix M_in obtained from camera calibration and the pixel coordinate P_pixel of the FFU corner point obtained by the image processing module in the image coordinate system: E = M_in^-1 · P_pixel.
F: pose of the FFU in the base coordinate system. Through the above coordinate conversion, the coordinate value of the FFU corner point in the robot base coordinate system is F = D · E.
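The chain D = A·B·C followed by F = D·E can be checked numerically with 4x4 homogeneous transforms; the rotations and translations below are invented example poses, not calibration results:

```python
import numpy as np

def transform(rot_z_deg, t):
    """4x4 homogeneous transform: rotation about z, then translation t."""
    a = np.radians(rot_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Invented poses: A = probe end in base frame, B = plate in end frame,
# C = camera in plate frame (all translations in mm).
A = transform(90.0, [400.0, 0.0, 300.0])
B = transform(0.0, [0.0, 0.0, 120.0])
C = transform(-90.0, [10.0, 5.0, 0.0])
D = A @ B @ C                          # camera pose in the base frame

E = np.array([0.1, 0.05, 1.0, 1.0])   # FFU corner in the camera frame
F = D @ E                              # same corner in the base frame
```

Matrix multiplication is associative, so composing A, B, C once into D and reusing D for every corner point is equivalent to chaining the three transforms per point, but cheaper.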
compared with the prior art, the invention has the following beneficial effects: the invention converts the pixel coordinates of the corner points obtained by image processing into the base coordinates of the automatic detection equipment, and then controls the detection probe to automatically complete the detection action according to the obtained coordinates.
The above description is only one embodiment of the present invention, not all of the embodiments; any equivalent changes to the technical solution of the present invention made by a person skilled in the art after reading this description are covered by the claims of the present invention.
Claims (10)
1. A visual positioning system for automatic detection of a filter screen of a dust-free room, characterized by comprising an image acquisition unit, an automatic detection unit and a detection execution unit, wherein the image acquisition unit is used for photographing the filter screen of the dust-free room and then obtaining, through an image recognition algorithm, the coordinate values of the four corner points of the filter screen under a robot base coordinate system; the image acquisition unit transmits the calculated coordinate values to the automatic detection unit, and the automatic detection unit plans a zigzag scanning path according to the coordinate values; and the detection execution unit completes the corresponding path planning and executes the scanning detection action according to the obtained corner coordinates of the filter screen of the dust-free room.
2. The visual positioning system for automatic inspection of cleanroom screens of claim 1, wherein: the automatic detection unit comprises a detection probe and a distance sensor located below the detection probe; the detection execution unit comprises a vertical lifting motor and, according to information from the distance sensor, controls the vertical lifting motor to raise the detection probe until the detection probe is within a predetermined distance of the dust-free room filter screen to be detected.
3. The visual positioning system for automatic inspection of cleanroom screens of claim 2, wherein: the predetermined distance is 5 cm.
4. The visual positioning system for automatic inspection of cleanroom screens of claim 2, wherein: the detection execution unit comprises a horizontal lifting motor, and the detection execution unit controls the horizontal lifting motor to drive the detection probe to perform scanning detection actions in the horizontal direction.
5. The visual positioning system for automatic inspection of cleanroom screens of claim 2, wherein: the detection execution unit comprises a transverse lifting motor, and the detection execution unit controls the transverse lifting motor to drive the detection probe to perform scanning detection actions in the transverse direction.
6. A visual positioning method for automatic detection of a filter screen of a dust free room is characterized by comprising the following steps:
(1) providing an image acquisition module, wherein the image acquisition module acquires a picture of a filter screen of the dust-free room and calculates to obtain pixel coordinates of an angular point of the filter screen of the dust-free room through an image processing algorithm;
(2) providing a coordinate conversion module, wherein the coordinate conversion module calculates to obtain a numerical value of the filter screen corner point of the dust-free room under a robot base coordinate through a coordinate conversion algorithm;
(3) and providing an automatic detection module, wherein the automatic detection module automatically plans a track and automatically detects the track through a path planning algorithm.
7. The visual positioning method for automatic detection of a cleanroom screen of claim 6, wherein: the coordinate conversion algorithm comprises the following steps:
(1) providing a calibration plate and a detection probe, wherein the calibration plate is fixed near the detection probe;
(2) providing an automatic detection platform and a camera, and controlling the automatic detection platform to drive the calibration plate to move the camera to see the calibration plate clearly;
(3) taking a clear image of the calibration plate and recording the X, Y, Z axis movement distance corresponding to the picture;
(4) repeating the steps (1) to (3) to shoot a plurality of calibration board images at different positions and recording X, Y, Z axis moving distance;
(5) providing a tool box computer, and inputting all pictures into the tool box computer;
(6) and judging whether the error meets the requirement, and if so, performing coordinate conversion.
8. The visual positioning method for automatic detection of a cleanroom screen of claim 6, wherein: in the step (6), if the error does not meet the requirement, repeating the steps (2) to (5).
9. The visual positioning method for automatic detection of a cleanroom screen of claim 6, wherein: in the step (4), at least 20 images of the calibration plates at different positions are shot.
10. The visual positioning method for automatic detection of a cleanroom screen of claim 6, wherein: in the step (2), the robot is a dust-free room filter screen automatic detection robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811241887.8A CN111091528A (en) | 2018-10-24 | 2018-10-24 | Visual positioning system and method for automatic detection of filter screen of dust-free room |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111091528A true CN111091528A (en) | 2020-05-01 |
Family
ID=70392065
Legal Events

Date | Code | Title
---|---|---
2020-05-01 | PB01 | Publication
 | WD01 | Invention patent application deemed withdrawn after publication