CN115106260A - Automatic gluing and detecting method based on 3D vision - Google Patents

Automatic gluing and detecting method based on 3D vision

Info

Publication number
CN115106260A
Authority
CN
China
Prior art keywords
glue
vision
gluing
upper computer
cloud data
Prior art date
Legal status
Pending
Application number
CN202210825722.5A
Other languages
Chinese (zh)
Inventor
关肖州
刘贝贝
贺文强
Current Assignee
Henan Alson Intelligent Technology Co ltd
Original Assignee
Henan Alson Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Henan Alson Intelligent Technology Co ltd filed Critical Henan Alson Intelligent Technology Co ltd
Priority to CN202210825722.5A priority Critical patent/CN115106260A/en
Publication of CN115106260A publication Critical patent/CN115106260A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B05: SPRAYING OR ATOMISING IN GENERAL; APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C: APPARATUS FOR APPLYING FLUENT MATERIALS TO SURFACES, IN GENERAL
    • B05C11/00: Component parts, details or accessories not specifically provided for in groups B05C1/00 - B05C9/00
    • B05C11/10: Storage, supply or control of liquid or other fluent material; Recovery of excess liquid or other fluent material
    • B05C11/1002: Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves
    • B05C11/1005: Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves, responsive to condition of liquid or other fluent material already applied to the surface, e.g. coating thickness, weight or pattern
    • B05C11/1015: Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves, responsive to conditions of ambient medium or target, e.g. humidity, temperature; responsive to position or movement of the coating head relative to the target
    • B05C11/1018: Means for controlling supply, i.e. flow or pressure, of liquid or other fluent material to the applying apparatus, e.g. valves, responsive to distance of target
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/06: Measuring arrangements characterised by the use of optical techniques for measuring thickness, e.g. of sheet material
    • G01B11/0608: Height gauges
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The invention belongs to the field of glue-coating inspection processes in industry, and in particular relates to an automatic gluing and detecting method based on 3D vision. It addresses the problems of systems that inspect the glue bead in real time with two or three 2D cameras: 2D cameras are strongly affected by illumination, so detection errors occur easily, and inspection can only be carried out after gluing is finished, which lengthens the cycle time. In the proposed method, the 3D vision sensor carries an embedded platform that outputs point cloud data over the network in real time, so that glue height, glue width and glue volume can be measured effectively, abnormal conditions such as glue breaks, glue leakage and stringing can be judged efficiently and accurately, and the production efficiency of the enterprise is improved.

Description

Automatic gluing and detecting method based on 3D vision
Technical Field
The invention relates to the technical field of gluing inspection processes in industry, and in particular to an automatic gluing and detecting method based on 3D vision; the method is widely applicable to automobiles, electronic products and the like.
Background
With the rapid development of industrial production in China and the rapid rise in the degree of automation, automatic gluing systems are gradually replacing manual gluing. Automatic gluing inspection systems currently on the market fall into two types. The first inspects the quality of the glue bead in real time with two or three 2D cameras; because 2D cameras are strongly affected by illumination, detection errors occur easily, and inspection can only be carried out after gluing is finished, which lengthens the cycle time. The second uses several cameras plus several lasers, but every camera must be connected directly to the industrial personal computer, so the cabling is messy, the working range of the industrial robot is restricted, and problems such as loose camera cable connectors arise easily.
Disclosure of Invention
The automatic gluing and detecting method based on 3D vision proposed by the invention solves two problems of the prior art: first, that the quality of the glue bead is inspected in real time by two or three 2D cameras, which are strongly affected by illumination, so detection errors occur easily and inspection can only be carried out after gluing is finished, which lengthens the cycle time; second, that several cameras and lasers must each be connected directly to the industrial personal computer, so the cabling is messy, the working range of the industrial robot is restricted, and the camera cable connectors easily come loose. With the proposed method, glue height, glue width and glue volume can be measured effectively, and abnormal conditions such as glue breaks, glue leakage and stringing can be judged efficiently and accurately.
In order to achieve the purpose, the invention adopts the following technical scheme:
an automatic gluing and detecting method based on 3D vision comprises the following steps:
s1: the 3D vision sensor is arranged at the tail end of a mechanical arm of the industrial robot, and is calibrated according to the robot and the 3D vision sensor before the system works;
s2: before the system works, dividing a path to be detected into N sections according to a gluing process, teaching the path to be detected by a robot, setting a signal point at each gluing completion position, and outputting a glue section number, a glue gun tail end pose, a glue gun pressure value and the like to upper computer vision software when a robot program is executed to the signal point position;
s3: before the system works, according to the glue segment information of S2, the upper computer vision software sets relevant parameters respectively;
s4: when the system works, the upper computer vision software opens the four cameras on the 3D vision sensor and sets the four laser lines to be normally bright;
s5: selecting two cameras in the gluing direction to collect images according to the glue segment parameters preset in S3, respectively extracting laser line data in the two camera images by using a 3D vision sensor, respectively generating point cloud data, and transmitting the point cloud data to upper computer software in real time;
s6: the method comprises the following steps that visual software of an upper computer receives point cloud data transmitted by a 3D sensor in real time, on one hand, the distance from a glue gun to a gluing position point is calculated according to the point cloud data of a front-end camera along a gluing direction, and the offset in the Z direction is calculated through a pid algorithm according to the calculated height in the Z direction and the preset height in the Z direction;
s7: calculating glue height, glue width and glue volume in real time by upper computer software according to point cloud data of a rear-end camera, wherein the glue height is the maximum height from a reference surface, the glue width is the projection of the glue strip on the reference surface, and the glue volume is calculated by a tangent plane method;
s8: and repeating the step S5 to realize real-time detection.
Preferably, in S1, the 3D vision sensor is provided internally with four cameras and four laser lines. The four cameras are distributed around the circumference at 90-degree intervals. The laser projection mode is normal incidence, i.e. the incident light is perpendicular to the measured surface, while each camera acquires the laser-line image obliquely from the other side. The 3D vision sensor is also provided internally with an embedded platform that can receive the images in real time, generate three-dimensional point cloud data, and transmit it to the upper computer over the network.
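The following is a minimal Python sketch of the laser-triangulation step performed on such an embedded platform: the laser line, projected at normal incidence, shifts sideways in the obliquely mounted camera's image by an amount proportional to the surface height, and the peak of the line in each image row is converted into a height sample. The patent does not specify the extraction or conversion formulas; the centre-of-gravity peak extraction, the small-angle relation h = d * p / (m * sin(alpha)) and all parameter values below are illustrative assumptions, not part of the disclosed method.

import numpy as np

def peak_columns(image: np.ndarray) -> np.ndarray:
    """Sub-pixel column of the laser-line peak for every image row
    (simple centre-of-gravity extraction; the patent does not specify the extractor)."""
    cols = np.arange(image.shape[1], dtype=float)
    weights = image.astype(float)
    denom = weights.sum(axis=1)
    denom[denom == 0] = 1.0                      # avoid division by zero on dark rows
    return (weights * cols).sum(axis=1) / denom

def triangulate_profile(peaks_px, ref_peak_px, pixel_pitch_mm, magnification, tri_angle_deg):
    """Convert the laser-line displacement (pixels from the reference position) into a
    height profile using the common single-camera triangulation approximation
    h = d * p / (m * sin(alpha)); all parameters here are illustrative assumptions."""
    d_mm = (peaks_px - ref_peak_px) * pixel_pitch_mm
    return d_mm / (magnification * np.sin(np.radians(tri_angle_deg)))

if __name__ == "__main__":
    # synthetic 8-bit image: a laser line shifted by 25 px where a bead raises the surface
    img = np.zeros((480, 640), dtype=np.uint8)
    rows = np.arange(480)
    line_cols = 320 + (np.abs(rows - 240) < 60) * 25
    img[rows, line_cols] = 255
    profile = triangulate_profile(peak_columns(img), ref_peak_px=320.0,
                                  pixel_pitch_mm=0.005, magnification=0.3,
                                  tri_angle_deg=30.0)
    print(round(profile[240], 3), "mm height at the bead centre")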
Preferably, in S2, the upper computer is connected to the 3D vision sensor and the robot through a network cable, respectively.
Preferably, in S7, the upper-computer software calculates the glue height, glue width and glue volume in real time from the point cloud data of the rear camera, where the glue height is the maximum height above the reference surface, the glue width is the projection of the glue bead onto the reference surface, and the glue volume is calculated by a cross-section (tangent-plane) method. If the calculated glue height, glue width or glue volume falls outside the preset parameter range, the glue bead quality is considered unqualified and the current glue bead is shown as NG in the upper-computer vision software. At the same time, the upper-computer software splices the point cloud data of the rear camera in real time according to the running speed of the glue gun at the robot end and displays the point cloud data of the NG glue bead in red, so that an operator can easily locate the faulty position of the bead.
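A minimal sketch of how the glue height, glue width and cross-section area of a single bead cross-section could be computed from such point cloud data, with the volume obtained by summing cross-section areas along the path (the cross-section, or tangent-plane, method) and an NG decision taken against the preset ranges. It assumes each cross-section is already resampled perpendicular to the gluing direction with the reference surface at height zero; the function names, the min_rise threshold and the limits layout are illustrative assumptions.

import numpy as np

def section_metrics(section_xz, ref_height=0.0, min_rise=0.1):
    """Glue height, glue width and cross-section area of one bead cross-section.
    section_xz: (N, 2) array of (lateral position, height) samples taken
    perpendicular to the gluing direction, heights relative to ref_height."""
    order = np.argsort(section_xz[:, 0])
    x = section_xz[order, 0]
    z = section_xz[order, 1] - ref_height
    on_bead = z > min_rise                       # samples clearly above the reference surface
    if not on_bead.any():
        return 0.0, 0.0, 0.0                     # no bead in this section (e.g. a glue break)
    height = float(z[on_bead].max())             # glue height: maximum height above the surface
    width = float(x[on_bead].max() - x[on_bead].min())   # glue width: extent of the bead's projection
    z_pos = np.clip(z, 0.0, None)
    area = float(np.sum(0.5 * (z_pos[1:] + z_pos[:-1]) * np.diff(x)))  # trapezoidal section area
    return height, width, area

def check_bead(sections, spacing_mm, limits):
    """Volume by the cross-section (tangent-plane) method and an NG decision.
    limits maps 'height', 'width' and 'volume' to (low, high) tuples."""
    heights, widths, areas = zip(*(section_metrics(s) for s in sections))
    volume = float(np.sum(areas) * spacing_mm)   # section area times step along the path
    ok = (limits['height'][0] <= min(heights) <= max(heights) <= limits['height'][1]
          and limits['width'][0] <= min(widths) <= max(widths) <= limits['width'][1]
          and limits['volume'][0] <= volume <= limits['volume'][1])
    return ok, max(heights), max(widths), volume

A bead with a section of near-zero height (a glue break) or an excessive width (glue leakage) would fall outside these ranges and be flagged NG, which matches the abnormal conditions listed later in this description.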
Preferably, in S1, the 3D vision sensor is installed at the end of the mechanical arm of the industrial robot and, before the system operates, is calibrated together with the robot, the calibration being based on the triangulation principle.
Preferably, in S3, before the system operates, the upper-computer vision software sets, for each glue segment according to the glue segment information of S2, the glue height, glue width and glue volume ranges and the relevant parameters of camera number, exposure, gain and Z-direction threshold.
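A minimal sketch of how the upper-computer vision software could hold these per-segment presets and look them up when the robot reports a glue segment number; the field names, units and example values are illustrative assumptions rather than a data format defined by the patent.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GlueSegmentParams:
    """Parameters kept by the upper-computer vision software for one glue segment (S3)."""
    height_range_mm: Tuple[float, float]      # allowed glue height
    width_range_mm: Tuple[float, float]       # allowed glue width
    volume_range_mm3: Tuple[float, float]     # allowed glue volume
    camera_ids: Tuple[int, int]               # the two cameras used along this segment's direction
    exposure_us: int
    gain_db: float
    z_threshold_mm: float                     # tolerated Z-direction offset before correction

@dataclass
class GluingRecipe:
    segments: Dict[int, GlueSegmentParams] = field(default_factory=dict)

    def for_segment(self, segment_no: int) -> GlueSegmentParams:
        """Look up the presets when the robot reports a segment number (S2 -> S5)."""
        return self.segments[segment_no]

# Illustrative recipe for a path divided into two glue segments.
recipe = GluingRecipe({
    1: GlueSegmentParams((2.0, 4.0), (3.0, 6.0), (400.0, 800.0), (0, 1), 1500, 6.0, 0.5),
    2: GlueSegmentParams((2.0, 4.0), (3.0, 6.0), (350.0, 700.0), (2, 3), 1200, 4.0, 0.5),
})
print(recipe.for_segment(1).camera_ids)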
Preferably, in S6, the upper-computer vision software receives the point cloud data transmitted by the 3D sensor in real time; along the gluing direction, it calculates, on the one hand, the distance from the glue gun to the gluing position point from the point cloud data of the front camera and, on the other hand, the Z-direction offset by a PID algorithm from the calculated Z-direction height and the preset Z-direction height; if the offset exceeds the preset range, the pose information of the glue gun end is modified.
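A minimal sketch of that height-keeping loop: a discrete PID controller turns the difference between the preset and measured Z-direction heights into an offset, and the glue-gun end pose is only modified when the offset leaves the preset range. The gains, sampling interval, dead band and sign convention are illustrative assumptions; the patent only states that a PID algorithm computes the Z-direction offset.

class PID:
    """Discrete PID controller on the Z-height error (preset vs. measured), as used in S6."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def correct_gun_pose(pose_z_mm: float, measured_z_mm: float, preset_z_mm: float,
                     pid: PID, z_threshold_mm: float) -> float:
    """Return the corrected Z of the glue-gun end pose; only intervene when the
    offset computed by the PID exceeds the preset range (S6)."""
    offset = pid.update(preset_z_mm, measured_z_mm)
    if abs(offset) > z_threshold_mm:
        pose_z_mm += offset
    return pose_z_mm

# Illustrative loop: the measured height drifts upward and the controller pulls the gun back.
pid = PID(kp=0.6, ki=0.1, kd=0.01, dt=0.1)
pose_z = 120.0
for measured in (120.0, 120.4, 120.9, 121.6):
    pose_z = correct_gun_pose(pose_z, measured, preset_z_mm=120.0, pid=pid,
                              z_threshold_mm=0.5)
print(round(pose_z, 2))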
Preferably, unqualified gluing quality includes abnormal conditions such as glue breaks, glue leakage and stringing.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the 3D vision sensor is connected with the upper computer through the network cable, so that the connection of a plurality of cameras with the upper computer is avoided, and the wiring complexity is reduced. Whether the adhesive tape is qualified or not is detected in the gluing process, and compared with the prior art that the adhesive tape is coated firstly and then detected, the working cycle is greatly reduced. According to different gluing processes, the method is suitable for a plurality of gluing tracks and can detect gluing sections. Host computer vision software can show in real time whether height, gluey width, colloid volume and current adhesive tape of gluing of current adhesive tape are qualified to carry out splice display with real-time detection's adhesive tape, carry out the red processing of mark to unqualified adhesive tape, make things convenient for operating personnel to know the current rubber coated condition directly perceived, improved enterprise production efficiency.
The invention can effectively measure glue height, glue width and glue volume and efficiently and accurately judge abnormal conditions such as glue breaks, glue leakage and stringing, which improves production efficiency, reduces production cost and increases the competitiveness of the enterprise.
Drawings
Fig. 1 is a flowchart of an automatic gluing and detecting method based on 3D vision according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely; obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them.
Referring to fig. 1, an automatic gluing and detecting method based on 3D vision includes the following steps:
S1: the 3D vision sensor is mounted at the end of the mechanical arm of the industrial robot, and before the system operates, calibration is carried out between the robot and the 3D vision sensor;
S2: before the system operates, the path to be inspected is divided into N segments according to the gluing process and taught to the robot, and a signal point is set at the end position of each glue segment; when the robot program reaches a signal point, the glue segment number, the glue gun end pose, the glue gun pressure value and other data are output to the upper-computer vision software;
S3: before the system operates, the upper-computer vision software sets the relevant parameters for each glue segment according to the glue segment information of S2;
S4: when the system operates, the upper-computer vision software opens the four cameras on the 3D vision sensor and sets the four laser lines to be always on;
S5: according to the glue segment parameters preset in S3, the two cameras along the gluing direction are selected to acquire images; the 3D vision sensor extracts the laser line data from the two camera images, generates point cloud data for each, and transmits the point cloud data to the upper-computer software in real time;
S6: the upper-computer vision software receives the point cloud data transmitted by the 3D sensor in real time; along the gluing direction, the distance from the glue gun to the gluing position point is calculated from the point cloud data of the front camera, and the Z-direction offset is calculated by a PID algorithm from the calculated Z-direction height and the preset Z-direction height;
S7: the upper-computer software calculates the glue height, glue width and glue volume in real time from the point cloud data of the rear camera, where the glue height is the maximum height above the reference surface, the glue width is the projection of the glue bead onto the reference surface, and the glue volume is calculated by a cross-section (tangent-plane) method;
S8: step S5 is repeated to realize real-time detection.
In this embodiment, in S1, the 3D vision sensor is equipped with four cameras and four laser lines. The four cameras are distributed around the circumference at 90-degree intervals. The laser projection mode is normal incidence, that is, the incident light is perpendicular to the measured surface, while each camera acquires the laser-line image obliquely from the other side. The 3D vision sensor is also equipped with an embedded platform that receives the images in real time, generates three-dimensional point cloud data, and transmits it to the upper computer over the network.
In this embodiment, in S2, the upper computer is connected to the 3D vision sensor and the robot through the network cable, respectively.
In this embodiment, in S7, the upper-computer software calculates the glue height, glue width and glue volume in real time from the point cloud data of the rear camera, where the glue height is the maximum height above the reference surface, the glue width is the projection of the glue bead onto the reference surface, and the glue volume is calculated by a cross-section (tangent-plane) method.
In this embodiment, in S1, the 3D vision sensor is installed at the end of the mechanical arm of the industrial robot and, before the system operates, is calibrated together with the robot, the calibration being based on the triangulation principle.
In this embodiment, in S3, before the system operates, the upper-computer vision software sets, for each glue segment according to the glue segment information of S2, the glue height, glue width and glue volume ranges and the relevant parameters of camera number, exposure, gain and Z-direction threshold.
In this embodiment, in S6, the upper-computer vision software receives the point cloud data transmitted by the 3D sensor in real time; along the gluing direction, it calculates, on the one hand, the distance from the glue gun to the gluing position point from the point cloud data of the front camera and, on the other hand, the Z-direction offset by a PID algorithm from the calculated Z-direction height and the preset Z-direction height; if the offset exceeds the preset range, the pose information of the glue gun end is modified.
In this embodiment, unqualified gluing quality includes abnormal conditions such as glue breaks, glue leakage and stringing.
The above description covers only preferred embodiments of the present invention, and the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification made according to the technical solutions and inventive concept of the present invention by a person skilled in the art within the technical scope disclosed herein shall fall within the scope of protection of the present invention.

Claims (8)

1. An automatic gluing and detecting method based on 3D vision, characterized by comprising the following steps:
S1: the 3D vision sensor is mounted at the end of the mechanical arm of the industrial robot, and before the system operates, calibration is carried out between the robot and the 3D vision sensor;
S2: before the system operates, the path to be inspected is divided into N segments according to the gluing process and taught to the robot, and a signal point is set at the end position of each glue segment; when the robot program reaches a signal point, the glue segment number, the glue gun end pose and the glue gun pressure value are output to the upper-computer vision software;
S3: before the system operates, the upper-computer vision software sets the relevant parameters for each glue segment according to the glue segment information of S2;
S4: when the system operates, the upper-computer vision software opens the four cameras on the 3D vision sensor and sets the four laser lines to be always on;
S5: according to the glue segment parameters preset in S3, the two cameras along the gluing direction are selected to acquire images; the 3D vision sensor extracts the laser line data from the two camera images, generates point cloud data for each, and transmits the point cloud data to the upper-computer software in real time;
S6: the upper-computer vision software receives the point cloud data transmitted by the 3D sensor in real time; along the gluing direction, the distance from the glue gun to the gluing position point is calculated from the point cloud data of the front camera, and the Z-direction offset is calculated by a PID algorithm from the calculated Z-direction height and the preset Z-direction height;
S7: the upper-computer software calculates the glue height, glue width and glue volume in real time from the point cloud data of the rear camera, where the glue height is the maximum height above the reference surface, the glue width is the projection of the glue bead onto the reference surface, and the glue volume is calculated by a cross-section (tangent-plane) method;
S8: step S5 is repeated to realize real-time detection.
2. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S1 the 3D vision sensor is provided with four cameras and four laser lines, the four cameras being distributed around the circumference at 90-degree intervals, and the 3D vision sensor is provided with an embedded platform capable of receiving images in real time, generating three-dimensional point cloud data and transmitting it to the upper computer over the network.
3. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S2 the upper computer is connected to the 3D vision sensor and to the robot through network cables, respectively.
4. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S7 the upper-computer software calculates the glue height, glue width and glue volume in real time from the point cloud data of the rear camera, where the glue height is the maximum height above the reference surface, the glue width is the projection of the glue bead onto the reference surface, and the glue volume is calculated by a cross-section (tangent-plane) method; if the calculated glue height, glue width or glue volume falls outside the preset parameter range, the glue bead quality is considered unqualified and the current glue bead is shown as NG in the upper-computer vision software; at the same time, the upper-computer software splices the point cloud data of the rear camera in real time according to the running speed of the glue gun at the robot end and displays the point cloud data of the NG glue bead in red, so that an operator can easily locate the faulty position of the bead.
5. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S1 the 3D vision sensor is installed at the end of the mechanical arm of the industrial robot and, before the system operates, is calibrated together with the robot, the calibration being based on the triangulation principle.
6. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S3, before the system operates, the upper-computer vision software sets, for each glue segment according to the glue segment information of S2, the glue height, glue width and glue volume ranges and the relevant parameters of camera number, exposure, gain and Z-direction threshold.
7. The automatic gluing and detecting method based on 3D vision according to claim 1, characterized in that in S6 the upper-computer vision software receives the point cloud data transmitted by the 3D sensor in real time, calculates, along the gluing direction, the distance from the glue gun to the gluing position point from the point cloud data of the front camera, calculates the Z-direction offset by a PID algorithm from the calculated Z-direction height and the preset Z-direction height, and modifies the pose information of the glue gun end if the offset exceeds the preset range.
8. The automatic gluing and detecting method based on 3D vision, characterized in that unqualified gluing quality includes abnormal conditions of glue breaks, glue leakage and stringing.
Application CN202210825722.5A, priority date 2022-07-14, filing date 2022-07-14: Automatic gluing and detecting method based on 3D vision, published as CN115106260A (pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210825722.5A CN115106260A (en) 2022-07-14 2022-07-14 Automatic gluing and detecting method based on 3D vision

Publications (1)

Publication Number Publication Date
CN115106260A 2022-09-27

Family

ID=83332209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210825722.5A Pending CN115106260A (en) 2022-07-14 2022-07-14 Automatic gluing and detecting method based on 3D vision

Country Status (1)

Country Link
CN (1) CN115106260A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100671024B1 (en) * 2005-09-16 2007-01-19 삼성중공업 주식회사 Method and system for conducting step welding of the welding robot by using laser vision sensor
JP2013240788A (en) * 2013-07-01 2013-12-05 Shibaura Mechatronics Corp Method and device for applying liquid drop
CN207280385U (en) * 2017-11-13 2018-04-27 易思维(天津)科技有限公司 A kind of robot coating three-dimensional information vision inspection apparatus
CN207300177U (en) * 2017-11-13 2018-05-01 易思维(天津)科技有限公司 A kind of three-dimensional gluing detection device in real time
CN209735961U (en) * 2018-11-08 2019-12-06 蔚来汽车有限公司 Monitoring system of gluing equipment
CN111451095A (en) * 2020-04-09 2020-07-28 深圳了然视觉科技有限公司 Real-time gluing quality detection and automatic glue supplementing technology based on vision
CN113941483A (en) * 2021-11-26 2022-01-18 深圳了然视觉科技有限公司 Glue coating quality detection and automatic glue supplementing system based on vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination