CN108326879B - Automatic machining system and machining method of robot based on 3D vision - Google Patents


Info

Publication number
CN108326879B
Authority
CN
China
Prior art keywords
lens
laser
fixing block
module
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810284246.4A
Other languages
Chinese (zh)
Other versions
CN108326879A (en)
Inventor
邹小平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yitai 3d Technology Co ltd
Original Assignee
Shenzhen Yitai 3d Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yitai 3d Technology Co ltd filed Critical Shenzhen Yitai 3d Technology Co ltd
Priority to CN201810284246.4A
Publication of CN108326879A
Application granted
Publication of CN108326879B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/08: Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/021: Optical sensing devices
    • B25J19/023: Optical sensing devices including video camera means
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02: Sensing devices
    • B25J19/04: Viewing devices
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)

Abstract

The present invention discloses a 3D-vision-based robotic automatic machining system and machining method. The 3D vision module performs rapid 3D laser scanning of the object to be machined on the conveyor belt. Based on 1 to 3 CCD sensors, the module can form 2 to 8 different viewing angles and perform 3D measurement of the front and rear ends of the object, including its bottom and side surfaces, acquiring high-precision three-dimensional data in multiple directions in a single pass. From these data the system obtains high-precision surface-shape data and the placement posture of the object, and the control module drives the robotic automatic machining module to perform adaptive trajectory machining; the system therefore reduces installation space while offering good adaptability and intelligence, and improves machining quality. The optical path formed by the optical lenses adopts a folded design: the imaging path is folded from the height and width directions into the horizontal direction, reducing equipment size while preserving the scanning depth of field.

Description

Automatic machining system and machining method of robot based on 3D vision
Technical Field
The present invention belongs to the technical field of machining, and particularly relates to a 3D-vision-based robotic automatic machining system and machining method.
Background
Robots are recognized as versatile, highly flexible, reliable and highly automated machines, and as technology advances they are used ever more widely in machining.
Robots have been used in footwear manufacturing for more than ten years. The domestic shoe industry currently earns its economic returns mainly from low-cost labour, output volume and scale. The root cause is that domestic shoe design lacks innovation and high-level independent development capability: shoemaking productivity lags behind, the level of automation is very low, and many processes, such as glue spraying and grinding of soles and uppers, still depend on manual work. In shoemaking, both sole and upper must be glued and ground; traditionally this is done by hand, and the glue and dust involved are harmful to workers, so the industry has begun to adopt robotic glue-spraying and grinding technology.
Chinese utility-model patent CN204908214U, entitled "Automatic shoemaking glue-spraying system for on-line glue spraying, detection and screening", discloses a glue-spraying system comprising a frame body on which a first, a second and a third conveyor-belt body are arranged in sequence. The first and third belt bodies are fixedly mounted on the left and right sides of the second belt body, and the second belt body is further provided with a lifting device that moves it vertically. The frame at the first belt body carries, in sequence, a robotic glue-spraying device for spraying glue onto soles or uppers and a glue-spraying recognition device for checking the spraying result; the bottom of the second belt body carries a collecting-frame device for recovering rejected products; and the frame at the third belt body carries a drying device for drying and forming the sprayed products. The system further includes a control system that coordinates the three belt bodies, the robotic glue-spraying device, the recognition device, the drying device and the lifting device. However, this glue-spraying device uses 2D visual positioning, which provides positioning data only in the X and Y directions and cannot obtain a complete 3D view of the object.
Disclosure of Invention
In order to solve these problems, the primary object of the present invention is to provide a 3D-vision-based robotic automatic machining system and machining method in which the 3D vision module performs rapid 3D laser scanning of the object to be machined on the conveyor belt. Based on 1 to 3 CCD sensors, the module can form 2 to 8 different viewing angles and simultaneously perform 3D measurement of the front and rear ends of the object, including its bottom and side surfaces, acquiring high-precision three-dimensional data in several directions at once. The system thus obtains high-precision surface-shape data and the placement posture of the object, and the control module drives the robotic automatic machining module to perform adaptive trajectory machining.
Another object of the present invention is to provide a 3D-vision-based robotic automatic machining system and machining method in which the optical path formed by the plurality of optical lenses adopts a folded design: the imaging path is folded from the height and width directions into the horizontal direction, so that the equipment size is reduced while the scanning depth of field is preserved. This effectively lowers equipment cost and improves equipment stability.
In order to achieve the above objects, the technical scheme of the present invention is as follows:
The present invention provides a 3D-vision-based robotic automatic machining system, which comprises:
the conveyor belt, which carries the object to be machined and conveys it to a designated position;
the 3D vision module, which performs rapid 3D laser scanning of the object to be machined on the conveyor belt to acquire high-precision three-dimensional surface-shape data;
the control module, which extracts the required machining trajectory from the three-dimensional surface-shape data, sequences the trajectory plan, and generates control instructions capable of driving the robotic automatic machining module;
the robotic automatic machining module, which machines the object according to the specified control instructions;
the 3D vision module is mounted on the conveyor belt, and both the 3D vision module and the robotic automatic machining module are electrically connected to the control module. The system performs rapid 3D laser scanning of the object on the conveyor belt through the 3D vision module to acquire high-precision three-dimensional surface-shape data, from which the surface shape and placement posture of the object are obtained; the control module then drives the robotic automatic machining module to perform adaptive trajectory machining operations, such as glue spraying and grinding, matched to the object's size and posture. The system therefore has good adaptability and intelligence while reducing installation space, and improves machining quality;
the 3D vision module comprises a housing, a support frame, an optical imaging module, a connecting assembly and a drive motor. A cavity accommodating the support frame, optical imaging module, connecting assembly and drive motor is formed inside the housing; the housing is mounted on the conveyor belt; the optical imaging module is located in the upper part of the cavity and is fixedly connected to the connecting assembly; the connecting assembly is movably connected to the support frame; and the drive motor drives the optical imaging module through the connecting assembly. The optical imaging module performs the rapid 3D laser scanning that yields the high-precision surface-shape data and placement posture of the object, improving machining quality.
Further, the connecting assembly comprises a guide rod and a pulley. The guide rod is drivingly connected to the pulley, the pulley is fixed on the drive motor, and the optical imaging module is fixedly connected to the guide rod. The drive motor drives the guide rod through the pulley so that the optical imaging module scans back and forth, producing a series of contour lines that together form the complete surface-shape information of the object. This structure gives the system high measurement precision and non-contact measurement.
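The back-and-forth scan described above produces one contour line (profile) per motor step, and stacking these profiles along the scan axis yields the complete surface-shape point cloud. The sketch below illustrates only that assembly step; the function name, units and data layout are illustrative assumptions, not details taken from the patent.

```python
def stack_profiles(profiles, step_mm):
    """Combine successive laser-line profiles into a 3D point cloud.

    Each profile is a list of (x, z) points measured along the laser
    line; the scan axis y advances by step_mm between profiles.
    Illustrative sketch only; names and units are assumptions.
    """
    cloud = []
    for i, profile in enumerate(profiles):
        y = i * step_mm  # motor position for this contour line
        for x, z in profile:
            cloud.append((x, y, z))
    return cloud

# two contour lines, 0.5 mm apart along the scan direction
profiles = [[(0.0, 10.0), (1.0, 10.2)],
            [(0.0, 10.1), (1.0, 10.3)]]
cloud = stack_profiles(profiles, 0.5)
```

Scanning more profiles at a finer motor step simply densifies the cloud; the geometry of each profile is unchanged.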
Further, the optical imaging module may be configured with four viewing angles. It comprises a left line laser, a right line laser, a CCD sensor, a plurality of optical lenses and a fixing assembly, all fixedly connected through the fixing assembly. The left line laser projects laser light onto the object, which is reflected by the optical lenses to form a left-front and a left-rear viewing-angle optical path; the right line laser likewise forms a right-front and a right-rear viewing-angle optical path. With this arrangement the machining system forms 4 different viewing angles from a single CCD sensor, simultaneously performs 3D measurement of the front and rear ends of the object, including its bottom and side surfaces, and acquires high-precision three-dimensional data in 4 directions at once. The optical paths formed by the lenses are folded from the height and width directions into the horizontal direction, reducing equipment size while preserving the scanning depth of field, which effectively lowers equipment cost and improves stability;
the fixing assembly comprises a left laser fixing plate, a right laser fixing plate, lens fixing blocks and a CCD fixing block. The left line laser is fixed on the left laser fixing plate, which is fixed on a lens fixing block; the right line laser is fixed on the right laser fixing plate, which is fixed on a lens fixing block; and the CCD sensor is fixed on the CCD fixing block. The lens fixing blocks are provided with clamping grooves matched to the optical lenses, which are clamped and held in the grooves. This arrangement makes the 3D imaging module more stable and convenient to use.
Further, the plurality of optical lenses comprises a first, a second, a third, a fourth, a fifth and a sixth lens, and the lens fixing blocks comprise a first, a second, a third and a fourth fixing block. The fourth fixing block is fixedly connected to the third fixing block, and the first and second fixing blocks are arranged opposite each other;
the left laser fixing plate is fixed on the first fixing block and the right laser fixing plate on the second fixing block. One end of each of the first, second and third lenses is fixedly connected to the first fixing block and the other end to the second fixing block, so that these three lenses lie between the two blocks, arranged in sequence. One end of the fourth lens is fixed to the first fixing block and the other to the third fixing block; one end of the fifth lens is fixed to the third fixing block and the other to the second fixing block; and the sixth lens is fixed on the fourth fixing block. This layout realises the folded optical design: the imaging path is folded from the height and width directions into the horizontal direction, reducing equipment size while preserving the scanning depth of field, effectively lowering equipment cost and improving stability.
Further, the left line laser projects laser light onto the object to form the left-front viewing-angle optical path: the first laser line is reflected by the first lens to the fourth lens, then by the fourth lens to the sixth lens, and finally by the sixth lens into the CCD sensor for imaging.
Further, the left line laser also forms the left-rear viewing-angle optical path: the second laser line is reflected by the third lens to the second lens, by the second lens to the fourth lens, by the fourth lens to the sixth lens, and finally by the sixth lens into the CCD sensor for imaging.
Further, the right line laser forms the right-front viewing-angle optical path: the third laser line is reflected by the first lens to the fifth lens and then by the fifth lens into the CCD sensor for imaging.
Further, the right line laser also forms the right-rear viewing-angle optical path: the fourth laser line is reflected by the third lens to the second lens, by the second lens to the fifth lens, and finally by the fifth lens into the CCD sensor for imaging.
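All four optical paths rely on the same laser-triangulation principle: a rise in the surface shifts the imaged laser line across the CCD, and that shift is converted to height through the triangulation angle. A minimal sketch of the conversion under a simplified pinhole-camera assumption follows; the parameter names and values are illustrative, not figures from the patent.

```python
import math

def height_from_offset(pixel_offset, pixel_pitch_mm, magnification, angle_deg):
    """Estimate surface height from the laser line's shift on the CCD.

    Dividing the image-plane shift (pixels * pixel pitch) by the optical
    magnification gives the shift in object space; the triangulation
    angle between laser sheet and viewing direction converts that shift
    to height. Simplified model with illustrative parameters.
    """
    object_shift = pixel_offset * pixel_pitch_mm / magnification
    return object_shift / math.tan(math.radians(angle_deg))
```

For example, a 10-pixel shift with a 5 µm pixel pitch, 0.5x magnification and a 45° triangulation angle corresponds to a height of about 0.1 mm under this model.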
Further, the 3D vision module also includes a carrier glass fixed on the upper part of the housing. When the 3D vision module is installed upright, the object to be measured can be placed on the carrier glass; when the module is installed upside down, the carrier glass seals and protects the optical path system.
Further, the control module may be a computer or other control terminal.
Further, control software is installed in the control module. The software analyses the three-dimensional surface-shape data obtained by the 3D vision module, extracts the required machining trajectory, sequences the trajectory plan, and generates control instructions capable of driving the robot, so that the robotic automatic machining module machines the object according to the specified instructions. Existing control-software technology can realise these functions.
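The sequencing of the trajectory plan mentioned above must turn an unordered set of extracted track points into a path the robot can follow in order. One simple way to sketch that step is a greedy nearest-neighbour ordering; the function and the 2D points below are illustrative assumptions, not the software the patent actually uses.

```python
def order_track(points):
    """Order unorganised track points into a sequential robot path.

    Greedy nearest-neighbour: start from the first point and repeatedly
    append the closest remaining point. Illustrative sketch only; a
    production planner would also smooth and time-parameterise the path.
    """
    remaining = list(points)
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining,
                  key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        remaining.remove(nxt)
        path.append(nxt)
    return path

# edge points extracted in arbitrary order
pts = [(0, 0), (2, 0), (1, 0), (3, 0)]
```

Greedy ordering is not optimal for arbitrary point sets, but for the near-continuous edge contours produced by line scanning it gives a usable sequence cheaply.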
Further, the object to be machined may be a shoe or another suitable workpiece, and the machining may be glue spraying, grinding or another suitable process.
In order to achieve the above objects, the present invention further provides a machining method for the 3D-vision-based robotic automatic machining system, comprising the following steps:
step 1, placing the object to be machined on the conveyor belt and conveying it to a designated position;
step 2, performing rapid 3D laser scanning of the object on the conveyor belt through the 3D vision module to acquire high-precision three-dimensional surface-shape data;
step 3, the control module extracts the required machining trajectory from the high-precision three-dimensional surface-shape data obtained in step 2, sequences the trajectory plan, generates control instructions capable of driving the robotic automatic machining module, and sends them to the robotic automatic machining module;
step 4, the robotic automatic machining module receives the control instructions sent by the control module in step 3 and machines the object accordingly.
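The four steps can be read as a single control loop: convey, scan, plan, execute. The sketch below wires hypothetical interface objects together in that order; every class and method name here is an illustrative assumption, not an API defined by the patent.

```python
def process_object(conveyor, vision, controller, robot):
    """Run steps 1-4 of the method once for one object."""
    conveyor.move_to_station()           # step 1: convey to position
    surface = vision.scan()              # step 2: 3D laser scan
    commands = controller.plan(surface)  # step 3: extract and sequence track
    robot.execute(commands)              # step 4: adaptive machining
    return commands

class _Demo:
    """Minimal stand-ins so the sketch runs end to end (illustrative)."""
    def move_to_station(self): pass
    def scan(self): return [(0.0, 0.0, 10.0)]           # one surface point
    def plan(self, surface): return [("spray", p) for p in surface]
    def execute(self, commands): self.done = commands

demo = _Demo()
result = process_object(demo, demo, demo, demo)
```

In a real cell the four roles would be separate devices coordinated by the control module; a single stub object is used here only to keep the sketch self-contained.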
Further, in step 2, the 3D vision module comprises a housing, a support frame, an optical imaging module, a connecting assembly and a drive motor. A cavity accommodating the support frame, optical imaging module, connecting assembly and drive motor is formed inside the housing; the housing is mounted on the conveyor belt; the optical imaging module is located in the upper part of the cavity and is fixedly connected to the connecting assembly; the connecting assembly is movably connected to the support frame; and the drive motor drives the optical imaging module through the connecting assembly.
In the present invention, the machining system can form several different viewing angles inside the optical module and simultaneously perform 3D measurement of the front and rear ends of the object, including its bottom and side surfaces, acquiring high-precision three-dimensional data in several directions at once. The optical paths formed by the plurality of optical lenses are folded from the height and width directions into the horizontal direction, so the equipment size is reduced while the scanning depth of field is preserved, which effectively lowers equipment cost and improves stability.
Further, the optical imaging module may instead be configured with two viewing angles. It comprises a housing, a line laser, a CCD sensor, a fixing bracket, a laser reflecting mirror and an imaging reflecting mirror, all located inside the housing, with both mirrors fixed on the bracket. The line laser and the CCD sensor are arranged vertically side by side at the left end inside the housing, and the laser reflecting mirror and imaging reflecting mirror are distributed at the right end;
the line laser serves as the structured light source: projected onto the surface of the object, it forms a bright laser light sheet ("light knife"). The laser reflecting mirror changes the direction of the incident laser path, the imaging reflecting mirror then forms reflected paths for the front and rear viewing angles, and the laser finally enters the CCD sensor for imaging.
In the present invention, the optical imaging module may be a standalone four-view arrangement;
it may be a standalone two-view arrangement;
it may be a two-view arrangement plus a four-view arrangement, forming six viewing angles;
or it may be two two-view arrangements plus a four-view arrangement, forming eight viewing angles.
The beneficial effects of the present invention are as follows. Compared with the prior art, the system performs rapid 3D laser scanning of the object on the conveyor belt through the 3D vision module: 2 to 8 different viewing angles can be formed from 1 to 3 CCD sensors, 3D measurement can be performed simultaneously on the front and rear ends of the object, including its bottom and side surfaces, and high-precision three-dimensional data in several directions are acquired at once, yielding the object's surface-shape data and placement posture. The control module drives the robotic automatic machining module to perform adaptive trajectory machining, so the system has good adaptability and intelligence while reducing installation space, and improves machining quality. The optical path formed by the plurality of optical lenses adopts a folded design: the imaging path is folded from the height and width directions into the horizontal direction, reducing equipment size while preserving the scanning depth of field. The invention effectively lowers equipment cost and improves equipment stability.
Drawings
Fig. 1 is a schematic view of an embodiment of the 3D-vision-based robotic automatic machining system of the present invention.
Fig. 2 is a schematic diagram of an embodiment of the 3D vision module of the present invention.
Fig. 3 is a schematic diagram of a four-view embodiment of the optical imaging module of the present invention.
Fig. 4 is the left-front viewing-angle optical path diagram of the four-view optical imaging module of the present invention.
Fig. 5 is the left-rear viewing-angle optical path diagram of the four-view optical imaging module of the present invention.
Fig. 6 is the right-front viewing-angle optical path diagram of the four-view optical imaging module of the present invention.
Fig. 7 is the right-rear viewing-angle optical path diagram of the four-view optical imaging module of the present invention.
Fig. 8 is a schematic diagram of an embodiment of the four-direction reflection optical path of the four-view optical imaging module of the present invention.
Fig. 9 is a schematic diagram of a two-view embodiment of the optical imaging module of the present invention.
Fig. 10 is a schematic diagram of an embodiment of the two-direction reflection optical path of the two-view optical imaging module of the present invention.
Fig. 11 is a step diagram of the machining method of the 3D-vision-based robotic automatic machining system of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are for illustration only and are not intended to limit the scope of the invention.
Referring to figs. 1-11, the present invention provides a 3D-vision-based robotic automatic machining system, the system comprising:
a conveyor belt 1 for carrying the object 2 to be machined and conveying it to a designated position;
a 3D vision module 3 that performs rapid 3D laser scanning of the object 2 on the conveyor belt 1 to acquire high-precision three-dimensional surface-shape data;
a control module 4 that extracts the required machining trajectory from the three-dimensional surface-shape data, sequences the trajectory plan, and generates control instructions capable of driving the robotic automatic machining module 5;
a robotic automatic machining module 5 that machines the object 2 according to the specified control instructions;
the 3D vision module 3 is mounted on the conveyor belt 1, and both the 3D vision module 3 and the robotic automatic machining module 5 are electrically connected to the control module 4. The system performs rapid 3D laser scanning of the object 2 through the 3D vision module 3 to acquire high-precision three-dimensional surface-shape data, from which the surface shape and placement posture of the object 2 are obtained; the control module 4 then drives the robotic automatic machining module 5 to perform adaptive trajectory machining operations, such as glue spraying and grinding, matched to the object's size and posture. The system thus has good adaptability and intelligence while reducing installation space, and improves machining quality;
the 3D vision module 3 comprises a housing 31, a support frame 32, an optical imaging module 33, a connecting assembly 34 and a drive motor 35. A cavity accommodating the support frame 32, optical imaging module 33, connecting assembly 34 and drive motor 35 is formed inside the housing 31; the housing 31 is mounted on the conveyor belt 1; the optical imaging module 33 is located in the upper part of the cavity and is fixedly connected to the connecting assembly 34; the connecting assembly 34 is movably connected to the support frame 32; and the drive motor 35 drives the optical imaging module 33 through the connecting assembly 34. The optical imaging module 33 performs the rapid 3D laser scanning that yields the high-precision surface-shape data and placement posture of the object 2, improving machining quality.
In this embodiment, the connecting assembly 34 comprises a guide rod 341 and a pulley 342. The guide rod 341 is drivingly connected to the pulley 342, the pulley 342 is fixed on the drive motor 35, and the optical imaging module 33 is fixedly connected to the guide rod 341. The drive motor 35 drives the guide rod 341 through the pulley 342 so that the optical imaging module 33 scans back and forth, producing a series of contour lines that together form the complete surface-shape information of the object 2. This structure gives the system high measurement precision and non-contact measurement.
In this embodiment, the optical imaging module 33 may be set with four viewing angles, where the optical imaging module 33 includes a left-side line laser 331, a right-side line laser 332, a CCD sensor 333, a plurality of optical lenses, and a fixing component, and the left-side line laser 331, the right-side line laser 332, the CCD sensor 333, and the plurality of optical lenses are all fixedly connected through the fixing component; the left side line laser 331 emits laser light onto the object 2 to be processed, and the laser light is reflected by a plurality of optical lenses to form a left front view direction optical path 3311 and a left rear view direction optical path 3312, and the right side line laser 332 emits laser light onto the object 2 to be processed, and is reflected by a plurality of optical lenses to form a right front view direction optical path 3321 and a right rear view direction optical path 3322. In the utility model, the arrangement enables the processing system to form 4 different visual angles based on 1 CCD sensor 333 in the optical module, and can simultaneously perform 3D measurement on the front and rear ends including the bottom surface and the side surface of the object 2 to be processed, and obtain high-precision three-dimensional data in 4 directions at one time; the optical paths formed by the plurality of optical lenses are folded to be horizontal from the height direction and the width direction, so that the size of the device is reduced under the condition of ensuring the scanning depth of field. The utility model effectively reduces the equipment cost and improves the stability of the equipment;
the fixing component comprises a left laser fixing plate (not shown), a right laser fixing plate 3351, a lens fixing block, and a CCD fixing block 3352. The left-side line laser 331 is fixed on the left laser fixing plate, which is in turn fixed on the lens fixing block; the right-side line laser 332 is fixed on the right laser fixing plate 3351, which is in turn fixed on the lens fixing block; and the CCD sensor 333 is fixed on the CCD fixing block 3352. The lens fixing block is provided with clamping grooves 3353 matched with the optical lenses, and the optical lenses are clamped and fixed in the clamping grooves 3353. In the present invention, this structural arrangement makes the 3D imaging module of the processing system more stable and easier to use.
In the present embodiment, the plurality of optical lenses includes a first lens 3341, a second lens 3342, a third lens 3343, a fourth lens 3344, a fifth lens 3345, and a sixth lens 3346; the lens fixing blocks include a first fixing block 3354, a second fixing block 3355, a third fixing block 3356, and a fourth fixing block 3357. The fourth fixing block 3357 is fixedly connected with the third fixing block 3356, and the first fixing block 3354 is disposed opposite the second fixing block 3355;
the left laser fixing plate (not shown) is fixed on the first fixing block 3354, the right laser fixing plate 3351 is fixed on the second fixing block 3355, one end of the first lens 3341, the first lens 3342 and one end of the third lens 3343 are fixedly connected with the first fixing block 3354, the other ends of the first lens 3341, the first lens 3342 and the third lens 3343 are fixedly connected with the second fixing block 3355, the first lens 3341, the first lens 3342 and the third lens 3343 are positioned between the first fixing block 3354 and the second fixing block 3355, the first lens 3341, the first lens 3342 and the third lens 3343 are sequentially arranged, one end of the fourth lens 3344 is fixedly connected with the first fixing block 3354, the other end of the fourth lens 3344 is fixedly connected with the third fixing block 3356, one end of the fifth lens 3345 is fixedly connected with the third fixing block 3356, the other end of the fifth lens 3345 is fixedly connected with the second fixing block 3355, and the sixth lens 3346 is fixedly connected with the fourth fixing block 3357. According to the utility model, the 3D imaging module can better realize the formed light path folding design, the imaging light path is folded to the horizontal direction from the height direction and the width direction, the equipment size is reduced under the condition of ensuring the scanning depth of field, the equipment cost is effectively reduced, and meanwhile, the stability of the equipment is improved.
In the present embodiment, the left-side line laser 331 projects laser light onto the object 2 to be processed, forming the left front view direction light path 3311: the laser light is reflected to the fourth lens 3344 via the first lens 3341, reflected to the sixth lens 3346 via the fourth lens 3344, and finally reflected to the CCD sensor 333 for imaging via the sixth lens 3346.
In the present embodiment, the left-side line laser 331 projects laser light onto the object 2 to be processed, forming the left rear view direction light path 3312: the laser light is reflected to the second lens 3342 via the third lens 3343, reflected to the fourth lens 3344 via the second lens 3342, reflected to the sixth lens 3346 via the fourth lens 3344, and finally reflected to the CCD sensor 333 for imaging via the sixth lens 3346.
In the present embodiment, the right-side line laser 332 projects laser light onto the object 2 to be processed, forming the right front view direction optical path 3321: the laser light is reflected to the fifth lens 3345 via the first lens 3341, and then reflected to the CCD sensor 333 for imaging via the fifth lens 3345.
In the present embodiment, the right-side line laser 332 projects laser light onto the object 2 to be processed, forming the right rear view direction light path 3322: the laser light is reflected to the second lens 3342 via the third lens 3343, reflected to the fifth lens 3345 via the second lens 3342, and then reflected to the CCD sensor 333 for imaging via the fifth lens 3345.
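The four folded light paths described above can be tabulated as ordered mirror sequences, which makes it clear that all four viewing angles converge on the single CCD sensor 333. This tabulation is illustrative only; the key names are made up for readability, while the lens numbers follow the reference numerals in the text.

```python
# Each path: laser -> object -> listed lenses (in reflection order) -> CCD 333
LIGHT_PATHS = {
    "left_front_3311":  ["first_3341", "fourth_3344", "sixth_3346"],
    "left_rear_3312":   ["third_3343", "second_3342", "fourth_3344", "sixth_3346"],
    "right_front_3321": ["first_3341", "fifth_3345"],
    "right_rear_3322":  ["third_3343", "second_3342", "fifth_3345"],
}

for name, mirrors in LIGHT_PATHS.items():
    print(name, "->", " -> ".join(mirrors), "-> CCD 333")
```

Note that the two left-side paths terminate at the sixth lens 3346 and the two right-side paths at the fifth lens 3345, so one sensor can image all four views.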
In this embodiment, the 3D vision module 3 further includes a carrier glass 36, which is fixed on the upper portion of the housing 31. In the present invention, when the 3D vision module 3 is installed upright, the object 2 to be processed can be placed on the carrier glass 36; when the 3D vision module 3 is mounted upside down, the carrier glass 36 protects the optical path system, forming a sealed structure.
In this embodiment, the control module 4 may be a computer or other control terminal.
In this embodiment, relevant software is provided in the control module 4. Through this software, the control module 4 analyzes the three-dimensional surface shape data obtained by the 3D vision module 3, intelligently extracts the required processing tracks, orders the track plan, and generates control instructions capable of driving the robot, thereby controlling the robot automatic processing module 5 to process the object 2 to be processed according to the designated control instructions. The relevant software is existing control software technology capable of realizing these functions.
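The patent does not disclose the track-extraction algorithm, but one minimal sketch of "extracting a processing track" from scanned surface data, for example a glue-spraying track along a sole edge, is to take the outermost measured point of each profile line. The heuristic and all names here are assumptions for illustration, not the patent's method.

```python
import numpy as np

def edge_track(cloud, side="left"):
    """Return one track point per scan line: the extreme-x point of that line.

    cloud : iterable of (x, y, z) points, y being the scan-axis coordinate
    side  : "left" keeps the minimum-x point per line, "right" the maximum-x
    """
    cloud = np.asarray(cloud, dtype=float)
    track = []
    for y in np.unique(cloud[:, 1]):              # one group per scan line
        line = cloud[cloud[:, 1] == y]
        idx = line[:, 0].argmin() if side == "left" else line[:, 0].argmax()
        track.append(line[idx])
    return np.array(track)

# Toy cloud: two scan lines at y = 0 and y = 1
cloud = [(0, 0, 1), (1, 0, 1), (2, 0, 1),
         (0, 1, 1), (1, 1, 1)]
track = edge_track(cloud, side="right")
print(track.tolist())  # [[2.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
```

The ordered track points would then be converted into robot motion instructions by the trajectory-planning stage.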
In this embodiment, the object 2 to be processed may be a shoe or another suitable object, and the processing may be glue spraying, polishing, or another suitable processing technology.
In order to achieve the above object, the present invention further provides a processing method for the 3D vision-based robot automatic processing system, comprising the following steps:
step 1, placing the object 2 to be processed on the conveyor belt 1, which conveys it to a designated position;
step 2, carrying out rapid 3D laser scanning on an object 2 to be processed on the conveyor belt 1 through a 3D vision module 3 to obtain high-precision three-dimensional surface shape data;
step 3, the control module 4 intelligently extracts the required processing tracks from the high-precision three-dimensional surface shape data obtained in step 2, orders the track plan, generates control instructions capable of driving the robot automatic processing module 5, and sends them to the robot automatic processing module 5;
and 4, the robot automatic processing module 5 receives the control instruction sent by the control module 4 in the step 3, and then processes the object 2 to be processed according to the designated control instruction.
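The four method steps above can be sketched as a simple pipeline. The component interfaces (`scan`, `extract_track`, `plan`, `execute`) are hypothetical stand-ins for the conveyor-fed 3D vision module 3, the control module 4, and the robot automatic processing module 5; the patent does not define such an API.

```python
def process_object(scan, extract_track, plan, execute):
    """Run one object through the four-step method."""
    surface = scan()                  # step 2: 3D laser scan of the conveyed object
    track = extract_track(surface)    # step 3a: extract the required processing track
    commands = plan(track)            # step 3b: order the track plan, build instructions
    return execute(commands)          # step 4: robot processes per the instructions

# Toy run with stub components standing in for the real hardware modules
result = process_object(
    scan=lambda: [(0, 0, 1), (1, 0, 1)],
    extract_track=lambda surface: surface,
    plan=lambda track: [("move_to", p) for p in track],
    execute=lambda commands: len(commands),
)
print(result)  # 2
```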
In this embodiment, in step 2, the 3D vision module 3 includes a housing 31, a support frame 32, an optical imaging module 33, a connection assembly 34, and a driving motor 35, a cavity for accommodating the support frame 32, the optical imaging module 33, the connection assembly 34, and the driving motor 35 is formed inside the housing 31, the housing 31 is mounted on the conveyor belt, the optical imaging module 33 is located at an upper portion of the cavity inside the housing 31, the optical imaging module 33 is fixedly connected with the connection assembly 34, the connection assembly 34 is movably connected with the support frame 32, and the driving motor 35 is in driving connection with the optical imaging module 33 through the connection assembly 34.
In the present invention, the processing system can form a plurality of different viewing angles inside the optical module, perform 3D measurement simultaneously on the front and rear ends of the object to be processed, including its bottom and side surfaces, and acquire high-precision three-dimensional data in several directions at one time. The optical paths formed by the plurality of optical lenses are folded from the height and width directions into the horizontal direction, reducing the device size while preserving the scanning depth of field. The present invention effectively reduces equipment cost and improves equipment stability.
In this embodiment, the optical imaging module 33 may also be configured with two viewing angles. In this configuration it includes a housing 101, a line laser 102, a second CCD sensor 103, a fixing bracket 104, a plurality of laser reflection lenses 105, and a plurality of imaging reflection lenses 106. The line laser 102, the second CCD sensor 103, the fixing bracket 104, the laser reflection lenses 105, and the imaging reflection lenses 106 are all located inside the housing 101, and the laser reflection lenses 105 and the imaging reflection lenses 106 are all fixed on the fixing bracket 104. The line laser 102 and the second CCD sensor 103 are arranged vertically side by side at the left end inside the housing 101, and the laser reflection lenses 105 and the imaging reflection lenses 106 are distributed at the right end inside the housing 101;
the line laser 102 serves as a structured light source: projected onto the surface of the object to be measured, it forms a bright laser line (a "light knife"). The laser reflection lenses 105 change the direction of the incident laser path, the imaging reflection lenses 106 then form reflected light paths with front and rear viewing angles, and finally the laser light is transmitted into the second CCD sensor 103 for imaging.
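A line laser viewed by a CCD at an angle, as above, measures height by laser triangulation. The patent gives no formulas; a common simplified relation, assumed here purely for illustration, is that with the camera axis at angle theta to the laser sheet and optical magnification M, a surface height change dz shifts the imaged laser line by s = M * dz * tan(theta), so dz = s / (M * tan(theta)).

```python
import math

def height_from_shift(shift_mm_on_sensor, magnification, theta_deg):
    """Recover a surface height change from the laser-line shift on the sensor.

    shift_mm_on_sensor : lateral displacement of the imaged laser line
    magnification      : optical magnification of the imaging path
    theta_deg          : angle between the camera axis and the laser sheet
    """
    return shift_mm_on_sensor / (magnification * math.tan(math.radians(theta_deg)))

# Example: a 0.2 mm line shift on the sensor, 0.5x magnification, 30 degree angle
dz = height_from_shift(0.2, 0.5, 30.0)
print(round(dz, 3))  # 0.693 (mm)
```

The folded mirror paths in this patent change where the light travels, not this underlying triangulation relation; each viewing angle yields its own shift-to-height mapping fixed by calibration.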
In the present invention, the optical imaging module 33 may be a single four-view arrangement; a single two-view arrangement; a two-view arrangement plus a four-view arrangement, forming a six-view arrangement; or two two-view arrangements plus a four-view arrangement, forming an eight-view arrangement.
The beneficial effects of the present invention are as follows. Compared with the prior art, the system performs rapid 3D laser scanning of the object 2 to be processed on the conveyor belt 1 through the 3D vision module 3. Based on 1 to 3 CCD sensors in the 3D vision module 3, 2 to 8 different viewing angles can be formed, so that 3D measurement can be performed simultaneously on the front and rear ends of the object 2 to be processed, including its bottom and side surfaces, and high-precision three-dimensional data in several directions can be obtained at one time. The system thereby obtains both high-precision surface shape data and the placement posture of the object 2 to be processed, and the control module 4 controls the robot automatic processing module 5 to perform adaptive track processing; the system therefore offers better adaptability and intelligence while reducing installation space and improving processing quality. The optical path formed by the plurality of optical lenses adopts a folded design: the imaging optical path is folded from the height and width directions into the horizontal direction, reducing the equipment size while preserving the scanning depth of field. The present invention effectively reduces equipment cost and improves equipment stability.
The foregoing description of the preferred embodiments of the present invention is not intended to be limiting; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (6)

1. A 3D vision-based robot automatic processing system, characterized in that the system comprises:
the conveying belt is used for bearing and conveying the object to be processed to a designated position;
the 3D vision module is used for carrying out rapid 3D laser scanning on the object to be processed on the conveyor belt to obtain high-precision three-dimensional surface shape data;
the control module intelligently extracts required processing tracks according to the three-dimensional surface shape data, and further orders track planning to generate control instructions capable of driving the robot automatic processing module to move;
the robot automatic processing module processes an object to be processed according to a specified control instruction;
the 3D vision module is arranged on the conveyor belt, the 3D vision module is electrically connected with the control module, and the robot automatic processing module is electrically connected with the control module;
the 3D vision module comprises a shell, a support frame, an optical imaging module, a connecting assembly and a driving motor, wherein a cavity for accommodating the support frame, the optical imaging module, the connecting assembly and the driving motor is formed in the shell, the shell is arranged on the conveyor belt, the optical imaging module is positioned at the upper part of the cavity in the shell, the optical imaging module is fixedly connected with the connecting assembly, the connecting assembly is movably connected with the support frame, and the driving motor is in driving connection with the optical imaging module through the connecting assembly;
the connecting assembly comprises a guide rod and a pulley, the guide rod is in driving connection with the pulley, the pulley is fixed on the driving motor, the optical imaging module is fixedly connected with the guide rod, and the driving motor drives the guide rod through the pulley so as to drive the optical imaging module to scan back and forth, so that a plurality of contour lines are formed, and complete surface shape information of the surface of an object is formed;
the optical imaging module comprises a left-side line laser, a right-side line laser, a CCD sensor, a plurality of optical lenses and a fixing component, wherein the left-side line laser, the right-side line laser, the CCD sensor and the plurality of optical lenses are fixedly connected through the fixing component; the left side line laser emits laser to the object to form a left front view angle direction light path and a left back view angle direction light path through the reflection of a plurality of optical lenses, and the right side line laser emits laser to the object to form a right front view angle direction light path and a right back view angle direction light path through the reflection of a plurality of optical lenses;
the fixing assembly comprises a left laser fixing plate, a right laser fixing plate, a lens fixing block and a CCD fixing block, wherein the left side line laser is fixed on the left laser fixing plate, the left laser fixing plate is fixed on the lens fixing block, the right side line laser is fixed on the right laser fixing plate, the right laser fixing plate is fixed on the lens fixing block, the CCD sensor is fixed on the CCD fixing block, the lens fixing block is provided with a clamping groove matched with the optical lenses, and the optical lenses are clamped and fixed with the clamping groove;
the plurality of optical lenses comprise a first lens, a second lens, a third lens, a fourth lens, a fifth lens and a sixth lens, the lens fixing blocks comprise a first fixing block, a second fixing block, a third fixing block and a fourth fixing block, the fourth fixing block is fixedly connected with the third fixing block, and the first fixing block and the second fixing block are oppositely arranged;
the left laser fixing plate is fixed on the first fixing block, the right laser fixing plate is fixed on the second fixing block, one ends of the first lens, the second lens and the third lens are fixedly connected with the first fixing block, the other ends of the first lens, the second lens and the third lens are fixedly connected with the second fixing block, the first lens, the second lens and the third lens are positioned between the first fixing block and the second fixing block, the first lens, the second lens and the third lens are sequentially arranged, one end of the fourth lens is fixedly connected with the first fixing block, the other end of the fourth lens is fixedly connected with the third fixing block, one end of the fifth lens is fixedly connected with the third fixing block, the other end of the fifth lens is fixedly connected with the second fixing block, and the sixth lens is fixedly arranged on the fourth fixing block.
2. The automated processing system of the 3D vision-based robot of claim 1, wherein the left side line laser projects laser light onto an object to be measured to form a first laser line, the first laser line is reflected to the fourth lens via the first lens, is reflected to the sixth lens via the fourth lens, and is reflected to the CCD sensor via the sixth lens to form a left front view direction optical path.
3. The automated processing system of the 3D vision-based robot of claim 1, wherein the left side line laser projects laser light onto an object to be measured to form a second laser line, the second laser line is reflected to the second lens via the third lens, is reflected to the fourth lens via the second lens, is reflected to the sixth lens via the fourth lens, and enters the CCD sensor for imaging via the sixth lens to form a left rear view direction optical path.
4. The automated processing system of a 3D vision-based robot of claim 1, wherein the right side line laser projects laser light onto an object to be measured to form a third laser line, the third laser line is reflected to the fifth lens via the first lens and then reflected to the CCD sensor via the fifth lens to be imaged, forming a right front view direction optical path.
5. The automated processing system of the 3D vision-based robot of claim 1, wherein the right side line laser projects laser light onto an object to be measured to form a fourth laser line, the fourth laser line is reflected to the second lens via the third lens, is reflected to the fifth lens via the second lens, is reflected to the CCD sensor via the fifth lens, and is imaged to form a right back view direction optical path.
6. A processing method of the 3D vision-based robot automatic processing system according to any one of claims 1 to 5, characterized by comprising the steps of:
step 1, placing an object to be processed on a conveyor belt, and conveying the object to be processed to a designated position through the conveyor belt;
step 2, carrying out rapid 3D laser scanning on an object to be processed on the conveyor belt through a 3D vision module to obtain high-precision three-dimensional surface shape data;
step 3, the control module intelligently extracts required processing tracks according to the high-precision three-dimensional surface shape data obtained in the step 2, further sorts track planning, generates a control instruction capable of driving the robot automatic processing module to move and then sends the control instruction to the robot automatic processing module;
and 4, the robot automatic processing module receives the control instruction sent by the control module in the step 3, and then processes the object to be processed according to the designated control instruction.
CN201810284246.4A 2018-04-02 2018-04-02 Automatic machining system and machining method of robot based on 3D vision Active CN108326879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810284246.4A CN108326879B (en) 2018-04-02 2018-04-02 Automatic machining system and machining method of robot based on 3D vision

Publications (2)

Publication Number Publication Date
CN108326879A CN108326879A (en) 2018-07-27
CN108326879B true CN108326879B (en) 2024-02-06

Family

ID=62931785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810284246.4A Active CN108326879B (en) 2018-04-02 2018-04-02 Automatic machining system and machining method of robot based on 3D vision

Country Status (1)

Country Link
CN (1) CN108326879B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109380815A (en) * 2018-12-18 2019-02-26 友上智能科技(苏州)有限公司 The on-line automatic flush coater of sport footwear and spraying process
CN111109766B (en) * 2019-12-16 2021-10-22 广东天机工业智能系统有限公司 Shoe upper grinding device
CN111185805B (en) * 2019-12-20 2022-04-22 上海航天设备制造总厂有限公司 Automatic polishing method for box body with complex structure
CN112189948A (en) * 2020-09-09 2021-01-08 泛擎科技(深圳)有限公司 Rapid identification glue spraying method and system for vamps and soles
CN112620989A (en) * 2020-11-11 2021-04-09 郑智宏 Automatic welding method based on three-dimensional visual guidance
CN112894807A (en) * 2021-01-13 2021-06-04 深圳市玄羽科技有限公司 Industrial automation control device and method
CN114619464A (en) * 2022-03-28 2022-06-14 深慧视(深圳)科技有限公司 Quick self-adaptation robot processing apparatus based on machine vision

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2519075A1 (en) * 2003-03-24 2004-10-07 D3D, L.P. Laser digitizer system for dental applications
CN101520319A (en) * 2008-02-27 2009-09-02 邹小平 Composite three-dimensional laser measurement system and measurement method
CN104161353A (en) * 2014-07-31 2014-11-26 黑金刚(福建)自动化科技股份公司 Automatic glue spraying device and method for vamp
CN105212436A (en) * 2014-06-12 2016-01-06 深圳市精易迅科技有限公司 The measuring system of non-contact 3-D laser foot type and measuring method
CN106003093A (en) * 2016-07-15 2016-10-12 上海瑞尔实业有限公司 Intelligent and automatic 3D-scanning visual adhesive dispensing system and method
CN106238969A (en) * 2016-02-23 2016-12-21 南京中建化工设备制造有限公司 Non-standard automatic welding system of processing based on structure light vision
CN107538508A (en) * 2017-02-16 2018-01-05 北京卫星环境工程研究所 The robot automatic assembly method and system of view-based access control model positioning
CN208497017U (en) * 2018-04-02 2019-02-15 深圳市易泰三维科技有限公司 A kind of automatic processing system of the robot based on 3D vision

Also Published As

Publication number Publication date
CN108326879A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108326879B (en) Automatic machining system and machining method of robot based on 3D vision
CN109671123A (en) A kind of sole glue spraying equipment and method based on monocular vision
CN101839700A (en) Non-contact image measuring system
US20190193268A1 (en) Robotic arm processing system and method, and non-transitory computer-readable storage medium therefor
KR20010033900A (en) Electronics assembly apparatus with stereo vision linescan sensor
CN106091926A (en) The detection apparatus and method of the miniature workpiece inside groove size of the asynchronous exposure of multi-point source
TW201726019A (en) Method and system for positioning shoe parts in an automated manner during a shoe-manufacturing process
CN105651177A (en) Measuring system suitable for measuring complex structure
CN103499870A (en) Automatic focusing equipment of high-pixel module
CN110524697B (en) Automatic glaze spraying system for toilet bowl blank and positioning method thereof
CN111067197A (en) Robot sole dynamic gluing system and method based on 3D scanning
CN105834120A (en) Fully automatic ABS gear ring defect detection system based on machine vision
CN105269403A (en) Detecting system and detecting method
CN203385925U (en) Automatic focusing device of high-pixel module group
CN110281152A (en) A kind of robot constant force polishing paths planning method and system based on online examination touching
CN205580380U (en) Measurement system suitable for measure complex construction
CN212180655U (en) Cell-phone glass apron arc limit defect detecting device
CN105651179B (en) A kind of light filling is adjustable and the visual imaging measuring system of automated exchanged cutter
CN208497017U (en) A kind of automatic processing system of the robot based on 3D vision
CN111633549A (en) Intelligent dual-robot detection grinding and polishing system for heterogeneous pieces and machining method
CN108592819A (en) A kind of plain bending sheet metal component section flexure contour detecting device and method
CN105783714B (en) A kind of light filling is adjustable and the measuring system of measurable side or inside
CN205607326U (en) Light filling is adjustable and automatically clamped's vision imaging measurement system
CN109190500B (en) Double-pin full-data scanning method and device
CN115963113A (en) Workpiece glue tank gluing detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant