WO2021237351A1 - Fixture with vision system - Google Patents

Fixture with vision system

Info

Publication number
WO2021237351A1
Authority
WO
WIPO (PCT)
Prior art keywords
fixture assembly
parts
imaging device
profile data
location
Prior art date
Application number
PCT/CA2021/050710
Other languages
French (fr)
Inventor
Eric Denijs
Original Assignee
Magna International Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magna International Inc. filed Critical Magna International Inc.
Publication of WO2021237351A1 publication Critical patent/WO2021237351A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/005 Manipulators for mechanical processing tasks
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1687 Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B5/00 Measuring arrangements characterised by the use of mechanical techniques
    • G01B5/0002 Arrangements for supporting, fixing or guiding the measuring instrument or the object to be measured
    • G01B5/0004 Supports
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging


Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A fixture assembly for holding at least two parts at a location and orientation to form an interface surface therebetween such that they can be connected to form a larger component. The fixture assembly includes an imaging device for capturing at least one of the shape, presence, location, and orientation of the at least two parts. The fixture assembly further includes a processor and a memory device. The memory device includes a component profile data and receives captures from the at least one imaging device. The processor is configured to compare the captures from the imaging device with the component profile data and indicate when at least one of the shape, presence, location, and orientation of the at least two parts matches the component profile data. Once the parts match the component profile, the interface surfaces can be connected.

Description

FIXTURE WITH VISION SYSTEM
CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This PCT International Patent Application claims the benefit of and priority to
U.S. Provisional Patent Application Serial No. 63/030,191 filed on May 26, 2020, titled “Fixture With Vision System,” the entire disclosure of which is hereby incorporated by reference.
BACKGROUND OF THE INVENTION 1. Field of the Invention
[0002] The present invention relates to a fixture assembly with a vision system for capturing the shape, presence, location, and orientation of two parts, the two parts being joined to form a larger component.
2. Related Art
[0003] This section provides background information related to the present disclosure which is not necessarily prior art.
[0004] Productivity and efficiency are the goals in any production cycle. For some industries, like the automobile industry, production cycles can include large, multi-step operations, wherein a component is assembled out of several smaller parts. Production cycles often begin by forming the smaller parts with one of a large number of complex and expensive forming assemblies, such as stamping, extruding, or casting assemblies. While forming assembly technology has advanced enough that individual parts can be formed with great precision, connecting formed parts to one another with accuracy and uniformity can be difficult, and oftentimes components that have been assembled in the same production cycle have variances. However, as industry standards continue to rise, stricter and stricter tolerances are required. [0005] To improve uniformity between components, many manufacturers use fixture assemblies for locating the various formed parts before they are connected together. More particularly, fixture assemblies provide a template with clamps and other holding devices so that when each formed part is placed in a respective holding device, the parts form an accurate representation of the component and can then be connected to one another. Fixture assemblies also typically include a series of integrated sensors that are used to detect the location of the formed parts. While the use of sensors results in more accurate component construction, the sensors also require a significant amount of complicated wiring, which adds large upfront capital costs and also negatively impacts productivity because integration takes a large amount of time. In addition, when sensors are integrated into fixture assemblies, they are prone to damage and displacement.
[0006] Accordingly, there is a continuing desire to develop fixture assemblies to maintain accurate and efficient production cycles.
SUMMARY OF THE INVENTION
[0007] This section provides a general summary of the disclosure and is not to be interpreted as a complete and comprehensive listing of all of the objects, aspects, features and advantages associated with the present disclosure.
[0008] The subject invention provides a fixture assembly. The fixture assembly comprises at least one holding device for holding at least two parts at a location and orientation to form an interface surface therebetween. At least one imaging device is spaced from the at least one holding device for capturing at least one of a shape, the location, and the orientation of the at least two parts. The fixture assembly includes a processor and a memory device. The memory device has a component profile data that includes a shape, location, and orientation of a component to be formed from the at least two parts. The memory device further contains instructions that, when executed by the processor, cause the processor to: receive the at least one capture from the at least one imaging device; compare the at least one capture from the at least one imaging device with the component profile data and generate a signal when at least one of the shape, location, and orientation of the at least two parts matches the component profile data. [0009] Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS [0010] The drawings described herein are for illustrative purposes only of selected embodiments and are not intended to limit the scope of the present disclosure. The inventive concepts associated with the present disclosure will be more readily understood by reference to the following description in combination with the accompanying drawings wherein:
[0011] Figure 1 is a schematic view of a fixture assembly with a vision system;
[0012] Figure 2 is a schematic view of an imaging device for detecting the shape, presence, location, and orientation of at least two parts that are to be connected to one another; [0013] Figure 3 is a schematic view of a vision system circuit;
[0014] Figure 4A is a method flow chart illustrating the steps of assembling a component out of two or more parts; and
[0015] Figure 4B is a continuation of the method flow chart in Figure 4A.
DESCRIPTION OF THE ENABLING EMBODIMENT [0016] Example embodiments will now be described more fully with reference to the accompanying drawings. In general, the subject embodiments are directed to a fixture assembly with a vision system. However, the example embodiments are only provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth, such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
[0017] Referring to the Figures, wherein like numerals indicate corresponding parts throughout the views, the fixture assembly with a vision system, e.g., fixture assembly 20, is intended to provide a template for accurately arranging two or more parts before they are connected.
[0018] Referring initially to Figure 1, the fixture assembly 20 is schematically illustrated.
The fixture assembly 20 includes a fixture holding tool 22 and a vision system 24 that includes at least one imaging device 26. The at least one imaging device 26 is spaced from the fixture holding tool 22 and may employ one of numerous techniques to detect the shape, presence, location, and orientation of the fixture holding tool 22 and at least two parts 28 so that the at least two parts 28 can be connected together to form a component 30. The fixture assembly 20 includes a series of holding devices 32, such as clamps, slides, cylinders, fasteners, nuts, other holding mechanisms, or a combination thereof, to hold the at least two parts 28 in a location and orientation where they form an interface surface 31 at which they can be connected to one another. The vision system 24 further includes at least one controller 34 in operable communication with the imaging device 26 for receiving image data from the at least one imaging device 26 and comparing it to at least one predetermined parameter. The vision system 24 further includes a user interface 36 in operable communication with the controller 34 for additional functionality, for example, changing the at least one predetermined parameter.
[0019] With continued reference to Figure 1, the vision system 24 may further include a series of target units 38 connected to various locations on the fixture assembly 20 and/or parts 28. The target units 38 provide a frame of reference for the parts 28 as they are located via the holding devices 32. For example, target units 38 may be connected to the holding devices 32 (e.g., clamps) to monitor whether the holding device 32 is in an open, partially tightened, or tightened position. The fixture assembly 20 further includes a connection assembly 40 for connecting the at least two parts 28 once the at least two parts 28 are located in the predetermined parameters. The connection assembly 40 may include a robotic arm 42 carrying a connecting unit 44, such as a welding or riveting tool. The connection assembly 40 may be in operable communication with the controller 34 for automatically connecting the at least two parts 28 once they are located in the predetermined parameters. It should be appreciated that the connection assembly 40 could alternatively include other tools and mechanisms that connect the two or more parts 28 and may also be manually operated instead of being attached to a robot arm 42.
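By way of a non-limiting illustration of the clamp-monitoring example above, the following Python sketch classifies a clamp-mounted target unit's detected opening angle into the three positions mentioned; the single-angle representation and the thresholds are assumptions introduced here for illustration, not part of the disclosure.

```python
def clamp_state(target_angle_deg: float) -> str:
    """Map a clamp-mounted target unit's detected opening angle to a
    clamp position. Thresholds are illustrative assumptions only."""
    if target_angle_deg > 60.0:
        return "open"
    if target_angle_deg > 5.0:
        return "partially tightened"
    return "tightened"
```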
[0020] In addition to helping locate parts 28, the target units 38 may provide a scale reference, so that the connection assembly 40 can weld along a predetermined distance or the rivet tool can place rivets within a certain distance of one another. For example, at least two target units 38 may be located on one of the parts 28, each adjoining part 28, the fixture holding tool 22, or a combination thereof, such that the spacing and orientation between the target units 38 provide orientation and distance information. In some embodiments, at least one target unit 38 or a plurality of target units 38 may be placed on or around the interface 31 or one or two adjoining parts 28. The target units 38 may include a specific color and/or shape that the imaging device is configured to recognize. Therefore, in some embodiments, the target units 38 may include surface markings (e.g., paint) or removable bodies (e.g., magnetic buttons). In some embodiments, the target units 38 include location-aware electronics, such as RFID technology. [0021] With reference now to Figure 2, an example imaging device 26 is illustrated. In some embodiments, the imaging device 26 utilizes light detection and ranging (LiDAR) functionality and includes a laser source 46 that projects pulses of light onto a fixed mirror 48, the fixed mirror 48 then reflects the pulses of light to a rotating mirror 50, and the rotating mirror 50 then reflects the pulses of light to the part 28 and/or target units 38. The pulses of light that contact the part 28 and/or target units 38 are then reflected back between mirrors 48 and 50 towards a laser reader 52 located near the laser source 46. The time that it takes the pulses of light to leave the laser source 46 and return to the laser reader 52 thus provides an accurate representation of the presence, shape, location, and orientation of the part 28 and/or target unit 38. The pulses of light may be ultraviolet (UV), infrared (IR) or near-IR, or other wavelengths. In addition, the imaging device 26 may employ other technologies such as depth cameras, 3D imaging cameras, RFID tracking, etc. For example, the imaging device 26 may include SICK 3D Vision sensors, a Zivid One Plus, an Intel® RealSense™ Depth Camera D435i, or other instruments. In some embodiments, the imaging device 26 includes one or more technologies that simultaneously develop a shape and orientation of the part 28 and a location and orientation of the target units 38, wherein readings can be compared for accuracy confirmation.
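The range measurement underlying the LiDAR arrangement above follows directly from the round-trip time of each pulse. The following Python sketch shows that time-of-flight calculation and a simplified conversion of a pulse into a 3D sample; the mirror geometry model and the function names are illustrative assumptions, not elements of the disclosure.

```python
import math

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(emit_time_s: float, return_time_s: float) -> float:
    """Range to the reflecting surface from one pulse's round trip;
    the 0.5 factor accounts for the out-and-back path."""
    round_trip_s = return_time_s - emit_time_s
    if round_trip_s < 0:
        raise ValueError("return time precedes emit time")
    return 0.5 * SPEED_OF_LIGHT_M_S * round_trip_s

def point_from_pulse(emit_time_s: float, return_time_s: float,
                     azimuth_rad: float, elevation_rad: float):
    """One (x, y, z) sample, assuming the rotating mirror sweeps azimuth
    while the fixed geometry sets a known elevation (simplified model)."""
    r = tof_distance_m(emit_time_s, return_time_s)
    return (r * math.cos(elevation_rad) * math.cos(azimuth_rad),
            r * math.cos(elevation_rad) * math.sin(azimuth_rad),
            r * math.sin(elevation_rad))
```

For example, a pulse returning about 66.7 ns after emission corresponds to a range of roughly 10 m.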
[0022] With reference now to Figure 3, a vision system circuit 200 is schematically illustrated. Elements of the vision system circuit 200 may be in a local or remote location. The various elements provided therein allow for a specific implementation; thus, one of ordinary skill in the art of electronics and circuits may substitute various components to achieve a similar functionality. The vision system circuit 200 includes a CPU circuit 202 associated with the controller 34, an imaging system 204 associated with the imaging device 26, a user interface system 205 associated with the user interface 36, and a connecting operations circuit 206 associated with the connection assembly 40.
[0023] The CPU circuit 202 includes the controller 34, which includes a processor 210, a communications unit 212 (for example, associated with wired 220 or wireless 222 internet, Bluetooth, or other short- and long-range connections), and a memory 214 having machine-readable non-transitory storage. The memory 214 may include instructions that, when executed by the processor 210, cause the processor 210 to, at least, perform the methods described herein. Programs and/or software 216 are saved on the memory 214, as is data 218 obtained via the imaging system 204 (e.g., captures) and the user interface system 205 (operation selections). The processor 210 carries out instructions based on the software 216 and data 218, for example, providing instructions to the connecting operations circuit 206 to perform one of the welding, riveting, and/or fastening operations on the parts 28. Communications between the CPU circuit 202, the imaging system 204, the user interface system 205, and the connecting operations circuit 206 are communicated to and from the communications unit 212 (wired 220 or wireless 222), allowing one or both of transmittal and receipt of information. As such, software 216 and data 218 may be updated via instructions from the user interface system 205, which may be in communication with a central server, a cloud server, or a combination thereof.
[0024] The imaging system 204 includes imaging devices 26A-26N, where N may be any natural number. The imaging devices 26A-26N communicate captures of the part 28 to the CPU circuit 202, which, in response, can extrapolate the captures into a shape, presence, location, and orientation of the part and then compare the extrapolation to predetermined parameters (e.g., a 3D computer rendition of the component 30). Once the predetermined parameters are met, the CPU circuit 202 then communicates to the connecting operations circuit 206 to begin connecting the parts 28. The CPU circuit 202 may further include an alarm 224 for providing a visual or auditory notice to an operator once the parts 28 match the predetermined parameters. As such, certain safety protocols may be stored within the memory 214 to prevent any operations until the parts 28 match the predetermined parameters.
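The compare-and-signal logic described above can be summarized as follows. This is a minimal sketch assuming a simple pose representation and fixed tolerances; the names `Pose`, `within_tolerance`, and `check_parts`, and the tolerance values, are hypothetical and introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # location (mm)
    y: float
    z: float
    roll: float     # orientation (degrees)
    pitch: float
    yaw: float

def within_tolerance(measured: Pose, target: Pose,
                     pos_tol_mm: float = 0.5, ang_tol_deg: float = 0.2) -> bool:
    """True when a part's measured location and orientation match the
    component profile data within the assumed tolerances."""
    return (abs(measured.x - target.x) <= pos_tol_mm
            and abs(measured.y - target.y) <= pos_tol_mm
            and abs(measured.z - target.z) <= pos_tol_mm
            and abs(measured.roll - target.roll) <= ang_tol_deg
            and abs(measured.pitch - target.pitch) <= ang_tol_deg
            and abs(measured.yaw - target.yaw) <= ang_tol_deg)

def check_parts(captures: dict, profile_poses: dict) -> bool:
    """Compare every part's extrapolated pose to the profile; a True
    result corresponds to generating the match signal (and alarm 224)."""
    return all(within_tolerance(captures[pid], profile_poses[pid])
               for pid in profile_poses)
```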
[0025] The predetermined parameters (e.g., a 3D computer rendition of the component
30) and their association with certain types of parts 28 or components 30 can be saved in the memory 214. For example, a component profile data 226 (e.g., a 3D computer rendition of the component 30) may be saved in memory 214. The component profile data 226 may include several profiles 226 related to specific components, such as a variety of automobile components. Each component profile data 226 may include the number of parts 28 needed to form the component 30 as well as the shape, location, and orientation of each part 28. The component profile data 226 may further include interface surface locations and connection instructions for the connecting operations circuit 206, such as location information for welding, riveting, or other fastening/connecting means.
[0026] Target location data 228 associated with the target units 38 may also be saved in the memory 214. Target location data 228 may be initially gathered by communications from the imaging devices 26 to the CPU circuit 202. The memory 214 may also include connecting operations data 230 that is associated with the component profile data 226, such that when a component profile data 226 is selected, only connecting operations data 230 that can be implemented on the component 30 associated with that component profile data 226 can also be selected. In other words, if the connecting operations data 230 provides that a certain type of connection technique (e.g., rivets) is not appropriate for a certain component, the CPU circuit 202 may generate a warning, prevent selection of the inappropriate connection technique, or require a bypass password.
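One possible in-memory layout for the component profile data 226 and its gating of the connecting operations data 230 is sketched below, assuming a simple Python data class; the field names and the `select_operation` helper are illustrative, not taken from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentProfile:
    """Illustrative layout for one entry of component profile data 226."""
    name: str
    part_count: int
    part_poses: dict           # part id -> expected shape/location/orientation
    interface_locations: list  # where the interface surfaces lie
    allowed_operations: set = field(default_factory=set)  # e.g. {"weld"}

def select_operation(profile: ComponentProfile, requested: str) -> str:
    """Gate the connecting operations data by the selected profile: an
    inappropriate technique is refused here (a warning or bypass
    password could be substituted, per the description)."""
    if requested not in profile.allowed_operations:
        raise PermissionError(
            f"{requested!r} is not approved for component {profile.name!r}")
    return requested
```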
[0027] Target location data 228 may be used to modify the scale of the component profile data 226 and the connecting operations data 230. In addition, the CPU circuit 202 may be configured to periodically check the target location data 228 to ensure a uniform orientation between cycles via detections from the imaging system 204. As such, if one of the target units 38 is moved with respect to the other target units 38, the alarm 224 may provide a visual or auditory notice to an operator and the CPU circuit 202 may generate a safety protocol to prevent any further operations until the displaced target unit 38 is realigned. In some embodiments, the target location data 228, the component profile data 226, the image capturing data 218, or a combination thereof are compared before the parts 28 are joined.
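The two uses of target location data 228 described above (rescaling stored coordinates and detecting a displaced target unit) might look like the following sketch; the 2D coordinate representation and the 1 mm drift tolerance are assumptions introduced for illustration.

```python
import math

def scale_factor(detected_spacing_mm: float, nominal_spacing_mm: float) -> float:
    """Ratio for rescaling component profile and connecting operations
    coordinates when the detected target spacing differs from nominal."""
    return detected_spacing_mm / nominal_spacing_mm

def targets_aligned(stored: dict, detected: dict, tol_mm: float = 1.0) -> bool:
    """Periodic check that no target unit has moved relative to its
    stored location; False would trigger alarm 224 and halt operations."""
    return all(math.hypot(detected[tid][0] - x, detected[tid][1] - y) <= tol_mm
               for tid, (x, y) in stored.items())
```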
[0028] The subject invention further includes a method 300 including several steps of assembling a component out of two or more parts with a fixture assembly. The method 300 includes providing 302 a fixture assembly having at least one holding device for holding at least two parts in a specific location and orientation to form an interface surface therebetween. The method may further include placing 304 target units at various locations on the fixture assembly to form a frame of reference. A component profile data is then selected 306 that corresponds to the component that is to be assembled. Once the component profile data is selected 306, a connecting operation data can be selected 308 based at least in part on which component profile data was selected. The method 300 further includes placing 310 at least two parts into the fixture assembly. Step 310 may further include placing 312 the at least two parts in holding devices on the fixture assembly. Step 310 may further include partially tightening 314 the holding devices. Step 310 may further include placing target units on at least one of the parts. The method 300 further includes adjusting 316 the parts until they fit a component profile. Step 316 may further include capturing 318 an image of the parts, via an imaging device, and comparing 320 the captured image to the component profile. Step 320 may include capturing 322 the shape, presence, location, and orientation of the at least two parts. Therefore, if a part with a non-conforming shape is present, a notification may be generated. Step 320 may further include using the target units 324 for a frame of reference, scale, or orientation.
[0029] With reference now to a continuation of the method 300 provided in Figure 4B, the method 300 may also include generating a signal 326 to an operator that the parts match the component profile (e.g., size, location, and orientation). After the parts match the component profile, the holding devices may be completely tightened 328 and the parts can be re-compared 330 to the component profile data and the target unit data to ensure no displacement occurred during the tightening of the holding devices. After the parts are held in conformance with the component profile, a connecting operation 332 is performed. Based on the reference to the connecting operations data at step 308, the method 300 further includes controlling 336 a robotic arm to connect two parts. Step 336 may include welding, riveting, or other fastening operations 338 and may further include using the target unit as a reference to scale 340 the size of a connection operation between parts. The method 300 may further include using the imaging device to check 342 the quality of the connection between the at least two parts and may rely on the connection operations data when assessing the quality (e.g., weldment size and location, rivet location, etc.); a condensed sketch of this flow follows paragraph [0030]. [0030] Implementations of the systems, algorithms, methods, instructions, etc., described herein can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably.
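As a non-limiting illustration, the condensed flow of method 300 (steps 302 through 342) is sketched below; the `fixture`, `imaging`, and `connector` objects and every method name on them are hypothetical stand-ins, not an API defined by the disclosure.

```python
def assemble_component(fixture, imaging, connector, profile,
                       max_adjust_cycles: int = 10) -> bool:
    """Condensed sketch of method 300; all collaborator interfaces are
    hypothetical. Returns True when the connection passes inspection."""
    fixture.place_parts()                    # steps 310-312
    fixture.partially_tighten()              # step 314
    for _ in range(max_adjust_cycles):       # step 316: adjust until fit
        capture = imaging.capture()          # step 318
        if capture.matches(profile):         # steps 320-326: compare, signal
            break
        fixture.adjust_parts(capture.deviation_from(profile))
    else:
        return False                         # never converged; operator intervenes
    fixture.fully_tighten()                  # step 328
    if not imaging.capture().matches(profile):
        return False                         # step 330: displaced while tightening
    connector.connect(profile.interface_locations)   # steps 332-338
    return imaging.inspect_connection(profile)       # step 342: quality check
```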
[0031] Further, in one aspect, for example, systems described herein can be implemented using a general-purpose computer or general-purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition, or alternatively, for example, a special purpose computer/processor can be utilized which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.
[0032] It should be appreciated that the foregoing description of the embodiments has been provided for purposes of illustration. In other words, the subject disclosure is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.

Claims

CLAIMS What is claimed is:
1. A fixture assembly comprising: at least one holding device for holding at least two parts at a location and orientation to form an interface surface therebetween; at least one imaging device spaced from the at least one holding device for capturing at least one of a shape, the location, and the orientation of the at least two parts; a processor; and a memory device having a component profile data that includes a shape, location, and orientation of a component to be formed from the at least two parts, the memory device further containing instructions that, when executed by the processor, cause the processor to: receive the at least one capture from the at least one imaging device; compare the at least one capture from the at least one imaging device with the component profile data and generate a signal when at least one of the shape, location, and orientation of the at least two parts matches the component profile data.
2. The fixture assembly of Claim 1, wherein, in response to the at least two parts matching the component profile data, the processor is further caused to perform a connecting operation along the interface surface with a robotic arm and a connecting unit.
3. The fixture assembly of Claim 2, wherein the connecting unit performs at least one of a welding or riveting operation at the interface surface.
4. The fixture assembly of Claim 1, further including at least two target units placed on the fixture assembly for providing a scale to the at least one capture from the at least one imaging device.
5. The fixture assembly of Claim 4, wherein the target units include at least one of a color and shape that the at least one imaging device is configured to recognize.
6. The fixture assembly of Claim 5, wherein the target units include surface markings.
7. The fixture assembly of Claim 5, wherein the target units include removable bodies.
8. The fixture assembly of Claim 4, wherein the target units include location-aware technology.
9. The fixture assembly of Claim 1, wherein the memory further includes connection operations data associated with the component profile data.
10. The fixture assembly of Claim 1, wherein the memory includes a plurality of component profile data, and wherein one of the component profile data is selected before comparing to the at least one capture from the at least one imaging device.
11. The fixture assembly of Claim 1, wherein the at least one imaging device is configured for light detection and ranging.
12. The fixture assembly of Claim 11, wherein the at least one imaging device includes a laser source, a fixed mirror, a rotating mirror, and a laser reader.
13. The fixture assembly of Claim 12, wherein the processor is further caused to generate a signal to the laser source to release a plurality of pulses of light.
14. The fixture assembly of Claim 13, wherein the plurality of pulses of light include one of ultraviolet (UV), infrared (IR), and near IR wavelengths.
15. The fixture assembly of Claim 1, wherein the at least one imaging device includes a plurality of imaging devices.
PCT/CA2021/050710 2020-05-26 2021-05-26 Fixture with vision system WO2021237351A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063030191P 2020-05-26 2020-05-26
US63/030,191 2020-05-26

Publications (1)

Publication Number Publication Date
WO2021237351A1 true WO2021237351A1 (en) 2021-12-02

Family

ID=78745699

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2021/050710 WO2021237351A1 (en) 2020-05-26 2021-05-26 Fixture with vision system

Country Status (1)

Country Link
WO (1) WO2021237351A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401351A (en) * 2022-08-16 2022-11-29 浙江鸿昌铝业有限公司 Welding process of aluminum alloy section

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180243897A1 (en) * 2015-08-25 2018-08-30 Kawasaki Jukogyo Kabushiki Kaisha Remote control robot system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180243897A1 (en) * 2015-08-25 2018-08-30 Kawasaki Jukogyo Kabushiki Kaisha Remote control robot system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401351A (en) * 2022-08-16 2022-11-29 浙江鸿昌铝业有限公司 Welding process of aluminum alloy section

Similar Documents

Publication Publication Date Title
US10546167B2 (en) System and method of operating a manufacturing cell
JP7162277B2 (en) tool system
JP6310329B2 (en) Fastening guarantee system for vehicle assembly and control method thereof
WO2013099373A1 (en) Work management apparatus and work management system
EP3480552B1 (en) Apparatus for inspecting precision countersinks in aircraft structures by machine vision
Martinez et al. Automated bin picking system for randomly located industrial parts
CN103776378A (en) Non-contact type flexible on-line dimension measurement system
US11774934B2 (en) Facility diagnosis method using facility diagnosis system
US20200242413A1 (en) Machine vision and robotic installation systems and methods
KR20210019014A (en) Method and plant for determining the location of a point on a complex surface of space
WO2021237351A1 (en) Fixture with vision system
Rusli et al. Fastener identification and assembly verification via machine vision
CA2950108C (en) Lighting for industrial image processing
CN105689903A (en) SYSTEM AND METHOD FOR FORMING HOLES onto A SHEET-METAL ASSEMBLY
US20220402136A1 (en) System and Method for Robotic Evaluation
Singh et al. Vision-Sensor Fusion-Based Low-Cost Dimension Measurement System for Machining Shop Floor
JP2020197806A (en) Detection system
EP3811167B1 (en) Workbench system
KR20210120229A (en) Image-based jig inspection system and method
KR101803473B1 (en) Apparatus and method for inspecting parts using line scanning
WO2024070189A1 (en) Factor analysis device and factor analysis method
WO2023132131A1 (en) Inspection device, inspection method, and program
JP5123127B2 (en) Image processing device for determining the position of a workpiece
KR101991277B1 (en) An automobile parts quality assurancing method using markers and a device thereof
JP2021043769A (en) Work monitoring system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21812456

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21812456

Country of ref document: EP

Kind code of ref document: A1