CN117616247A - System and method for error correction of video extensometer - Google Patents

System and method for error correction of video extensometer

Info

Publication number
CN117616247A
Authority
CN
China
Prior art keywords
imaging device
sample
data
sensor
test
Prior art date
Legal status
Pending
Application number
CN202280048408.9A
Other languages
Chinese (zh)
Inventor
艾德里安·查尔斯·里迪克
迈克尔·阿什曼
威尔逊·维莱加斯
马丁·艾伦·佩特森
埃里克·斯蒂伯
理查德·巴尔卡雷克
Current Assignee
Illinois Tool Works Inc
Original Assignee
Illinois Tool Works Inc
Priority date
Filing date
Publication date
Priority claimed from US 17/739,243 (US20220381660A1)
Application filed by Illinois Tool Works Inc filed Critical Illinois Tool Works Inc
Priority claimed from PCT/US2022/028440 (WO2022250947A1)
Publication of CN117616247A

Classifications

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure describes systems and methods of compensating for errors in video extensometer systems, including noise, perspective variations, and/or component placement and/or operation.

Description

System and method for error correction of video extensometer
Related application
The present application claims the benefit of and priority to U.S. provisional application No. 63/180,288, entitled "Systems And Methods For Error Correction For Video Extensometers," filed on May 27, 2021. The entire subject matter and content of U.S. provisional application No. 63/180,288 is incorporated herein by reference in its entirety.
Background
Camera-based vision systems have been implemented as part of material testing systems that measure specimen strain. These systems collect images of one or more samples under test that are synchronized with other signals of interest in the test (e.g., sample load, machine actuator/cross head displacement, etc.). As the test progresses, images of the test specimen may be analyzed to locate and track specific features of the specimen. The change in position of these features, for example, the relative change in position of one or more reference features of the sample, allows for the calculation of local sample deformation and thus sample strain.
Conventional systems employ cameras or other imaging systems to capture images to measure characteristics of the sample. However, imaging and/or measurement discrepancies between the sensed reference and actual positions may lead to distorted readings and measurement inaccuracies. Therefore, a system for correcting such errors is needed.
Disclosure of Invention
Systems and methods for correcting and/or compensating for imaging errors in video extensometer systems are disclosed herein. These and other features and advantages of the present invention will become apparent from the following detailed description and appended claims.
Drawings
The benefits and advantages of the present invention will become more readily apparent to those of ordinary skill in the relevant art after reviewing the following detailed description and accompanying drawings, wherein:
FIG. 1 is a block diagram of an example extensometer system in accordance with aspects of the present disclosure.
FIG. 2 is an example sample for making measurements in the extensometer system of FIG. 1, in accordance with aspects of the present disclosure.
FIG. 3 is a block diagram of another view of the example extensometer system of FIG. 1 in accordance with aspects of the present disclosure.
Fig. 4 is a block diagram of an example embodiment of the extensometer system of fig. 1 in accordance with aspects of the present disclosure.
Fig. A1-A5 are block diagrams of example extensometer systems according to aspects of the present disclosure.
Fig. B1-B3 are block diagrams of example extensometer systems according to aspects of the present disclosure.
Fig. C1-C8 are block diagrams of example extensometer systems according to aspects of the present disclosure.
Fig. D1 and D2 are block diagrams of example extensometer systems according to aspects of the present disclosure.
The figures are not necessarily drawn to scale. Wherever appropriate, like or identical reference numerals have been used to designate like or identical elements.
Detailed Description
The present disclosure describes systems and methods of compensating for errors in video extensometer systems, including noise, perspective variations, and/or component placement and/or operation.
In particular, the disclosed examples provide systems and methods for mitigating noise by using a fluid delivery system, a vibration control system, and/or saturation control in a video extensometer system. Further, the disclosed examples provide systems and methods that address imaging challenges by using multiple cameras in a video extensometer system. Further, the disclosed examples provide systems and methods for monitoring sample changes by monitoring reference patterns and/or objects in a video extensometer system. Further, the disclosed examples provide systems and methods for mitigating heat and/or external vibrations by employing compensation techniques (including active, passive, and/or processing-based) in a video extensometer system.
Conventional systems are subject to one or more errors in testing and measuring one or more physical properties of a sample. Errors may be due to system component constraints (e.g., physical/operational constraints of the component, operational impact on related components, etc.), system calibration (e.g., for measuring different materials/samples), and/or measurement and/or analysis constraints (e.g., collection and analysis of measured characteristics, etc.).
Some conventional testing systems employ camera-based vision systems to capture information (e.g., measurements of one or more characteristics or geometric variables) during a material test (e.g., to determine strain of a specimen). Such a system may capture multiple images of the specimen and synchronize these images with other information related to the testing process (e.g., specimen load, machine actuator/cross head displacement, etc.). The image of the sample may then be analyzed by one or more algorithms to identify and/or locate specific features (including reference features) of the sample, and to track these features as the test operation proceeds. The change in absolute and/or relative positions of these features allows for the calculation of local sample deformation and thus sample strain.
The sample feature of interest may include a marker (e.g., a reference feature) affixed to the surface of the sample that is visible to the camera. For example, the processor may analyze the images to determine the location and/or geometry of the markers (and any changes thereto), and track the markers as they move relative to one another during testing. There may be multiple markings on the front of the specimen, such as paired sets (axial markings, transverse markings, etc.) for determining strain measurements based on gauge length, or a quasi-random speckle pattern used with Digital Image Correlation (DIC) techniques. Another group of features that may be of interest in determining transverse specimen strain is the edges of the specimen.
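As an illustration of the gauge-length arithmetic described above, the following minimal Python sketch computes engineering strain from two tracked marker centroids; the function and coordinate values are illustrative assumptions, not part of the disclosed system.

```python
import numpy as np

def axial_strain(p_top_0, p_bot_0, p_top_t, p_bot_t):
    """Engineering strain from two tracked marker centroids (a sketch).

    p_*_0: (x, y) marker positions at the start of the test (gauge length L0).
    p_*_t: (x, y) marker positions in the current frame (gauge length Lt).
    Units cancel in the ratio, so pixel or physical coordinates both work.
    """
    L0 = np.linalg.norm(np.subtract(p_top_0, p_bot_0))
    Lt = np.linalg.norm(np.subtract(p_top_t, p_bot_t))
    return (Lt - L0) / L0

# Markers 400 px apart initially, 410 px apart under load -> 2.5% axial strain
print(axial_strain((320, 100), (320, 500), (320, 95), (320, 505)))
```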
For single or multiple camera measurement systems, the calibration process may be performed on a selected calibration plane at a predetermined distance from the image sensor. The calibration process establishes a relationship between one or more characteristics (e.g., size, position, width, etc.) captured by the imaging device and one or more physical characteristics (e.g., determined in physical coordinates) on the calibration plane.
Such a calibration process may use a calibration reference device positioned on a calibration plane. The reference device includes predetermined physical characteristics having known geometries associated with covering some or all of a field of view (FOV) of interest. The calibration process enables an image of the calibration device to be captured and compared to known calibration device geometries while a transfer function is established to convert the image coordinates from a pixel coordinate system to a physical coordinate system of the real world.
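In its simplest form, such a transfer function is a single scale factor. The sketch below fits a millimeters-per-pixel scale from known calibration-target feature positions; a production calibration would also model offset, rotation, and lens distortion, and all names and values here are hypothetical.

```python
import numpy as np

def calibration_scale(px_positions, physical_positions_mm):
    """Least-squares scale factor (mm/pixel) from a calibration target whose
    feature positions are known. Scale-only model; offset, rotation, and
    distortion terms are omitted for brevity."""
    px = np.asarray(px_positions, dtype=float)
    mm = np.asarray(physical_positions_mm, dtype=float)
    px -= px.mean()  # work in mean-centered coordinates
    mm -= mm.mean()
    return float(np.dot(mm, px) / np.dot(px, px))

scale = calibration_scale([100, 600, 1100], [0.0, 25.0, 50.0])
print(scale)  # 0.05 mm/pixel: the pixel-to-physical transfer function
```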
Conventional video extensometer systems track and measure the size and/or relative position of marks on the surface of a sample. During testing, an image processing algorithm is performed (e.g., by a processor of the video extensometer system) to determine the location of the markers on the sample surface. Based on the determined positions, the processor may calculate an initial specimen gauge length as well as instantaneous changes in specimen gauge length from its value at the beginning of the test (e.g., axial and/or lateral strain). The accuracy with which the video extensometer system can measure the absolute and/or relative position and/or change in position of the markers depends at least in part on whether the sample surface is coplanar with the initial calibration plane. A difference between the positions of the measurement plane (corresponding to the sample surface) and the calibration plane (corresponding to the reference plane) will produce a measurement error (e.g., a perspective error). As the deviation between the measurement plane and the reference plane increases (e.g., along the Z-axis between the specimen and the camera), larger measurement errors may result.
In some examples, a plurality of samples are subjected to a test procedure after a calibration procedure, each sample having a different thickness, and/or the thickness of the samples varies during the test procedure. Thus, the distance between the sample surface and the imaging device will vary during testing and/or for each sample.
In material testing applications where absolute dimensional measurements are sought, such perspective errors may be more problematic than in, for example, testing applications where the measured values are used to determine proportional (e.g., ratio) strains. In proportional strain measurements, the perspective error produces a similar proportional error in the initial gauge length measurement and the strain displacement measurement. Since strain is calculated as a displacement over the gauge length, the perspective errors cancel because they appear in both the numerator and denominator. Perspective errors may nonetheless degrade the accuracy of smaller strain measurements, where they may dominate the desired strain measurement signal.
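A short numeric sketch of this cancellation under an idealized pinhole-camera model (all values hypothetical): a fixed out-of-plane offset scales both gauge lengths by the same factor, so the ratio strain is unchanged even though the absolute dimension is biased.

```python
# Pinhole model: an object at Z_meas instead of the calibrated Z_cal appears
# scaled by k = Z_cal / Z_meas. The same k multiplies the initial and the
# displaced gauge lengths, so the ratio strain cancels the perspective error.
Z_cal, Z_meas = 500.0, 505.0                 # mm; 5 mm out-of-plane offset
k = Z_cal / Z_meas                           # apparent magnification (~0.990)
L0_true, Lt_true = 50.0, 50.5                # true gauge lengths (mm), 1% strain
L0_meas, Lt_meas = k * L0_true, k * Lt_true  # what the camera measures
strain_meas = (Lt_meas - L0_meas) / L0_meas  # ~0.01: error cancels in the ratio
bias_mm = L0_meas - L0_true                  # ~-0.495 mm: absolute bias remains
print(strain_meas, bias_mm)
```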
Conventional systems attempt to alleviate some of these problems through a variety of techniques, each of which has significant drawbacks. One option is to arrange the calibration plane at the mean or median plane of all test planes of interest, so as to minimize the perspective measurement error across samples with different thicknesses. Another option is to make physical adjustments to the sample mounting locations of the extensometers to compensate for different samples, in an attempt to maintain a single working distance equal to the calibration plane. Another option is to use telecentric optics, which are less sensitive to out-of-plane perspective errors, but are more expensive and have a more limited field of view. In yet another option, multiple cameras may be used to capture stereo information from different angles, which may be included in the calibration and sample measurement process.
However, existing solutions have drawbacks in mitigating the perspective errors encountered in video extensometer systems. For example, using an average distance as the calibration plane and/or using a less accurate measurement device necessarily results in less accurate measurements. Manually adjusting the extensometer mounting position to compensate for different sample thicknesses is time consuming and requires the operator to remember to make a different adjustment for each sample depending on its individual thickness. Furthermore, such adjustment is difficult to automate.
Telecentric optics are bulky, heavy, expensive, and have a limited field of view (FOV). Video extensometer systems utilizing multiple cameras are likewise expensive and complex, and require extensive three-dimensional calibration processes and equipment.
The disclosed systems and methods mitigate systematic and deterministic errors in video extensometers, including, as a list of non-limiting examples, errors caused by changes in Z-axis distance, shifts of the measurement plane relative to the calibration plane, and external noise (including heat and vibration). In some examples, these errors may be corrected in real time during the test.
As described herein, a material testing system, including a material testing system that applies tension, compression, and/or torsion, includes one or more components that create a displacement and/or load bearing to apply stress and/or measure stress on a specimen. In some examples, a video extensometer system is used in specimen strain testing, which may include one or more of acquiring a high resolution image, providing the image to an image processor, analyzing the image to identify one or more specimen characteristics corresponding to displacement or strain values, and generating an output corresponding to the characteristics.
Video extensometer processing may include an external machine vision imaging device connected to a processing system, computing platform, and/or video processing hardware, which converts data from the camera into electrical signals using software and/or hardware, or which has a software interface compatible with the material testing system.
As disclosed herein, an imaging device is implemented in a material testing system that employs a camera-based image capture (e.g., visual or video) system for measuring strain on a specimen. Such systems collect multiple images of the sample under test (i.e., during the test), which are synchronized with other signals of interest in the test (e.g., sample load, machine actuator and/or crosshead displacement, etc.). The images of the sample are analyzed by an algorithm (e.g., in real time and/or after testing) to locate and track specific sample characteristics as the test proceeds. Variations in the location, size, shape, etc. of such characteristics allow for calculation of sample deformation, which in turn enables analysis and calculation of sample strain.
Accordingly, the systems and methods disclosed herein compensate for errors in video extensometer systems, including noise, perspective variations, and/or component placement and/or operation.
In a disclosed example, a system for correcting errors in a test procedure in a video extensometer system comprises: a test system for fixing a sample; an imaging device arranged to capture an image of a surface of the sample; one or more sensors configured to measure one or more parameters associated with the sample; and a processing system for: receiving image data from the imaging device; receiving sensor data from the one or more sensors; comparing the image data or the sensor data to one or more data thresholds; calculating a correction factor based in part on the image data and the sensor data in response to the image data or the sensor data violating the one or more data thresholds; and issuing instructions to adjust the imaging device and system components based at least in part on the correction factor.
In some examples, the system component is an active cooler, an actuator, or an imaging device positioning system. In an example, the imaging device is a single vision camera. In an example, the imaging device is two or more cameras.
In some examples, the sensor is an accelerometer, an inertial measurement unit, a temperature sensor, an infrared sensor, a light emitting diode sensor, an ultrasonic sensor, or a laser sensor. In an example, the one or more parameters include one or more of a shape or position of the mark, an edge position of the sample, or a width of the sample. In an example, the correction factor is one of millimeters, inches, or pixel units.
In some examples, the processing system is located on a remote computing platform in communication with one or more of the testing system or the imaging device. In an example, the processing system is integrated with one of the imaging device or the testing system.
In some disclosed examples, a system for correcting errors in a test procedure in a video extensometer system comprises: a test system for fixing a sample; an imaging device arranged to capture an image of a surface of the sample; one or more motion sensors configured to measure one or more motion parameters associated with the video extensometer system; one or more actuators for adjusting a position or orientation of the imaging device; and a processing system for: receiving image data from the imaging device; receiving sensor data corresponding to vibration measurements from the one or more motion sensors; comparing the image data or the sensor data to one or more data thresholds; responsive to the sensor data violating the one or more data thresholds, calculating a correction factor based in part on the image data and the sensor data; and commanding the one or more actuators to adjust a position or orientation of the imaging device based at least in part on the correction factor.
In some examples, the one or more motion sensors include an accelerometer, an inertial measurement unit, a vibration sensor, or a tilt sensor. In an example, one or more motion sensors are disposed proximate to one or more image sensors to monitor and measure vibrations at the one or more image sensors.
In some examples, the one or more data thresholds correspond to one or more data threshold vibration values, and the processing system is further operable to associate the image data with vibration measurements that violate the one or more threshold vibration values. In an example, the processing system is further operable to apply a compensation factor to the image data at the relevant data points exceeding the one or more threshold vibration values to correct for excessive vibration.
In some examples, the system includes a drive and control system to receive instructions from the processing system to control the one or more actuators. In an example, the actuator 104 may include a piezoelectric actuator with a mechanical amplifier.
In some examples, measurement and compensation calculations may be performed in real-time during imaging operations. In an example, active vibration control may be implemented along with image data compensation.
In some examples, the processing system is located on a remote computing platform in communication with one or more of the testing system or the imaging device. In an example, the imaging device comprises two or more imaging devices, each imaging device operable to capture an image of the sample surface.
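A minimal sketch of the compare-calculate-command loop summarized above, assuming a single-axis accelerometer stream and an abstract actuator callback; the threshold, gain, and interfaces are illustrative assumptions, not the disclosed firmware.

```python
VIBRATION_LIMIT_G = 0.05  # hypothetical data threshold

def compensate(frames, accel_samples_g, actuator):
    """Compare sensor data to a threshold, compute a correction factor, and
    command the actuator; frames captured during violations are flagged so
    the image data can also be compensated in software."""
    out = []
    for frame, accel_g in zip(frames, accel_samples_g):
        if abs(accel_g) > VIBRATION_LIMIT_G:    # data threshold violated
            correction = -accel_g * 0.1         # illustrative gain, mm of travel
            actuator(correction)                # adjust imaging device position
            frame = {**frame, "flagged": True}  # mark image data for fix-up
        out.append(frame)
    return out

commands = []
frames = compensate([{"id": i} for i in range(3)], [0.01, 0.08, 0.02],
                    lambda c: commands.append(c))
print(frames, commands)
```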
Referring now to the drawings, FIG. 1 is an example extensometer system 10 for measuring changes in one or more characteristics of a specimen 16 undergoing mechanical performance testing. The exemplary extensometer system 10 may be connected to a test system 33, such as one capable of mechanically testing the specimen 16. The extensometer system 10 may measure and/or calculate changes in the test specimen 16 subjected to, for example, a compressive strength test, a tensile strength test, a shear strength test, a flexural strength test, a tear strength test, a peel strength test (e.g., strength of an adhesive), a torsional strength test, and/or any other compressive and/or tensile test. Additionally or alternatively, the extensometer system 10 may perform dynamic testing.
According to the disclosed examples, as further illustrated in fig. 4, the extensometer system 10 may include a testing system 33 for manipulating and testing the specimen 16, and/or a computing device or processing system 32 communicatively coupled to the testing system 33, light source, and/or imaging device. The test system 33 applies a load to the test specimen 16 and measures the mechanical properties of the test, such as displacement of the test specimen 16 and/or force applied to the test specimen 16.
Extensometer system 10 includes a remote and/or integral light source 14 (e.g., an array of LEDs) for illuminating a sample 16 and/or a reflective back screen 18. Extensometer system 10 includes a processing system 32 (see also FIG. 4) and a camera or imaging device 12. Although the example of fig. 1 shows a single camera 12, the disclosed examples are applicable to multiple-camera extensometer systems 10. In some examples, the light source 14 and the imaging device 12 are configured to transmit and receive in the Infrared (IR) wavelength range; however, other light emitting sources and/or wavelengths are equally suitable. In some examples, one or both of the light source 14 or the imaging device 12 includes one or more filters (e.g., polarizing filters) and/or one or more lenses. In some examples, a calibration routine (e.g., a two-dimensional calibration routine) is performed to identify one or more features of the specimen 16, including using one or more markers 20 (and/or a pattern of markers).
In the disclosed examples, the computing device 32 may be used to configure the test system 33, control the test system 33, and/or receive measurement data (e.g., transducer measurements such as force and displacement) and/or test results (e.g., peak force, fracture displacement, etc.) from the test system 33 for processing, display, reporting, and/or any other desired purpose. The extensometer system 10 is connected to the test system 33 and its software using a standard interface, such as Ethernet, analog, encoder, or SPI. This allows the device to be plugged into and used by existing systems without the need for specialized integrated software or hardware. The extensometer system 10 provides axial and lateral encoder or analog information to the material testing machine 33 in real time. The real-time video extensometer 10 and the material testing machine 33 exchange real-time test data, including extension/strain data, with an external computer 32, which may be connected via wired and/or wireless communication channels. The extensometer system 10 provides measurements and/or calculations of the extension/strain data captured from the specimen 16 being tested in the material testing machine 33, which in turn provides the stress and extension/strain data to the processor 32.
As disclosed herein, the captured images are input from the imaging device to the processor 32, where one or more algorithms and/or look-up tables are employed to calculate extension/strain values for the multiple axes of the specimen 16 (i.e., the change or percentage change in the calculated inter-target distance, determined by monitoring images of the markers 20 attached to the specimen 16). After the calculation, the data may be stored in memory or output to a network and/or one or more display devices, I/O devices, etc. (see also fig. 4).
Fig. 2 is an exemplary specimen 16 for use in making measurements in the extensometer system 10 of fig. 1. For example, one or more indicia 20 (e.g., reference features) are applied to a surface 28 facing the light source 14 and the imaging device 12. The clamping section 26 is configured to be placed within a fixture of the test system 33 (see also fig. 4) and apply a force to the test specimen 16. For example, the cross-member loader applies a force to the specimen 16 being tested while the clamp grips the specimen 16 or otherwise connects the specimen 16 to the testing system 33. An applicator, such as a motor, moves the crosshead relative to the frame to apply a force to the sample 16, as indicated by double arrow 34. The force 34 pulling the clamping sections 26 away from each other may extend the specimen 16, causing the marker to move from the first position 20A to the second position 20B. Additionally or alternatively, the indicia may change shape or size, which may also be measured by the processing system 32 from the captured image. Force 34 may also move the edge of the sample from first position 22A to second position 22B. For example, in a first or initial position, the edge has a width 24A that decreases to a width 24B when a force 34 is applied.
Based on the captured images, the processing system 32 is configured to implement an elongation/strain measurement process. For example, to detect elongation/strain on the specimen 16, the processing system 32 monitors images provided via the imaging device 12. When processing system 32 identifies a change in the relative position between two or more markers and/or edges of specimen 16 (e.g., as compared to an initial position at the beginning of the crosshead movement), processing system 32 measures the amount of change to calculate the amount of elongation and/or strain on specimen 16. As disclosed herein, the marker is configured to reflect light from the light source to the camera, while the back screen reflects light to generate a dark profile for edge analysis.
As disclosed herein, the video extensometer system 10 is configured to perform optical width measurements of a non-transparent specimen 16. The imaging device 12 is arranged to view a surface 28 of the sample 16 facing the imaging device 12, which surface 28 is close to the focal plane of the imaging device optics (see e.g. fig. 3).
As shown in fig. 3, the video extensometer system 10 is arranged to measure one or both of axial strain (based on variations in the markings 20 and/or the pattern of markings on the front surface 28 of the specimen 16) and lateral strain (calculated from variations in the width of the specimen 16). The various components of the video extensometer system 10 are shown in top view in fig. 3, showing the general location of each component relative to the other components. As shown, these components include an imaging device 12 (e.g., a camera), which imaging device 12 is configured to capture one or more images of a specimen 16 during a physical test (e.g., at regular intervals, continuously, and/or based on one or more thresholds associated with time, force, or other suitable test characteristics).
As shown, imaging device 12 and specimen 16 are disposed at a working distance or Z-axis distance 39, which may be static, predetermined, and/or variable during testing.
The specimen 16 is provided with suitable indicia or reference features 20 on a forward surface 28 (and/or an opposite surface) of the specimen 16. Analysis of one or more images associated with video extensometer system 10 is accomplished by processing system 32 to perform an identification algorithm to allow for continuous tracking and measurement of both specimen 16 markers 20 and specimen edges 22 during testing.
In the illustrated example, the imaging device 12 is a single vision camera having a single optical axis 50. In some examples, two or more imaging devices may be employed, which may be juxtaposed and/or disposed with different perspectives of the specimen 16. By employing a stereoscopic imaging arrangement, the characteristics of the specimen 16 may also be further calibrated and/or measured using measurement variables associated with the viewing angles and/or depths of field of the multiple dimensions of the specimen 16.
In some examples, the measured value and/or location of one or more edges is provided in pixel coordinates as captured by imaging device 12. Additionally or alternatively, the measured value and/or position of one or more edges is provided in other standard coordinate systems/units (e.g. meters). In such examples, a calibration process may be implemented to determine the absolute and/or relative arrangement and/or size of the specimens within the test system prior to measurement, and a similar coordinate system/unit may be used in the test process.
Fig. 4 is a block diagram of the exemplary extensometer system 10 of fig. 1. As shown in FIG. 1, extensometer system 10 includes a test system 33 and a computing device 32. The example computing device 32 may be a general purpose computer, a laptop computer, a tablet computer, a mobile device, a server, an all-in-one computer, and/or any other type of computing device. The computing device 32 of fig. 4 includes a processor 202, which processor 202 may be a general purpose Central Processing Unit (CPU). In some examples, processor 202 may include one or more special purpose processing units, such as an FPGA, RISC processor with ARM core, graphics processing unit, digital signal processor, and/or system on a chip (SoC). The processor 202 executes machine-readable instructions 204, which may be stored locally to the processor (e.g., in an included built-in cache or SoC), random access memory 206 (or other volatile memory), read-only memory 208 (or other non-volatile memory such as flash memory), and/or mass storage 210. The exemplary mass storage device 210 may be a hard disk drive, a solid state storage drive, a hybrid drive, a RAID array, and/or any other mass data storage device. Bus 212 enables communication among processor 202, RAM 206, ROM 208, mass storage 210, network interface 214, and/or input/output interface 216.
The example network interface 214 includes hardware, firmware, and/or software for connecting the computing device 32 to a communication network 218, such as the internet. For example, the network interface 214 may include IEEE 802.X compliant wireless and/or wired communication hardware for transmitting and/or receiving communication data.
The example I/O interface 216 of fig. 4 includes hardware, firmware, and/or software for connecting one or more input/output devices 220 to the processor 202 to provide input to the processor 202 and/or to provide output from the processor 202. For example, the I/O interface 216 may include a graphics processing unit for interfacing with a display device, a universal serial bus port for interfacing with one or more USB compatible devices, a firewire, a fieldbus, and/or any other type of interface. The example extensometer system 10 includes a display device 224 (e.g., an LCD screen) coupled to the I/O interface 216. Other example I/O devices 220 may include a keyboard, a keypad, a mouse, a trackball, a pointing device, a microphone, an audio speaker, a display device, an optical media drive, a multi-touch screen, a gesture recognition interface, a magnetic media drive, and/or any other type of input and/or output device.
Computing device 32 may access non-transitory machine-readable medium 222 through I/O interface 216 and/or I/O device 220. Examples of machine-readable media 222 of fig. 4 include compact discs (e.g., compact Discs (CDs), digital versatile/video discs (DVDs), blu-ray discs, etc.), magnetic media (e.g., floppy disks), portable storage media (e.g., portable flash drives, secure Digital (SD) cards, etc.), and/or any other type of removable and/or installable machine-readable media.
Extensometer system 10 also includes a test system 33 connected to computing device 32. In the example of fig. 4, test system 33 is coupled to the computing device through I/O interface 216, such as through a USB port, thunderbolt port, fireWire (IEEE 1394) port, and/or any other type of serial or parallel data port. In some examples, test system 33 is coupled to network interface 214 and/or I/O interface 216 directly via a wired or wireless connection (e.g., ethernet, wi-Fi, etc.) or via network 218.
The test system 33 includes a frame 228, a load cell 230, a displacement transducer 232, a cross member loader 234, a material fixture 236, and a control processor 238. The frame 228 provides rigid structural support for the other components of the test system 33 that perform the tests. The load cell 230 measures the force applied by the cross member loader 234 to the material to be tested via the clamp 248. The cross-member loader 234 applies a force to the material under test, and the material fixture 236 (also referred to as a clamp) grips or otherwise couples the material under test to the cross-member loader 234. The example cross-member loader 234 includes a motor 242 (or other actuator) and a crosshead 244. As used herein, a "crosshead" refers to a component of a material testing system that applies directional (axial) and/or rotational forces to a specimen. The material testing system may have one or more crossheads, and the crossheads may be located at any suitable position and/or orientation in the material testing system. The crosshead 244 couples the material fixture 236 to the frame 228, and the motor 242 moves the crosshead relative to the frame to position the material fixture 236 and/or apply a force to the material to be tested. Example actuators that may be used to provide force and/or movement of components of the extensometer system 10 include motors, pneumatic actuators, hydraulic actuators, piezoelectric actuators, relays, and/or switches.
While the example test system 33 uses a motor 242, such as a servo or direct drive linear motor, other systems may use different types of actuators. For example, hydraulic actuators, pneumatic actuators, and/or any other type of actuator may be used based on the requirements of the system.
Example fixtures 236 include a platen, jaw, or other type of fixture, depending on the mechanical properties tested and/or the materials tested. The fixtures 236 may be manually configured, controlled via manual inputs, and/or automatically controlled by the control processor 238. The crosshead 244 and the fixture 236 are operator accessible components.
Extensometer system 10 may further include one or more control panels 250 including one or more mode switches 252. The mode switch 252 may include buttons, switches, and/or other input devices located on an operator control panel. For example, the mode switch 252 may include a button that controls the motor 242 to slowly move (e.g., position) the crosshead 244 to a particular location on the frame 228, a switch (e.g., a foot switch) that controls the clamp actuator 246 to close or open the pneumatic clamp 248, and/or any other input device that controls the operation of the test system 33.
The example control processor 238 communicates with the computing device 32, for example, to receive test parameters from the computing device 32 and/or to report measurement results and/or other results to the computing device 32. For example, the control processor 238 may include one or more communication or I/O interfaces to enable communication with the computing device 32. The control processor 238 may control the cross-member loader 234 to increase or decrease the applied force, control the fixture 236 to grasp or release the material under test, and/or receive measurements from the displacement transducer 232, the load cell 230, and/or other transducers.
The example control processor 238 is configured to perform an elongation/strain measurement process when the specimen 16 is tested in the test system 33. For example, to detect elongation/strain on the specimen 16, the control processor 238 monitors images provided via the imaging device 12. When the control processor 238 identifies a change in the position and/or location of the edges 22 of the specimen 16 (e.g., as compared to an initial position at the beginning of movement of the crosshead 244), the control processor 238 measures the amount of change to calculate the amount of elongation and/or strain on the specimen 16. For example, the absolute positions of the edges 22, captured in real-time video provided by the imaging device 12, and their relative movement over the course of several images are monitored to calculate elongation/strain in real time. Stress data and strain data are exchanged between the real-time video extensometer 10, the testing system 33, and the processing system 32, and are typically consolidated and displayed via the display device 224.
Fluid delivery, vibration control, vibration compensation, saturation control
Fluid delivery
Some example systems operate in an environment that includes an imaging device or camera, illumination, a test platform, and a sample. The operation of the system may generate heat and/or receive heat from the environment, which may result in temperature differences within the test environment. For example, a thermal boundary layer may form in the air near the lens (and/or at one or more locations between the lens and the specimen) resulting in air having a varying density in the region directly in front of the lens (and/or at one or more locations between the lens and the specimen). This in turn increases the likelihood of measurement errors during imaging operations due to optical distortion effects (e.g., "phantom" errors, optical refraction, etc.).
In some disclosed examples, as shown in fig. A1, a directional gas outlet 64 (e.g., a fluid discharge, an air/gas nozzle, an air/gas knife, etc.) provides one or more gases/fluids from the fluid source 62 to displace, mitigate, condition, and/or remove air and the generated thermal boundary layer (hot and/or cold). In this way, the gas outlet 64 may mix the air in front of the lens with ambient air in the test area. Thus, the variation in air density (e.g., temperature difference) is reduced, which reduces the likelihood of phantom errors.
In some examples, the use of gas outlets 64 may increase the frequency associated with noise observed during imaging operations by moving and/or mixing air (at or near the lens, between the lens and the sample, etc.). The increased frequency allows for ease of filtering (e.g., by digital filters, software and/or hardware filters, etc.) to eliminate noise effects in the processed data (e.g., imaging measurements, etc.).
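For illustration, the sketch below applies such a digital filter, assuming the gas outlet has shifted the boundary-layer noise well above the strain signal band; the sample rate, frequencies, and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                   # measurement rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
strain = 0.001 * t                           # slow ramp: the signal of interest
shimmer = 5e-5 * np.sin(2 * np.pi * 60 * t)  # mixed-air noise, pushed to ~60 Hz
b, a = butter(4, 10.0, btype="low", fs=fs)   # 4th-order low-pass, 10 Hz cutoff
clean = filtfilt(b, a, strain + shimmer)     # zero-phase filtering of the data
```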
In some examples, the use of the gas outlet 64 serves to redistribute air at different temperatures, including by applying gas/air at a temperature within a threshold amount of ambient temperature. In addition, the application of gas/air may clean the lens of dust, particles, condensate and/or other objects (e.g., worms) that may be attached to the lens.
Active vibration control-A
Some example systems operate in an environment that includes cooling fans and external sources that may affect the operation of system components (e.g., cameras, lighting, test platforms, and specimens). The operation of the system may generate vibrations, which may lead to movement of the image sensor/lens relative to the sample, which in turn leads to noisy image data during testing. Vibration modes that do not produce a common-mode effect have the most significant impact on the test data.
In the disclosed example, as shown in fig. A2, an active vibration canceling unit 68 may be installed within the system to mitigate vibrations locally with respect to the image sensor 70, thereby reducing vibrations and/or induced noise on the image signal.
In some examples, vibrations about one or more linear and/or rotational axes may be measured and/or mitigated. For example, one or more sensors (e.g., for measuring acceleration, optical adjustment, etc.) may be employed, with a plurality of associated actuators for responding to vibrations in each monitored axis.
In alternative or additional examples, a tuned mass damper system (e.g., a passive system) may be used with (or in place of) an active vibration cancellation module.
Active vibration control-B
In an additional or alternative example, as shown in fig. A3, the PCB 66 with the image sensor 70 mounted thereon is itself mounted to another substrate 72 or housing wall. The active vibration canceling unit 68 may be mounted with the PCB 66, the PCB mounting fasteners 74, and/or between the PCB 66 and the PCB mounting fasteners 74. In accordance with the disclosed example, the active vibration canceling unit 68 may respond to vibrations (e.g., based on motion sensor feedback, etc.) to mitigate vibrations local to the image sensor 70, thereby reducing induced noise on the image signal.
Further, in some examples, the active vibration canceling unit 68 may operate concurrently with and/or be replaced by a tuned mass damper system. As used herein, a tuned mass damper system (e.g., a harmonic absorber or seismic damper) is a device or system that may be connected to or otherwise mounted to PCB 66 and/or substrate 72 and is used to reduce vibration. The tuned mass damper system may include a mass element mounted on a damper spring, the oscillation frequency of which is tuned during operation to be similar to the resonant frequency of the system.
Active vibration compensation
In an additional or alternative example, as shown in fig. A4, a motion sensor 76 (e.g., an inertial measurement unit, accelerometer, etc.) may be mounted with the PCB 66, such as disposed near the image sensor 70 (e.g., on a common surface, an opposing surface, etc., of the PCB 66). In some examples, the measurements from the motion sensor 76 are provided to a control circuit or other processor (e.g., the processing system 32, the processor 202, the control processor 238, etc.). The processing system 32 correlates the timing of the imaging process with the measurements from the motion sensor. For example, if the measurement from motion sensor 76 exceeds a filter threshold (e.g., based on physical motion, data/image set tolerance, etc.), processing system 32 may compensate for errors from excessive motion, such as by firmware, software, and/or hardware techniques.
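A minimal sketch of that correlation step, tagging frames whose nearest motion-sensor sample exceeds the filter threshold; the timestamps, threshold, and data layout are hypothetical.

```python
import bisect

def tag_frames(frame_times, imu_times, imu_accel_g, limit_g=0.02):
    """Associate each image timestamp with the first IMU sample at or after it
    and tag frames captured during excessive motion (threshold assumed)."""
    tagged = []
    for ft in frame_times:
        i = min(bisect.bisect_left(imu_times, ft), len(imu_times) - 1)
        tagged.append((ft, abs(imu_accel_g[i]) > limit_g))
    return tagged

# Frames at 0.00/0.01/0.02 s; a vibration spike occurs near t = 0.01 s
print(tag_frames([0.00, 0.01, 0.02],
                 [0.000, 0.005, 0.010, 0.015, 0.020],
                 [0.001, 0.003, 0.050, 0.004, 0.002]))
```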
Illumination compensation
Some example systems are configured to optimize illumination applied to the system to meet specific illumination criteria. For example, sufficient illumination at the specimen reduces noise and/or imaging errors during the imaging operation.
In some systems, the amount of illumination (e.g., intensity, saturation, etc.) is fixed, limited to manual adjustment of the light source, and/or cannot be adjusted during the imaging operation. These limiting factors can reduce image accuracy and precision under non-optimal illumination conditions, including, for example, image saturation.
To address these shortcomings, the disclosed system is configured to measure the image saturation of each specimen 16 prior to the start of the test operation, as shown in fig. A5. As the imaging operation proceeds, the image data is analyzed (e.g., by the image sensor 70 at the camera 12, at the processing system 32, etc.) and compared to one or more thresholds (e.g., light intensity at the image, etc.). In response to a threshold being violated, the intensity of the light source 14A may be automatically adjusted to provide a desired level of image saturation for the sample under test. In some examples, one or more light-sensitive sensors (e.g., at the sample 16, at the camera 12, etc.) are used to measure the light intensity, which the processing system 32 may use to determine the adjustment value of the light source.
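One possible control rule is sketched below, assuming 8-bit images and a normalized LED drive level; the target level, clipping limit, and step size are illustrative assumptions.

```python
import numpy as np

def adjust_intensity(image, drive, target_mean=0.5, clip_frac=0.01, step=0.05):
    """Dim the source if too many pixels clip (saturation threshold violated);
    brighten it if the image is under-exposed. Returns the new drive level."""
    img = np.asarray(image, dtype=float) / 255.0   # 8-bit image assumed
    if np.mean(img >= 0.98) > clip_frac:           # clipped-pixel fraction
        return max(0.0, drive - step)
    if img.mean() < target_mean:
        return min(1.0, drive + step)
    return drive
```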
Z-axis motion
In some video extensometer systems, the optical axis from the camera to the sample under test is referred to as the "Z-axis," and the sample is imaged in the X-Y plane. When using an entocentric (non-telecentric) lens, changes in the distance between the camera and the specimen will change the imaged dimensions of the specimen. In the case of a video extensometer that measures displacement between multiple reference features (e.g., two, four, etc.), a change in the Z-axis position of the sample will result in a change in the imaged dimension between the reference points that is independent of the test operation, e.g., resulting in an error in the strain measurement. However, the disclosed examples provide various methods and systems to reduce and/or eliminate errors associated with Z-axis motion.
Entocentric lens
In some example systems, the video extensometer system employs one or more conventional (entocentric) optical lenses having an angular field of view. Thus, imaging may be affected by parallax errors, which increase or decrease the magnification of the measured specimen as the object moves toward or away from the lens. In the disclosed example, a video extensometer system employing one or more telecentric lenses mitigates this error by having a non-angular and constant field of view.
In some disclosed examples, the video extensometer system employs two or more cameras, at least one of which is equipped with a telecentric lens (e.g., 50-90 mm, such as on camera 12B of fig. C6) having a relatively small field of view. Additional cameras (e.g., camera 12 as shown) employ an entocentric lens with a different field of view, which may be greater than the field of view associated with the telecentric lens.
Errors caused by Z-axis motion of the specimen during the imaging operation are typically more pronounced in the initial stages of the testing process (as compared to the later stages). In the disclosed system, the first camera, using a telecentric lens, will not experience changes in image size (e.g., for one or more reference features) caused by Z-axis motion. As the strain applied to the specimen increases (e.g., exceeds a threshold strain amount, specimen deformation, elapsed time, and/or a responsive command), the system transitions to analysis using measurements taken with the remaining entocentric-lens cameras.
Projecting a pattern
In some examples, the video extensometer system may measure Z-axis motion by analyzing changes associated with a feature of the sample that is independent of deformation of the sample. For example, an image or other feature may be projected onto the surface of the sample under test. For example, the laser 78 and/or other types of projectors may project features (e.g., points, lines, patterns, etc.) as provided in fig. B1. The lens 15 and the image sensor 70 may measure the Z-axis motion by measuring the change and/or displacement of the projected features, for example by using a known angle α between the projected light and the sample surface.
For example, the system 10 may use the sensor 70 to measure one or more characteristics of the test system 10 arrangement. For example, the sensor 70 may employ one or more techniques (e.g., infrared (IR) light, light Emitting Diode (LED) output, ultrasonic sensors, structured light imaging, time of flight calculations, laser sensors, etc.) to sense a measured distance between the imaging device 12 and the specimen 16 along the Z-axis. The results may be transmitted from the sensor 70 to a processor circuit or computing device (e.g., to the processing system 32 via an interface) for analysis. The processing circuitry may then generate and apply correction factors based on the differences in the Z-axis.
Thus, the camera 12, image sensor 70, and processing system 32 may be used to both track the reference features of the specimen and process the measurements of the projected features/patterns. The change in projection characteristics may be calculated at processing system 32 to determine the amount of Z-axis motion. Error correction values may then be calculated and/or determined (e.g., by referencing a list of corresponding Z-axis changes of the compensation factors, etc.) and applied to the measured values associated with the corresponding changes of the reference features to improve test results.
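Under the geometry described above, a known angle α between the projected light and the sample surface maps a lateral spot shift Δx to Z-axis motion as ΔZ = Δx·tan(α). A minimal sketch (values hypothetical):

```python
import math

def z_motion_from_spot_shift(dx_mm, alpha_deg):
    """Triangulation sketch: a fixed beam striking the surface at angle alpha
    (measured from the surface) intersects a surface displaced by dZ at a
    point shifted laterally by dx = dZ / tan(alpha), so dZ = dx * tan(alpha)."""
    return dx_mm * math.tan(math.radians(alpha_deg))

# A 0.5 mm lateral spot shift at a 45 degree projection angle -> 0.5 mm Z motion
print(z_motion_from_spot_shift(0.5, 45.0))
```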
Structured light projector
In one example, a projection method (e.g., a digital light processing projector) may be used to project a predetermined pattern onto the surface of the specimen. During the imaging operation, Z-axis movement of the sample will cause distortion of the projected pattern (e.g., a grid, a set of parallel lines, etc.). Thus, imaging and measuring geometric deformations of the pattern can provide data related to Z-axis motion. Advantageously, the deformation may also provide information associated with other deformations during the test operation, such as bending of the sample.
Laser triangulation sensor
In some disclosed examples, as shown in fig. B2, a laser triangulation sensor 88 is used to detect Z-axis movement of the sample 16. For example, the laser light source 82 may generate laser light 87, the laser light 87 being directed toward the sample 16 via one or more lenses 84. The reflected laser light 87A is received at a light receiving element 88 (e.g., a photosensor) configured to generate a signal proportional to the magnitude of the Z-axis movement. The signal from the receiving element 88 is analyzed at the processing system 32 to calculate a compensation factor to correct for the movement.
For example, one or more characteristics of the received light (e.g., phase, intensity, frequency, etc.) may be associated with the change in the Z-axis.
Reference object
In some example systems, as shown in fig. B3, the reference scale object 90 may be positioned with the specimen 16 and/or near the specimen 16. Upon testing the specimen 16, the absolute and/or relative position of the reference feature 14 changes while the reference scale object 90 and associated reference feature 14A remain stationary. The image sensor 70 may capture data about the specimen 16 and/or the reference scale object 90 during an imaging operation.
For example, any change in the absolute and/or relative dimensions of the reference scale object 90 is caused by the Z-axis movement, which can be calculated and corrected. Further, if a Z-axis change is detected in the sample 16 and not in the reference scale object 90, a relative change between the two objects may indicate a Z-axis change of the sample 16, which may be calculated via the processing system 32.
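One way to apply such a correction, assuming a pinhole model and a reference object of fixed physical width, is to rescale the calibration factor by the ratio of the imaged reference widths (values hypothetical):

```python
def z_corrected_scale(w_ref_px_cal, w_ref_px_now, scale_cal_mm_per_px):
    """The reference scale object has a fixed physical size, so any change in
    its imaged width reflects Z-axis motion; rescaling the calibration factor
    by the width ratio corrects specimen measurements in the same frame."""
    return scale_cal_mm_per_px * (w_ref_px_cal / w_ref_px_now)

# Calibrated at 0.050 mm/px with the reference imaged 200 px wide; it now
# appears 202 px wide (camera effectively closer), so ~0.0495 mm/px applies.
print(z_corrected_scale(200.0, 202.0, 0.050))
```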
Multiple cameras
As tools expected to provide high-precision results, video extensometer systems face multiple challenges. Typically, strain calculation during extension testing assumes that the measurements made during testing are accurate measurements of the (typically two-dimensional) changes in the specimen. These dimensions represent how the sample shape varies in the axial dimension (X-axis) and in the transverse dimension (Y-axis). In video extensometry, a camera positioned on and focused at the sample surface provides two-dimensional measurements by directly observing the sample's changes in the X-axis and Y-axis.
However, in an actual test environment, the Z-axis may also change (e.g., the distance from the camera to the specimen), which may affect the accuracy of the X and Y measurements.
For example, as shown in FIG. C7, when the specimen 16 is installed before the test begins, the specimen may be located at a different Z distance 39B from the camera 12 than the Z distance 39A at which the camera 12 was last calibrated. As shown in fig. C8, when the specimen is installed before the start of the test, the specimen may be installed not perfectly vertical, such that the upper portion 97A of the specimen and the lower portion 97B thereof are separated from the camera by different Z distances 39C, 39D, respectively. At the beginning of the test, the sample itself may not have a consistent Z distance/shape across the sample surface. For example, the middle portion may bend due to the way the sample was formed or physical stresses generated when the test was set up. As a tracked reference feature moves in the Z-axis relative to the calibration plane, a perspective error (e.g., phantom strain) may be created. Such perspective errors may have a dominant effect on the real data, especially in small-strain regions (e.g., elastic regions of relatively rigid material), resulting in inaccurate measurements.
During testing, one or more factors, such as physical conditions, the response of the sample to the test conditions, and the motion of the test element (e.g., interaction with a clamp holding the sample), may cause some or all of the sample to change in the Z-axis.
In some examples, there is a trade-off between the accuracy and the field of view of a video extensometer system. Cameras used for video extension measurement have a fixed base image resolution. This resolution in turn affects the accuracy of the two-dimensional measurements calculated from the image. Lenses used in the system map this fixed base image resolution onto the field of view of the test space, so covering a larger field of view comes at the cost of reduced accuracy.
As the test progresses, some samples may change sufficiently (e.g., become larger in one or more dimensions) during the test so that the required field of view at the end of the test to cover the entire sample is much greater than the field of view at the beginning of the test. This is generally the case for so-called "high elongation" materials, but is also applicable for a range of materials.
Some material tests require more accuracy at the beginning of the test than at the end of the test. However, a video extensometer system required to cover the larger field of view of the entire test is limited in available measurement accuracy, compared to the accuracy achievable when focusing on a smaller field of view at the beginning of the test.
To address these and other sources of error, different system configurations for measuring strain are disclosed, such as using multiple cameras.
In some examples, the multi-camera video extensometer system 10 employs one or more front cameras and one or more side cameras. In the example of fig. C1, the front camera 12 is used to measure changes in the reference features 20 as disclosed herein. The side camera 12A is added to track changes in the Z-axis distance 39 between the camera 12 and the specimen 16.
In this way, the side camera 12A may track movement of the specimen 16 closer to or farther from the front camera 12. The specimen 16 may be tilted during the testing operation such that the first or top and second or bottom portions of the specimen have different Z distances from the front camera 12 (e.g., constant or varying angles during testing), but maintain a linear relationship between the first and second portions.
In some examples, during testing, the shape of the specimen 16 may change on the Z-axis 39 such that the specimen shape may bend inward or outward during testing, and/or may initially bend but the shape straightens during testing.
In some examples, the side measurements are used, by way of non-limiting example, to: confirm and/or reject the accuracy of the operation before and/or after testing; provide interactive information to an operator to guide installation while loading the sample; correct (or compensate) the position data generated by the front camera after the test is finished; and/or correct (or compensate) the position data generated by the front camera in real time during a test operation. The side camera may also be configured with a front or back illumination mechanism for direct or contour image capture. For example, measurements from the front and side cameras 12, 12A may be provided to the processing circuitry 32 to measure the strain of the specimen 16.
In some examples, as shown in example fig. C2, the video extensional measurement system 10 employs two or more front-side cameras 12, 12B having similar imaging and/or sensing capabilities, each mounted at an angle slightly offset from each other, but with the same field of view (e.g., the field of view of the specimen 16). Although fig. C2 illustrates the use of two cameras, additional cameras (e.g., 3, 4, or more) may be included. For example, each camera may be arranged at a different distance, at a different angle, and/or with different optical characteristics (e.g., focus, magnification, optical power, etc.) relative to the other camera and/or the specimen.
In this example, the multiple cameras can collect stereoscopic images for offline three-dimensional (3D) Digital Image Correlation (DIC) image analysis. For example, a first camera may be used to image reference feature locations in the X-Y plane, while images from a second camera (and/or additional cameras) may be used to establish Z-axis movement as a cross-check. The cross-check may provide feedback to a processing circuit (e.g., processing system 32) and/or an operator, such as for correcting and/or compensating for variations in the reference feature positions in the X-Y plane.
In some examples, two or more cameras are used simultaneously to collect 3D stereoscopic images for real-time 3D reference feature tracking. For example, the views from each camera may be used to cross-check the camera calibration and to provide a notification that either confirms the calibration or indicates that recalibration is required. As illustrated in example fig. C3, the overlap between the views 95A and 95B (corresponding to the upper portion 97A and the lower portion 97B, respectively) may additionally provide data for algorithmic filtering of noise, thereby improving imaging accuracy.
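One simple form of such algorithmic filtering is to fuse the two independent measurements of a feature seen in the overlap region. The inverse-variance weighting below is an assumption made for illustration; the disclosure does not prescribe a particular filter:

    import numpy as np

    def fuse_overlap(xy_cam_a, xy_cam_b, var_a=1.0, var_b=1.0):
        """Fuse two measurements of the same reference feature from the
        overlapping region of views 95A and 95B. Inverse-variance
        weighting attenuates independent per-camera sensor noise."""
        w_a, w_b = 1.0 / var_a, 1.0 / var_b
        a = np.asarray(xy_cam_a, dtype=float)
        b = np.asarray(xy_cam_b, dtype=float)
        return (a * w_a + b * w_b) / (w_a + w_b)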
In some examples, as shown in fig. C6, two front cameras 12, 12B are employed, the first camera 12 having a wide field of view 95C and the second camera 12B having a narrower field of view 95D contained within the field of view of the first camera.
The first, wide-field camera is configured to cover the entire range of specimen motion during a test operation, but at a lower resolution than the second camera. The second, narrower-field camera thus provides a relatively high-resolution view of the gauge-length portion of the sample at the beginning of the test, although a portion of the sample may move out of this camera's view during the test.
Combining the image data from the first and second cameras can optimize image capture as the specimen changes, allowing the measurement to maintain image resolution and displacement accuracy throughout the test, especially for high elongation specimens.
In addition, comparing the overlapping portions of the two fields of view provides a Z-axis correction. For example, the overlapping portions and the associated Z-axis correction provide high resolution imaging of portions of the Z-axis motion during initial sample loading and testing.
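As an illustration, one simple way to combine the two data streams is to prefer the high-resolution narrow view whenever the tracked feature is still inside it, mapping its pixels into the wide view's coordinate frame. The calibration constants below are hypothetical:

    # Hypothetical calibration: placement and scale of the narrow view 95D
    # inside the wide view 95C.
    NARROW_ORIGIN_IN_WIDE = (512.0, 384.0)  # wide-view pixels
    NARROW_TO_WIDE_SCALE = 0.25             # narrow pixels are 4x finer

    def select_measurement(wide_xy, narrow_xy):
        """Report the feature position in wide-view pixel units, preferring
        the narrow (high-resolution) camera while it still sees the feature
        (narrow_xy is None once the feature has left view 95D)."""
        if narrow_xy is not None:
            ox, oy = NARROW_ORIGIN_IN_WIDE
            return (ox + narrow_xy[0] * NARROW_TO_WIDE_SCALE,
                    oy + narrow_xy[1] * NARROW_TO_WIDE_SCALE)
        return wide_xy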
In some examples, as generally shown in the example of fig. C3, two or more front cameras 12, 12B are employed, wherein a first field of view 95A of the first camera 12 covers a first or upper portion 97A of the test space (or specimen) that is slightly greater than 50%, and a second camera 12B has a second field of view 95B that covers a second or lower portion 97B of the test space (or specimen) that is slightly greater than 50%, with a degree of overlap between the first and second fields of view.
In this example, the use of the first and second fields of view almost doubles the distance over which a high elongation specimen can be measured.
Furthermore, as described herein, the overlapping portion of the two views between the two cameras provides a certain amount of stereo vision, facilitating Z-axis measurements and thus correction. The overlapping portions provide information about the Z-axis motion, which allows measurement and/or correction during loading and/or testing of the specimen.
In some examples, specimens with even larger elongation displacement may be covered by expanding the system to a greater number of cameras (e.g., 3, 4, or more cameras), with a degree of overlap between each pair of adjacent views.
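To report feature positions from both (or all) cameras on a single vertical axis, the views can be registered using a feature visible in the overlap. A minimal sketch follows, assuming the offset is estimated from one shared feature (in practice it would likely be averaged over many):

    def stitch_offset(y_in_upper_px, y_in_lower_px):
        """Vertical offset between the upper view 95A and the lower view
        95B, estimated from a reference feature visible in the overlap."""
        return y_in_upper_px - y_in_lower_px

    def to_upper_frame(y_lower_px, offset_px):
        """Map a lower-view vertical coordinate into the upper view's frame."""
        return y_lower_px + offset_px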
In some disclosed examples, as shown in the example of fig. C2, multiple (e.g., two or more) front cameras are employed, wherein one or more of the cameras is configured to adjust its field of view. The adjustment may be automatic and/or commanded by an operator (e.g., in response to sensor measurements) and may be performed in real time, including during calibration steps and/or during testing.
As a non-limiting example list, the adjustment of the view may include changing one or more of camera magnification, position and/or orientation (e.g., vertical and/or horizontal position) of the camera or lens.
In some examples, each camera may be focused to track a single reference feature (e.g., a marker, a point, etc.). Each reference feature will be imaged at a high resolution (e.g., the maximum resolution provided by a camera or associated optics) to track the moving and/or changing position of the reference feature with high accuracy during testing.
Depending on the sample type and/or material testing requirements, the system may use a single camera configured for (e.g., optical and/or physical) adjustment, while the other camera is fixed (e.g., fixed magnification, position, and/or orientation). In some examples, each camera is configured for adjustment, as disclosed herein. Some example systems may use a single camera configured for adjustment (e.g., magnification, position, and/or orientation) without using a second camera.
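One illustrative adjustment policy is a simple re-centering loop that commands the adjustable camera to follow its assigned reference feature; the deadband and sign convention below are assumptions:

    def recenter_command(feature_y_px, frame_height_px, deadband_px=20.0):
        """Vertical travel command (in pixels of apparent motion) that keeps
        a tracked reference feature near the center of an adjustable
        camera's view; zero inside the deadband to avoid hunting.
        Sign convention (assumed): a positive command shifts the view so
        the feature moves toward the frame center."""
        error = feature_y_px - frame_height_px / 2.0
        return 0.0 if abs(error) < deadband_px else -error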
In some examples employing two or more front cameras, a first camera has a first view covering a first or upper portion of the test space (or specimen) and a second camera has a second view covering a second or lower portion of the test space (or specimen), without any overlap between the first and second views.
In some examples, each of the first and second cameras will focus on a unique first or second reference feature 21A, 21B, respectively. For example, the first reference feature 21A is located at the top portion 97A and the second feature 21B is located at the bottom portion 97B.
Each reference feature may have particular characteristics (e.g., size, shape, position, location, etc.) relative to the other reference features at the beginning of the calibration phase and/or the testing process (within the constraints of the test space and the calibration procedure). This provides high resolution and accuracy for samples having a relatively large initial gauge length. The measurements from each camera may then be provided to a processor (e.g., processing system 32) for analysis according to a predetermined relationship between the cameras (e.g., their arrangement in the test environment).
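For example, the gauge length may be recovered from the two per-camera feature positions plus the known camera geometry. The sketch below assumes y increases downward in both views, that both cameras share one pixel calibration, and that the lower view's origin sits a known distance below the upper view's origin; all constants are hypothetical:

    CAMERA_SEPARATION_MM = 120.0  # hypothetical offset between view origins
    MM_PER_PIXEL = 0.01           # hypothetical shared pixel calibration

    def gauge_length_mm(y_top_px, y_bottom_px):
        """Distance between reference features 21A (upper view) and 21B
        (lower view), combining pixel positions with camera geometry."""
        return (CAMERA_SEPARATION_MM
                + y_bottom_px * MM_PER_PIXEL
                - y_top_px * MM_PER_PIXEL)

    def engineering_strain(length_mm, initial_length_mm):
        """Engineering strain: (L - L0) / L0."""
        return (length_mm - initial_length_mm) / initial_length_mm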
In some examples, multiple views may be captured by a single camera to provide advantages similar to and/or different from those of multi-camera systems, without requiring multiple sets of lenses, cameras, and/or image sensors.
In an example, as shown in fig. C4, one or more external mirrors 92 are disposed around the specimen 16 and/or the camera 12. The one or more mirrors 92 are arranged such that a single camera 12 may view both the specimen 16 and the mirrors 92 without moving and/or adjusting the focus or position of the camera 12. Thus, the specimen 16 and/or the mirrors 92 may be observed simultaneously (and/or periodically, alternately, and/or in response to a command).
In some examples, as shown in fig. C5, multiple mirrors 92A and 92B may be used to view the specimen 16 from different angles (e.g., a side of the specimen, opposite sides of the specimen, etc.). The mirrors 92 may be designed to provide the same magnification as, or a different magnification from, the camera 12's direct view of the specimen 16.
In some examples, one or more internal optics 93 (e.g., prisms, mirrors, diffraction gratings, filters, etc.) may be used (e.g., within the camera 12, the test system 33, etc.) to manipulate the received light. Light from multiple lenses or other optics (e.g., mirrors) may be directed onto different portions of a single image sensor 70, rather than using a dedicated image sensor for each lens. In some examples, light received from a single lens 15 may be replicated (e.g., split, reflected, etc.) onto multiple image sensors to reduce noise.
In some examples, light from a single lens 15 or camera 12 may be split and redirected onto different portions of the same image sensor and/or onto different image sensors based on one or more characteristics of the light (e.g., phase, frequency, etc.) to reduce noise and/or provide Z-axis information. For example, when a plurality of lenses is employed, a different frequency filter may be used for each lens. This information can then be combined with other optical data, such as the spatial frequency separation provided by a prism.
In some examples, a color image sensor may be used in addition to or in place of a monochrome sensor. For example, replacing a monochrome sensor with a color image sensor allows for imaging and processing of a sample using multiple frequencies of light simultaneously. Advantageously, the use of a color image sensor instead of a monochrome sensor simplifies the calibration process, reduces costs, and simplifies the reference feature tracking algorithm. Noise mitigation may be further improved and/or Z-axis information provided using a sensor configured to receive light at multiple frequencies.
In systems using multiple lenses and/or cameras, pairing different frequency filters to each lens can be combined with the color separation provided by the image sensor to further improve image acquisition and accuracy.
In one example, the internal optics include a Liquid Crystal Display (LCD) configured to partially block a portion of the lens and/or a particular frequency of received light. This technique enables 3D image processing using a single lens and/or image sensor.
Noise reduction
In some disclosed example systems, system operation may generate significant heat, which may degrade performance and/or image quality of one or more components. For example, some system components, such as circuitry, image sensors, etc., may be negatively affected when exposed to heat.
In the disclosed example, as shown in fig. D1, a cooling element 96 may be arranged to cool one or more of a printed circuit board (PCB) 94 and/or the image sensor 70. For example, the cooling element 96 may be an active cooler, such as a thermoelectric (e.g., Peltier) cooler, attached in conductive communication to the PCB 94 on which the image sensor 70 and/or circuitry associated with the image sensor 70 are mounted. In some examples, the cooling element 96 is additionally or alternatively connected to a heat sink.
The temperature sensor 98 may be configured to measure the temperature at the image sensor 70 and/or the circuit. The temperature controller circuit 100 and/or the processing system 32 may use the measurements to control one or more systems (e.g., the cooling element 96) to regulate component cooling.
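For illustration, a minimal sketch of one possible regulation loop follows. The disclosure does not specify a control law; the proportional control, setpoint, and gain below are assumptions:

    SETPOINT_C = 25.0  # hypothetical target near ambient temperature
    KP = 0.5           # hypothetical proportional gain

    def cooler_drive(sensor_temp_c):
        """Proportional drive level (0..1) for cooling element 96, based
        on the reading from temperature sensor 98 at image sensor 70."""
        error = sensor_temp_c - SETPOINT_C
        return max(0.0, min(1.0, KP * error))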
Compared to cooling systems that use mechanical blowers, the use of the cooling element 96 helps reduce system vibration. In addition, the cooling effect on the image sensor 70 serves to bring the lens temperature to or near ambient temperature, thereby reducing ghosting effects that may occur in front of the lens. Furthermore, cooling the image sensor 70 reduces dark current and/or the associated background noise, for example, when the sensor is not in use.
In some disclosed examples, system operation may generate significant vibrations, which may degrade performance and/or image quality of one or more components. To mitigate vibration at the image sensor 70, as shown in fig. D2, one or more motion sensors 102 (e.g., one or more of an accelerometer, an inertial measurement unit, a vibration sensor, a tilt sensor, etc.) may be arranged to monitor and measure vibration at and/or near the image sensor 70.
In some examples, the measured data may be provided to the drive and control system 106 and/or the processing system 32 to calculate the compensation factor. For example, the image data may be associated with vibration measurements that violate one or more thresholds. At the relevant data points, a compensation factor may be applied to the image data to correct for excessive vibration (e.g., exceeding a threshold vibration value).
In additional or alternative examples, the drive and control system 106 and/or the processing system 32 may analyze measurements from one or more motion sensors 102. Based on the analysis, the drive and control system 106 and/or the processing system 32 may generate one or more control signals to command adjustments to the position or orientation of the image sensor 70 via the one or more actuators 104. For example, the actuator 104 may comprise a piezoelectric actuator with a mechanical amplifier. In some examples, measurement and compensation calculations may be performed in real-time during imaging operations. In some examples, active vibration control may be implemented with image data compensation.
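A minimal sketch of the image-data compensation path follows, flagging readings whose concurrent vibration measurement violates a threshold and applying a simple translational correction. The threshold, the per-g sensitivity, and the assumption of purely vertical shake are all illustrative; real compensation may be considerably more involved:

    VIBRATION_THRESHOLD_G = 0.05  # hypothetical threshold vibration value

    def compensate_positions(positions_xy, accel_g, px_per_g=12.0):
        """Associate each feature position with its concurrent accelerometer
        reading and, where the reading violates the threshold, apply a
        compensation factor (assumed: a vertical shift of px_per_g pixels
        of apparent motion per g of measured vibration)."""
        corrected = []
        for (x, y), a in zip(positions_xy, accel_g):
            if abs(a) > VIBRATION_THRESHOLD_G:
                y -= a * px_per_g
            corrected.append((x, y))
        return corrected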
Advantageously, the actuator 104 is used to actively mitigate the effect of external oscillations on the image sensor 70, as oscillation of the image sensor 70 may introduce noise.
The present methods and systems may be implemented in hardware, software, and/or a combination of hardware and software. The present methods and/or systems may be implemented in a centralized fashion in at least one computing system, or in a distributed fashion where different elements are spread across several interconnected computing systems. Any kind of computing system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software could include a general purpose computing system with a program or other code that, when being loaded and executed, controls the computing system such that it carries out the methods described herein. Another exemplary embodiment may include an application specific integrated circuit or chip. Some implementations may include a non-transitory machine-readable (e.g., computer-readable) medium (e.g., a flash drive, an optical disk, a magnetic storage disk, etc.) having one or more lines of code stored thereon that are executable by a machine to cause the machine to perform a process described herein. As used herein, the term "non-transitory machine-readable medium" is defined to include all types of machine-readable storage media and to exclude propagating signals.
As used herein, the terms "circuit" and "circuitry" refer to physical electronics (i.e., hardware) and any software and/or firmware ("code") that may be configured by, executed by, or otherwise associated with the hardware. As used herein, for example, a particular processor and memory may constitute a first "circuit" when executing a first one or more lines of code and a second "circuit" when executing a second one or more lines of code. As used herein, "and/or" refers to any one or more of the items in a list joined by "and/or". For example, "x and/or y" means any element of the three-element set {(x), (y), (x, y)}. In other words, "x and/or y" means "one or both of x and y". As another example, "x, y, and/or z" means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. In other words, "x, y, and/or z" means "one or more of x, y, and z". As used herein, the term "exemplary" means serving as a non-limiting example, instance, or illustration. As used herein, the terms "such as" and "for example" introduce lists of one or more non-limiting examples, instances, or illustrations. As used herein, circuitry is "operable" to perform a function whenever the circuitry comprises the hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled or not enabled (e.g., by user-configurable settings, a factory trim, etc.).
While the present method and/or system has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present method and/or system. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the scope thereof. For example, the systems, blocks, and/or other components of the disclosed examples may be combined, partitioned, rearranged, and/or otherwise modified. Therefore, the present methods and/or systems are not limited to the specific embodiments disclosed. On the contrary, the present method and/or system is intended to cover all implementations that fall within the scope of the appended claims, either literally or under the doctrine of equivalents.

Claims (20)

1. A system for correcting errors in a test procedure in a video extensometer system, the system comprising:
a test system for securing a sample;
an imaging device arranged to capture an image of a surface of the sample;
one or more sensors configured to measure one or more parameters associated with the sample; and
a processing system for:
receiving image data from the imaging device;
receiving sensor data from the one or more sensors;
comparing the image data or the sensor data to one or more data thresholds;
calculating a correction factor based in part on the image data and the sensor data in response to the image data or the sensor data violating the one or more data thresholds; and
commanding an adjustment of the imaging device or a system component based at least in part on the correction factor.
2. The system of claim 1, wherein the system component is an active cooler, an actuator, or an imaging device positioning system.
3. The system of claim 1, wherein the imaging device is a single vision camera.
4. The system of claim 1, wherein the imaging device is two or more cameras.
5. The system of claim 1, wherein the sensor is an accelerometer, an inertial measurement unit, a temperature sensor, an infrared sensor, a light emitting diode sensor, an ultrasonic sensor, or a laser sensor.
6. The system of claim 1, wherein the one or more parameters include one or more of a shape or position of a marker, an edge position of the sample, or a width of the sample.
7. The system of claim 1, wherein the correction factor is in units of one of millimeters, inches, or pixel units.
8. The system of claim 1, wherein the processing system is located on a remote computing platform that communicates with one or more of the test system or the imaging device.
9. The system of claim 1, wherein the processing system is integrated with one of the imaging device or the testing system.
10. A system for correcting errors in a test procedure in a video extensometer system, the system comprising:
a test system for securing a sample;
an imaging device arranged to capture an image of a surface of the sample;
one or more motion sensors configured to measure one or more motion parameters associated with the video extensometer system;
one or more actuators for adjusting a position or orientation of the imaging device; and
a processing system for:
receiving image data from the imaging device;
receiving sensor data corresponding to vibration measurements from the one or more motion sensors;
comparing the image data or the sensor data to one or more data thresholds;
responsive to the sensor data violating the one or more data thresholds, calculating a correction factor based in part on the image data and the sensor data; and
commanding the one or more actuators to adjust a position or orientation of the imaging device based at least in part on the correction factor.
11. The system of claim 10, wherein the one or more motion sensors comprise an accelerometer, an inertial measurement unit, a vibration sensor, or a tilt sensor.
12. The system of claim 10, wherein the one or more motion sensors are disposed proximate to one or more image sensors of the imaging device to monitor and measure vibration at the one or more image sensors.
13. The system of claim 10, wherein the one or more data thresholds correspond to one or more data threshold vibration values, the processing system further operable to associate image data with vibration measurements that violate the one or more threshold vibration values.
14. The system of claim 13, wherein the processing system is further operable to apply a compensation factor to the image data at the data points associated with vibration measurements exceeding the one or more threshold vibration values, to correct for excessive vibration.
15. The system of claim 10, further comprising a drive and control system to receive instructions from the processing system to control the one or more actuators.
16. The system of claim 10, wherein the one or more actuators comprise a piezoelectric actuator having a mechanical amplifier.
17. The system of claim 10, wherein measurement and compensation calculations are performed in real time during an imaging operation.
18. The system of claim 10, wherein active vibration control is implemented together with image data compensation.
19. The system of claim 10, wherein the processing system is located on a remote computing platform in communication with one or more of the test system or the imaging device.
20. The system of claim 10, wherein the imaging device comprises two or more imaging devices, each operable to capture an image of the surface of the sample.
CN202280048408.9A 2021-05-27 2022-05-10 System and method for error correction of video extensometer Pending CN117616247A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/193,984 2021-05-27
US17/739,243 2022-05-09
US17/739,243 US20220381660A1 (en) 2021-05-27 2022-05-09 Systems and methods for error correction for video extensometers
PCT/US2022/028440 WO2022250947A1 (en) 2021-05-27 2022-05-10 Systems and methods for error correction for video extensometers

Publications (1)

Publication Number Publication Date
CN117616247A true CN117616247A (en) 2024-02-27

Family

ID=89944762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280048408.9A Pending CN117616247A (en) 2021-05-27 2022-05-10 System and method for error correction of video extensometer

Country Status (1)

Country Link
CN (1) CN117616247A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination