WO2023123003A1 - Machine vision detection method, detection device and detection system thereof - Google Patents
- Publication number
- WO2023123003A1 (PCT/CN2021/142250)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G01B5/0037: Measuring of dimensions of welds
- G01B11/14: Measuring distance or clearance between spaced objects or spaced apertures, using optical techniques
- G01B11/03: Measuring length, width or thickness by measuring coordinates of points, using optical techniques
- G01B11/2513: Measuring contours or curvatures by projecting a pattern with several lines projected in more than one direction, e.g. grids
- G01B11/2518: Projection by scanning of the object
- G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T7/0004: Industrial image inspection
- G06T7/10: Segmentation; Edge detection
- G06T7/60: Analysis of geometric attributes
- G06T2207/10028: Range image; Depth image; 3D point clouds
Definitions
- the present application relates to the field of machine vision, in particular to a machine vision detection method, a detection device and a detection system thereof.
- Visual inspection is mainly based on naked-eye observation, combined with auxiliary tools such as a magnifying glass, measuring tools and templates, to comprehensively inspect the weld surface quality and visible dimensions.
- the present application provides a machine vision detection method, a detection device and a detection system thereof, which can improve the efficiency and accuracy of machine vision detection.
- the present application provides a machine vision inspection method.
- the method includes: receiving a three-dimensional image from a line laser, the three-dimensional image including at least a portion of a boundary of a first component, at least a portion of a boundary of a second component, and at least one welding spot on the boundaries of the first component and the second component; converting the three-dimensional image into a two-dimensional grayscale image; obtaining the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image; determining N perpendicular lines between the boundary of the first component and the boundary of the second component; and calculating the average of the lengths of the N perpendicular lines as the gap between the first component and the second component, where N is a positive integer.
- the three-dimensional image collected by the line laser can support continuous sampling of the detection component, and the camera does not need to be calibrated in advance.
- the gap between two parts can be calculated more accurately, which improves the detection accuracy.
- the above-mentioned step of obtaining the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image specifically includes: in the boundary region containing the boundary of the first component, setting N first fitting units that equally divide the boundary region; sequentially connecting the intersection points between each first fitting unit and the boundary of the first component to form a first straight line fitted to the boundary of the first component; in the boundary region containing the boundary of the second component, setting N second fitting units that equally divide the boundary region; and sequentially connecting the intersection points between each second fitting unit and the boundary of the second component to form a second straight line fitted to the boundary of the second component; where N is a positive integer between 20 and 50.
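The equal-division fitting described above can be sketched as follows. This is an illustrative simplification, not the patent's exact procedure: it fits a least-squares line through the mean point of each of N equally divided fitting units, whereas the patent connects unit/boundary intersection points.

```python
import numpy as np

def fit_boundary_line(points, n=30):
    """Fit a straight line to boundary pixels by splitting the boundary
    region into n equal fitting units and least-squares fitting one
    representative point per unit (illustrative sketch only).

    points : (k, 2) array of (x, y) boundary pixels, k >= n
    returns (slope, intercept) of y = slope * x + intercept
    """
    pts = np.asarray(points, float)
    pts = pts[np.argsort(pts[:, 0])]      # order points along the boundary
    units = np.array_split(pts, n)        # n equally divided fitting units
    reps = np.array([u.mean(axis=0) for u in units if len(u)])
    slope, intercept = np.polyfit(reps[:, 0], reps[:, 1], 1)
    return float(slope), float(intercept)
```

With clean boundary points lying on a straight edge, the fitted line reproduces that edge; with noisy points, the per-unit averaging suppresses isolated outliers.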
- the step of generating N perpendicular lines between the boundary of the first component and the boundary of the second component specifically includes: determining the intersection points between the N first fitting units and the boundary of the first component as perpendicular-line setting points, and from each setting point generating a perpendicular line to the second straight line; or determining the intersection points between the N second fitting units and the boundary of the second component as perpendicular-line setting points, and from each setting point generating a perpendicular line to the first straight line.
- since the perpendicular lines are generated from the intersection points between the fitting units and the component boundary, the N perpendicular lines are evenly distributed along the gap, so that the gap between the two components can be calculated more accurately.
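As an illustrative sketch (not the patent's implementation), the averaging of N perpendicular-line lengths can be expressed as follows, assuming the first boundary has already been fitted as a segment and the second as a point/direction pair; the function name and signature are hypothetical:

```python
import numpy as np

def average_gap(seg1_a, seg1_b, line2_pt, line2_dir, n=30):
    """Average the lengths of n perpendiculars dropped from evenly spaced
    setting points on the first fitted boundary line (segment seg1_a ->
    seg1_b) onto the second fitted boundary line; n is the number of
    perpendicular lines (20-50 in the claims)."""
    a = np.asarray(seg1_a, float)
    b = np.asarray(seg1_b, float)
    p2 = np.asarray(line2_pt, float)
    d2 = np.asarray(line2_dir, float)
    d2 = d2 / np.linalg.norm(d2)
    # n evenly spaced perpendicular-line setting points along line 1
    ts = np.linspace(0.0, 1.0, n)
    pts = a[None, :] + ts[:, None] * (b - a)[None, :]
    # perpendicular distance from each setting point to line 2:
    # remove the component of (pt - p2) parallel to d2, keep the rest
    diff = pts - p2
    par = (diff @ d2)[:, None] * d2[None, :]
    dists = np.linalg.norm(diff - par, axis=1)
    return float(dists.mean())
```

Averaging over many evenly distributed perpendiculars is what suppresses local distortion in a single measurement, as the surrounding text argues.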
- the above method further includes: extracting, in the two-dimensional grayscale image, a third straight line and a fourth straight line respectively fitted to the boundaries on both sides of the welding spot; determining the first tangent point where the third straight line is tangent to the welding spot and the second tangent point where the fourth straight line is tangent to the welding spot; and calculating the distance between the first tangent point and the second tangent point as the width of the welding spot.
- a method of calculating the width of the welding spot between two components based on the three-dimensional image is also provided. The width can be calculated automatically from the lines fitted to the welding-spot boundary, helping to evaluate the welding status between components more comprehensively.
- the above-mentioned step of extracting the third straight line and the fourth straight line respectively fitted to the two side boundaries of the welding spot in the two-dimensional grayscale image specifically includes: in the boundary region containing the first side boundary of the welding spot, setting M third fitting units that equally divide the boundary region, and sequentially connecting the intersection points between each third fitting unit and the first side boundary of the welding spot to form the third straight line; and in the boundary region containing the second side boundary of the welding spot, setting M fourth fitting units that equally divide the boundary region, and sequentially connecting the intersection points between each fourth fitting unit and the second side boundary of the welding spot to form the fourth straight line; where M is a positive integer between 30 and 50.
- the above-mentioned step of determining the first tangent point where the third straight line is tangent to the welding spot and the second tangent point where the fourth straight line is tangent to the welding spot specifically includes: taking the intersection point between the last third fitting unit on the third straight line and the first side boundary of the welding spot as the first tangent point; and taking the intersection point between the last fourth fitting unit on the fourth straight line and the second side boundary of the welding spot as the second tangent point.
- in this way, the first tangent point and the second tangent point can be approximated simply and quickly from the last fitting unit of each fitted straight line.
- alternatively, the step of determining the first tangent point between the third straight line and the welding spot and the second tangent point between the fourth straight line and the welding spot specifically includes: taking the intersection point between the third straight line and the first straight line as the first tangent point; and taking the intersection point between the fourth straight line and the first straight line as the second tangent point.
- since the width of the welding spot can essentially be regarded as spanning the gap between the two components, the intersection points between the component's fitted straight line and the lines fitted to both sides of the welding spot can be used to quickly determine the two tangent points for calculating the welding-spot width, which helps obtain an accurate result.
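Taking the tangent points as the intersections of the third and fourth fitted lines with the first fitted line (the second variant above), a minimal 2-D sketch might look like this; the helper names and all coordinates are illustrative assumptions:

```python
import numpy as np

def intersect(p, d, q, e):
    """Intersection of line p + t*d with line q + s*e (2-D).
    Solves t*d - s*e = q - p for t, then returns the point."""
    A = np.column_stack([np.asarray(d, float), -np.asarray(e, float)])
    t, _ = np.linalg.solve(A, np.asarray(q, float) - np.asarray(p, float))
    return np.asarray(p, float) + t * np.asarray(d, float)

def solder_width(first_line, third_line, fourth_line):
    """Welding-spot width as the distance between the two tangent points,
    taken here as the intersections of the side-fitted lines (third and
    fourth) with the component boundary line (first). Each line is a
    (point, direction) pair."""
    p1, d1 = first_line
    t1 = intersect(*third_line, p1, d1)   # first tangent point
    t2 = intersect(*fourth_line, p1, d1)  # second tangent point
    return float(np.linalg.norm(t1 - t2))
```

This reflects the design choice argued in the text: intersecting already-fitted lines avoids searching the welding-spot contour for true tangency.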
- the present application provides a machine vision detection device.
- the machine vision detection device includes: a receiving module, configured to receive a three-dimensional image from a line laser, the three-dimensional image including at least a portion of the boundary of a first component, at least a portion of the boundary of a second component, and at least one welding spot on the boundaries of the first component and the second component; a conversion module, configured to convert the three-dimensional image into a two-dimensional grayscale image; a fitting module, configured to obtain the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image; and a gap calculation module, configured to determine N perpendicular lines between the first component boundary and the second component boundary, and to calculate the average of the N perpendicular-line lengths as the gap between the first component and the second component, where N is a positive integer.
- the three-dimensional image collected by the line laser can support continuous sampling of the detection component, and the camera does not need to be calibrated in advance.
- the device can calculate the real gap between two components by averaging multiple vertical lines, effectively eliminating the interference caused by image distortion and the like. As a result, the detection speed and detection accuracy are effectively improved.
- the present application provides an electronic device.
- the electronic device includes: a processor and a memory in communication with the processor; the memory stores computer program instructions which, when invoked by the processor, cause the processor to execute the above visual detection method.
- the three-dimensional image collected by the electronic device through the line laser can support continuous sampling of the detection component, and the camera does not need to be calibrated in advance.
- the real gap between two components can be calculated by averaging multiple vertical lines, which effectively eliminates the interference caused by image distortion and the like. As a result, the detection speed and detection accuracy are effectively improved.
- the present application provides a non-volatile computer storage medium.
- the non-volatile computer storage medium stores computer program instructions which, when invoked by a processor, cause the above visual detection method to be executed.
- the three-dimensional image collected by the line laser can support continuous sampling of the detection component, and the camera does not need to be calibrated in advance.
- the real gap between two components can be calculated by averaging multiple vertical lines, which effectively eliminates the interference caused by image distortion and the like. As a result, the detection speed and detection accuracy are effectively improved.
- the present application provides a machine vision inspection system.
- the machine vision inspection system includes: an image acquisition device including several line lasers for acquiring three-dimensional images; a drive mechanism for causing relative movement between the image acquisition device and the component to be tested; and a first controller in communication with the image acquisition device, the first controller being configured to execute the above machine vision detection method to process the three-dimensional image, so that the processing result can be used for inspection of the component to be tested.
- continuous sampling of the detection component can be supported, and the camera does not need to be calibrated in advance, which can effectively improve the detection accuracy and detection efficiency.
- the line laser can intuitively display the characteristics of the detection area, which can significantly improve the detection effect of the sampled image, and can greatly improve the detection accuracy.
- the image acquisition device includes: two line lasers, a sensor bracket and a light shield; the two line lasers are respectively arranged on both sides of the sensor bracket; the light shield is fixed on the sensor bracket and covers the outside of the line lasers;
- the sensor bracket includes: a height adjustment module and a distance adjustment module; the height adjustment module is used to adjust the height of the line lasers; the distance adjustment module is used to adjust the distance between the two line lasers.
- an additional height adjustment module and a spacing adjustment module are also set up, so that the machine vision inspection system can be adaptively adjusted when the size of the component to be tested changes, and meets various requirements.
- the additionally provided hood can prevent the laser light of the line laser from scattering to the outside, so as to prevent the eyes of the operator from being hurt.
- the machine vision detection system further includes: a second controller, configured to control the height adjustment module and the distance adjustment module so that the two line lasers reach a target distance and/or a target height;
- the second controller stores several pieces of configuration information recording target distances and target heights; each piece of configuration information corresponds to at least one component to be tested.
- configuration information for components under test of different models, sizes or shapes can be pre-recorded in the second controller. When the size, model or shape of the component to be tested changes, technicians can load the corresponding configuration information to switch and adjust automatically, which effectively improves the compatibility and operating efficiency of the detection system.
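A minimal sketch of the configuration lookup the second controller might perform before driving the adjustment modules; the model codes and all numeric values below are entirely hypothetical:

```python
# Hypothetical configuration table: component model -> stored targets.
# Real systems would persist this in the controller's storage.
CONFIGS = {
    "cell-A100": {"spacing_mm": 182.0, "height_mm": 95.0},
    "cell-B200": {"spacing_mm": 210.0, "height_mm": 102.5},
}

def load_config(model):
    """Return the stored target spacing/height for a component model,
    as loaded when the part entering the inspection system changes."""
    try:
        return CONFIGS[model]
    except KeyError:
        raise ValueError(f"no configuration stored for model {model!r}")
```

Switching parts then reduces to one lookup followed by commanding the height and spacing modules to the returned targets.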
- Fig. 1 is a schematic structural diagram of the machine vision detection system of some embodiments of the present application.
- FIG. 2 is a schematic structural diagram of an image acquisition device in some embodiments of the present application.
- FIG. 3 is a schematic structural diagram of a machine vision inspection system according to other embodiments of the present application.
- Fig. 4 is the method flowchart of the machine vision detection method of some embodiments of the present application.
- FIG. 5 is a schematic diagram of a two-dimensional grayscale image of some embodiments of the present application.
- Fig. 6 is a method flowchart of step S403 in some embodiments of the present application.
- Fig. 7a is a schematic diagram of acquiring component boundaries in some embodiments of the present application, showing a fitting unit for equally dividing the boundary area;
- Figure 7b is a schematic diagram of the boundary of some embodiments of the present application, showing the display form of the component boundary obtained in Figure 7a;
- Fig. 8 is a method flowchart of a machine vision detection method according to another embodiment of the present application.
- Fig. 9 is a method flowchart of step S801 in some embodiments.
- Fig. 10 is a method flowchart of the machine vision inspection method in some embodiments of the present application, in which the components to be tested are the cell top cover and the cell aluminum shell after the pre-welding process is completed;
- Fig. 11 is a schematic diagram of the components to be tested in some embodiments of the present application, showing the cell top cover and the cell aluminum shell inspected in Fig. 10;
- Fig. 12 is a schematic diagram of a machine vision inspection device according to some embodiments of the present application.
- Fig. 13 is a schematic diagram of a machine vision detection device according to another embodiment of the present application.
- Fig. 14 is a schematic diagram of an electronic device according to some embodiments of the present application.
- “multiple” refers to more than two (including two); similarly, “multiple groups” refers to more than two groups (including two groups), and “multiple pieces” refers to more than two pieces (including two pieces).
- the top cover of the cell and the aluminum shell need to be welded.
- a pre-welding process for preliminary welding of the top cover and the aluminum case to realize the initial positioning between the two.
- a typical machine vision inspection method is to use a two-dimensional camera to collect image information of the welding part of the top cover and the aluminum shell after the pre-welding process, and then process and analyze the image information to determine whether the gap between the two meets the requirements.
- the applicant provides a component gap detection method based on line lasers, which effectively addresses the defects of traditional two-dimensional cameras, namely the need for calibration in advance and the many interferences in the collected image information, which result in low detection accuracy and detection efficiency.
- the embodiments of the present application take as an example the case where the components to be tested are the top cover and the aluminum shell after the pre-welding process.
- the machine vision inspection system of the embodiment of the present application can also be applied to other components to be tested with similar structural shape characteristics for inspection.
- FIG. 1 is a schematic structural diagram of a machine vision inspection system according to some embodiments of the present application.
- the machine vision inspection system includes: an image acquisition device 110 , a driving mechanism 120 and a first controller 130 .
- the image acquisition device 110 is a device for acquiring three-dimensional image signals. Specifically, any suitable type and quantity of line lasers can be selected and used, and a support structure suitable for the line lasers can be used.
- the driving mechanism 120 is an action unit for driving relative movement between the component to be tested (for example, the cell top cover and the cell aluminum case after the pre-soldering process) and the image acquisition device 110 .
- the component to be tested can be clamped and fixed on the driving mechanism 120 , driven by the driving mechanism 120 , it moves relative to the line laser of the image acquisition device 110 , so as to complete the image sampling of the component to be tested.
- the first controller 130 may be an electronic computing device with a logic operation function, including but not limited to a server or an industrial computer. It can establish a communication connection with the image acquisition device in a wired or wireless manner, so as to receive the three-dimensional image signal acquired by the image acquisition device.
- specifically, the cell top cover A1 and the cell aluminum case A2 can be driven by a motor or another suitable type of driving mechanism 120 to move relative to the image acquisition device 110 at a set speed.
- with an encoder or similar sensor device, the line laser of the image acquisition device 110 can continuously acquire three-dimensional image signals of the two long sides of the component under test composed of the pre-welded cell top cover A1 and cell aluminum shell A2, at an acquisition frequency adapted to the relative moving speed.
- the 3D image signal acquired by the image acquisition device 110 is provided to the first controller 130, and after a series of machine vision detection method steps such as image processing are performed by the first controller 130, the detection result is output and provided to an external device.
- the image acquisition device can acquire the three-dimensional image signals of the component to be tested by continuous scanning and sampling.
- the continuous scanning method of the line laser can effectively reduce the frequency of start and stop actions, thus greatly improving the detection speed.
- the image acquisition device 110 may include: two line lasers 111 , a sensor bracket 112 , a height adjustment module 113 , a spacing adjustment module 114 and a light shield 115 .
- two line lasers 111 are arranged on both sides of the sensor bracket 112 respectively, and can be used to simultaneously collect three-dimensional image signals of two symmetrical long sides of the cell top cover and the cell aluminum shell.
- the line laser can have a suitable field of view and pixel precision according to the needs of the actual situation.
- the resolution of the line laser 111 in the scanning direction can be set to be smaller than the gap detection threshold (eg, 0.08 mm) to meet the detection requirements.
- the scanning line speed is set above 130 mm/s, and the scanning frequency is around 5 kHz.
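The relation between line speed, scanning frequency and scan-direction resolution can be checked with simple arithmetic: the spacing between successive laser profiles is speed divided by frequency. With the example figures above, 130 mm/s at 5 kHz gives 0.026 mm, comfortably below the 0.08 mm gap-detection threshold.

```python
def scan_resolution_mm(line_speed_mm_s, scan_freq_hz):
    """Spacing between successive laser profiles along the scan direction."""
    return line_speed_mm_s / scan_freq_hz

# With the figures above: 130 mm/s at 5 kHz
res = scan_resolution_mm(130.0, 5000.0)
assert res < 0.08  # finer than the 0.08 mm gap-detection threshold
```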
- Both the height adjustment module 113 and the distance adjustment module 114 are arranged on the sensor bracket 112 .
- they can be implemented using any suitable type of mechanical structure, including but not limited to structures based on screws, cylinders, or gears.
- the height adjustment module 113 and the distance adjustment module 114 allow adjustment within a certain range to meet the detection requirements of different types or sizes of batteries.
- the light shield 115 may be disposed on the sensor bracket 112 so as to cover the line laser 111 therein.
- a cover of any suitable shape, size or material can be used, as long as the cover can cover the line laser 111 .
- Such a design can avoid the leakage or reflection of the laser light generated by the line laser to the eyes of the operator, and achieve the effect of protecting the human body.
- the data acquisition of the component to be tested is completed by the line laser, which effectively avoids the defects of traditional cameras, such as unclear or distorted images, or deviations between the imaged gap and the actual value caused by light-source settings.
- the line laser provides a three-dimensional image signal, which can realize multi-angle and multi-direction measurement, avoid measurement misjudgment caused by blind areas blocked by vision, and can also provide more accurate and intuitive image information.
- FIG. 3 is a schematic structural diagram of a machine vision inspection system provided by other embodiments of the present application.
- the machine vision inspection system may also include a second controller 140 .
- the second controller 140 stores several pieces of configuration information recording target distances and target heights.
- the configuration information is data information corresponding to the component to be tested, and can be preset by technicians according to actual product production conditions.
- when the components to be tested entering the machine vision inspection system change, technicians or operators can select the configuration information corresponding to the components currently to be tested; the second controller then, according to the selected configuration information, automatically controls the height adjustment module 113 and the spacing adjustment module 114 to move the line lasers to the target spacing and target height recorded in the configuration information, so as to complete the acquisition of three-dimensional image signals of the component to be tested.
- the controllers are named “first controller” and “second controller” according to the different functions they perform.
- the description of the first controller and the second controller is not intended to limit their specific implementation; they may be different functional modules in the same electronic computing device, or separate functional modules arranged in different electronic computing devices.
- One of the advantages of the embodiments of the present application is that, through the pre-stored configuration information, when the size or type of the component to be tested changes (for example, when the size of the battery cell to be detected changes), the operator can simply and quickly adjust the machine vision detection system to adapt to the changed component, which effectively improves detection efficiency and compatibility.
- FIG. 4 is a method flowchart of a machine vision inspection method in some embodiments of the present application.
- the machine vision detection method can be executed by the above-mentioned first controller.
- the machine vision inspection method includes:
- S401 Receive a three-dimensional image from a line laser.
- the three-dimensional image is an image signal containing depth information collected and obtained by the line laser moving relative to the component to be measured.
- the three-dimensional image includes: at least a part of the first component, at least a part of the second component, and at least one welding spot between the first component and the second component.
- the above three-dimensional image is specifically determined by the shooting area of the line laser.
- the line laser may also photograph the entire part to be tested; it is sufficient that the image includes the solder joints formed by the pre-welding process and meets the needs of the inspection, and no limitation is imposed here.
- the three-dimensional image collected by the line laser may be a color image marked with depth information.
- an appropriate type of pixel conversion method can be used to convert it into a corresponding grayscale image.
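As a minimal sketch of this conversion step (assuming the line laser's output is available as a per-pixel height map stored as nested lists; the function name and representation are illustrative, not the patent's actual implementation), the depth range can be linearly normalized to 8-bit gray values:

```python
def depth_to_grayscale(depth_map):
    """Map a per-pixel height map to 8-bit gray values (0-255).

    Illustrative sketch only: after normalization, boundaries and solder
    joints appear as intensity steps in the resulting 2-D grayscale image.
    """
    flat = [v for row in depth_map for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # perfectly flat surface: avoid division by zero
        return [[0] * len(row) for row in depth_map]
    scale = 255.0 / (hi - lo)
    return [[int((v - lo) * scale) for v in row] for row in depth_map]
```

In practice the conversion is performed by the image software system; any monotonic depth-to-intensity mapping preserves the edges that the later fitting steps rely on.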
- the two-dimensional grayscale image may include a part of the border of the first part 510 (such as the aluminum case of the battery cell), a part of the border of the second part 520, and a welding spot 530 located on the border between the first part 510 and the second part 520; the welding spot 530 lies on both the first component 510 and the second component 520 at the same time, so as to fix the two components together.
- "obtaining" here refers to distinguishing the boundaries of the first part and the second part from other content in the two-dimensional grayscale image, and identifying them in any suitable form.
- boundary extraction methods such as edge detection algorithms based on autocorrelation functions, edge detection algorithms based on gray level co-occurrence matrix, or boundary fitting methods based on differential thinking.
- N is a positive integer indicating the number of vertical lines to be set; it can be chosen by technicians according to actual needs, for example 20 to 50.
- a "perpendicular line” is a line segment between the boundary of the first part and the second part, perpendicular to the first straight line or the second straight line. Those skilled in the art can understand that each vertical line represents the gap between the first component and the second component at the position of the vertical line.
- the length of each vertical line represents the gap between the first component and the second component at that position; taking the average of these vertical lines gives the overall gap between the two components, which helps judge whether the gap after the pre-welding process meets the quality requirements of the subsequent laser welding.
- One of the advantages of embodiments of the present application is that no pre-calibrated data need to be utilized when calculating the gap between two components. Moreover, taking the average value of the lengths of multiple vertical lines as the result of gap detection between two components can eliminate interference well and improve detection accuracy.
- FIG. 6 is a flowchart of a method for acquiring a first component boundary and a second component boundary in a two-dimensional grayscale image according to an embodiment of the present application.
- the step S403 of obtaining the boundary of the first component and the boundary of the second component specifically includes:
- the "boundary area" is an image area including the boundary of the first component. It is a preliminarily demarcated region, which can be obtained by dividing the image along certain markers; for example, the gap between the first part and the second part in the two-dimensional grayscale image can simply be used as the dividing feature.
- the "first fitting unit” is the sampling window used for fitting, which represents the step size in the fitting process. It can be understood that, for the same first component boundary, the more first fitting units are set, the smaller the length of each first fitting unit is, and the higher the fitting degree is, and vice versa.
- the first fitting unit is used as a sampling window, which may be a rectangular frame with a certain width in the boundary area.
- the boundary of the first component extending to the entire boundary area will sequentially pass through N first fitting units to form an intersection with the first fitting unit.
- a boundary 711 is included in the boundary area 710 .
- image processing analysis is performed on each fitting unit 720 in turn to find the intersection point 730 between the boundary 711 of the first part and that fitting unit 720. It can be understood that the shorter each fitting unit, the closer the line segments connecting adjacent intersection points 730 are to the first component boundary 711, and correspondingly the higher the degree of fit.
- when presenting the final fitting result, only the intersection points 730 and the line segments 740 connecting pairs of intersection points 730 may be displayed, in a form similar to the connected segments shown in FIG. 7b, which resembles a caliper.
- fitting unit 720 may also be referred to as a "caliper" in some embodiments.
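The caliper-style fitting described above can be sketched as follows. This is an illustrative simplification, not the patent's actual algorithm: each fitting unit is taken as a vertical strip of the boundary area, the intersection point is located at the largest intensity step within the strip, and a least-squares line stands in for connecting the points in sequence (real systems use sub-pixel edge detection).

```python
def caliper_edge_points(gray, x0, x1, n_units):
    """Split columns [x0, x1) into n_units strips ("calipers"); in each
    strip, take the row with the largest vertical intensity step as the
    intersection point with the component boundary."""
    points = []
    step = (x1 - x0) / n_units
    for i in range(n_units):
        x = int(x0 + (i + 0.5) * step)  # centre column of this caliper
        col = [row[x] for row in gray]
        # largest difference between neighbouring rows marks the edge
        y = max(range(len(col) - 1), key=lambda r: abs(col[r + 1] - col[r]))
        points.append((x, y + 0.5))     # edge lies between rows y and y+1
    return points


def fit_line(points):
    """Least-squares straight line y = a*x + b through the intersection
    points, standing in for connecting them segment by segment."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n
```

Increasing `n_units` (more calipers) tracks the boundary more closely at the cost of more computation, matching the trade-off described above.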
- the method of obtaining the boundary of the second component is the same as that of obtaining the boundary of the first component in steps S4031 and S4032 above; for details, reference may be made to the fitting process shown in FIG. 7a and FIG. 7b.
- the use of "first" and “second” is only used to distinguish the sampling windows provided on the first component and the second component, and is not used to specifically limit the sampling windows.
- the method of obtaining the second straight line is similar to that of the first straight line: the intersection points formed by the fitting units are likewise connected in sequence.
- N can be set as a positive integer between 20-50.
- Such a numerical range can also balance the required calculation amount under the condition that the normal detection accuracy requirement is met.
- Such a fitting straight line generation method can conveniently obtain the required fitting straight line by adjusting the number of sampling units (such as the number of calipers), so as to meet the detection requirement for the gap between the first part and the second part.
- the step of generating N vertical lines may specifically include:
- each vertical line setting point is used as a starting point to generate N vertical lines perpendicular to the second straight line.
- setting N fitting units generally forms N intersection points 730; using each vertical line setting point (i.e., each intersection point 730) as a starting point, N vertical lines can be formed.
- the caliper or sampling unit used when generating the second straight line may likewise be used to set the vertical lines, so that the vertical lines are also evenly distributed.
- adjacent vertical lines can thus be kept at a suitable spacing.
- the vertical lines can also be set based on the sampling units used when generating the second straight line: determine the intersection points between the N second fitting units and the boundary of the second component as the vertical line setting points, and use each setting point as a starting point to generate N vertical lines perpendicular to the first straight line.
- the vertical lines thus arranged have the same number as the sampling units. Multiple vertical lines can be evenly distributed in the component gap of the two-dimensional image, so that the calculation result of the average value of the vertical line length is closer to the real gap between two components.
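The perpendicular-and-average step can be sketched as follows, assuming the fitted straight lines are available in slope-intercept form (names and representation are illustrative assumptions, not the patent's implementation):

```python
def point_to_line_distance(point, a, b):
    """Perpendicular distance from point (x, y) to the line y = a*x + b."""
    x, y = point
    return abs(a * x - y + b) / (a * a + 1.0) ** 0.5


def average_gap(setting_points, a, b):
    """Mean length of the N perpendiculars dropped from the vertical-line
    setting points on one boundary onto the other component's fitted
    straight line; the mean is taken as the component gap."""
    distances = [point_to_line_distance(p, a, b) for p in setting_points]
    return sum(distances) / len(distances)
```

Averaging over many setting points suppresses local noise at any single boundary position, which is the stated reason for using N perpendiculars rather than one.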
- FIG. 8 shows a machine vision detection method provided by another embodiment of the present application. Referring to FIG. 8, the detection method may further perform the following steps on the basis of the two-dimensional grayscale image:
- the "soldering point" may be a welding position used to realize the initial positioning between the first component and the second component after the pre-soldering process is completed.
- solder joints 530 are located on the first component 510 and the second component 520 as shown in FIG. 5 .
- the welding site may appear in any type of shape and area in the two-dimensional grayscale image based on the actual welding situation.
- a "tangent point" refers to the point where a fitted boundary line touches the area occupied by the solder joint in the two-dimensional grayscale image; it also indicates the position where the solder joint on that side meets the gap.
- first tangent point and second tangent point are used to distinguish the tangent points located on both sides of the welding point, and are not used to limit the positions of the two tangent points.
- the width of the solder joint can be regarded as its extent above the gap between the two components, in the direction in which the gap extends, and can be represented by the distance between the two tangent points located at the farthest ends. Therefore, the distance between the two tangent points can be calculated as the width of the welding spot, to help judge the welding quality of the pre-welding process.
- One of the advantages of this embodiment is that, on the basis of the two-dimensional grayscale image, a detection method for the width of the solder joint is further provided, which can better ensure the accuracy of the pre-welding inspection result and avoid defects in the subsequent full welding.
- the step S801 of extracting the third straight line and the fourth straight line may specifically include the following steps:
- M is an empirical value, which can be set by technicians according to actual needs: more fitting units yield a smoother fitted line, while fewer fitting units require less calculation.
- M can be set as a positive integer between 30-50.
- Such a numerical range can also balance the required calculation amount under the condition that the normal detection accuracy requirement is met.
- the method for obtaining the boundaries on both sides of the solder joint is the same as the method for obtaining the boundary of the first component and the boundary of the second component described above.
- the specific implementation process can be referred to as shown in FIG. 7a and FIG. 7b , and will not be repeated here.
- the following steps can be adopted to help determine the first tangent point and the second tangent point:
- the intersection point between the third fitting unit and the boundary on one side of the solder joint is used as the first tangent point.
- among the fitted lines, the intersection point between the last fitting unit and the boundary can be selected as the tangent point for calculating the width of the solder joint.
- the intersection point of the last fitting unit can basically be considered to lie at the extreme end of the boundary segment being obtained. Therefore, the positions of the last intersection points on the third straight line and the fourth straight line are essentially where the solder joint meets the gap, and are accordingly determined as the first tangent point and the second tangent point.
- with the fitted straight lines obtained by the method described above, their intersection points with the first straight line are taken as the tangent points.
- the position where the first straight line intersects with the fitted straight lines on both sides of the solder joint can be used as the tangent point for calculating the solder joint width.
- the above method can also determine the junction position between the welding spot and the gap, and then determine it as the first tangent point and the second tangent point.
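Taking the tangent points as line-line intersections, the width computation can be sketched as follows (slope-intercept representation and function names are illustrative assumptions):

```python
def line_intersection(a1, b1, a2, b2):
    """Intersection of y = a1*x + b1 and y = a2*x + b2 (non-parallel)."""
    if a1 == a2:
        raise ValueError("parallel lines have no unique intersection")
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1


def solder_joint_width(a1, b1, a3, b3, a4, b4):
    """Width = distance between the two tangent points where the third and
    fourth straight lines (solder-joint sides) cross the first straight
    line (component boundary)."""
    x1, y1 = line_intersection(a3, b3, a1, b1)
    x2, y2 = line_intersection(a4, b4, a1, b1)
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
```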
- FIG. 10 is a method flow chart of a method for detecting the gap between pre-welded components and the width of solder joints provided by the embodiment of the present application.
- Fig. 11 is a schematic diagram of the battery cell after the pre-soldering process provided by the embodiment of the present application.
- the steps of the component gap and solder joint width detection method include:
- the single cell flows, along with its fixture, to the sampling area where the image acquisition device is located.
- the component to be tested after the pre-welding process mainly consists of a cell aluminum shell 910 and a cell top cover 920.
- the cell aluminum shell 910 is rectangular and symmetrical on both sides.
- the cell top cover 920 is surrounded by the cell aluminum shell 910 and has an outline close to that of the shell.
- a certain gap 930 exists between the two; overlying the gap 930 are a plurality of solder joints 940.
- after the cell to be detected enters the detection starting position, the controller sends a scanning signal to the image acquisition device.
- the controller may specifically use any suitable type of sensor (such as an infrared sensor) to determine whether the battery cell has entered the detection starting position.
- the controller may be a Programmable Logic Controller (PLC) or any other suitable type of electronic processing device.
- the driving mechanism drives the cell to move relative to the image acquisition device at a set speed
- the image acquisition device that receives the scanning signal scans according to the output frequency of the encoder to acquire a three-dimensional image signal.
- the encoder is a component that feeds back the relative transfer speed of the cell to be tested. Therefore, according to the output frequency of the encoder, the line laser can use a scanning frequency adapted to the relative moving speed of the battery to scan to obtain a three-dimensional image signal of the battery.
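The relationship between transfer speed and scanning frequency can be illustrated with a simple calculation (illustrative arithmetic only; in practice the profiler is triggered directly by encoder pulses rather than by a computed frequency):

```python
def scan_frequency_hz(transfer_speed_mm_s, profile_pitch_mm):
    """Profile rate needed so that consecutive laser profiles lie
    profile_pitch_mm apart when the cell moves at transfer_speed_mm_s.
    Example: 100 mm/s at a 0.05 mm pitch requires a 2 kHz profile rate."""
    return transfer_speed_mm_s / profile_pitch_mm
```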
- the line lasers can be arranged in pairs so that their shooting areas cover the two symmetrical long sides of the cell aluminum shell 910 and the top cover 920, as shown by the dotted box 950 in the figure. Within the shooting area there are a plurality of welding spots 940 for realizing the initial positioning of the cell aluminum shell 910 and the cell top cover 920.
- the controller receives the three-dimensional image signal acquired by the image acquisition device and generates a corresponding two-dimensional grayscale image.
- the processing operation on the three-dimensional image in the above step S904 can be executed by calling one or more algorithms in the corresponding image software system.
- a coordinate system can be established according to the positional relationship between the long side and the short side of the aluminum battery case, so as to facilitate subsequent calculation and operation.
- the long side and short side of the cell aluminum shell can be obtained, the intersection point between them is used as the anchor point of the coordinate system, and the rotation angle of the sides relative to the image is used as the reference angle, so that a coordinate system is established in which the y-axis is parallel to the long side and the x-axis is parallel to the short side.
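Under the assumption that the long and short sides have already been fitted as straight lines in slope-intercept form (an illustrative representation; the actual software's interface is not specified in the text), anchoring the coordinate system reduces to one intersection and one angle:

```python
import math

def shell_coordinate_system(long_a, long_b, short_a, short_b):
    """Anchor a part coordinate system at the corner of the aluminum
    shell: origin = intersection of the fitted long-side and short-side
    lines, reference angle = inclination of the long side vs. the image
    x-axis (so the part's y-axis runs along the long side)."""
    x0 = (short_b - long_b) / (long_a - short_a)
    y0 = long_a * x0 + long_b
    return (x0, y0), math.atan(long_a)
```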
- the controller extracts, from the preprocessed two-dimensional grayscale image, a first straight line fitted to the boundary of the cell aluminum shell and a second straight line fitted to the boundary of the cell top cover.
- the controller may be any suitable type of computing equipment with logic computing capability deployed on the production line or at the testing site; it runs corresponding image processing software to perform a series of image processing operations on the two-dimensional grayscale images.
- the number of vertical lines to be calculated may be determined by the number of calipers selected to be used when generating the fitted straight line.
- the intersection point of each sampling unit (ie, the caliper) and the boundary of the part is used as the starting point of the vertical line, and the distance to the fitting line on the other side is calculated.
- the gap threshold can be set according to actual needs, for example, it is set to 0.08mm.
- the controller extracts a third straight line and a fourth straight line fitting with boundaries on both sides of the solder joints from the two-dimensional grayscale image.
- a similar edge extraction algorithm can also be used to obtain a straight line fitted to the boundaries on both sides of the solder joint.
- the two sides of the solder joint refer to the two sides through which the extending direction of the gap passes.
- the first tangent point and the second tangent point at the farthest end can be found in various ways to calculate the width of the welding spot. Similar to the gap between the above two components, the width of the welding spot usually needs to be within a certain range to avoid poor welding. In some embodiments, a standard range of weld joint widths may be 3-5mm.
- the points where the last sampling unit (ie, the caliper) of the third straight line and the fourth straight line intersect with the edge of the welding spot can be used as the two tangent points.
- two intersection points where the third straight line and the fourth straight line intersect the first straight line may also be used as tangent points respectively.
- the detection result refers to data information such as calculated component gaps and/or solder joint widths. It can be fed back to the manufacturing execution system, and displayed in any suitable form on a display device such as a display to show the operator in real time.
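A sketch of the pass/fail decision fed back as the detection result, using the example limits mentioned in the text (0.08 mm gap threshold, 3-5 mm solder joint width); the function name is illustrative and the limits would be configured per product in practice:

```python
def inspect(gap_mm, solder_widths_mm,
            gap_limit_mm=0.08, width_range_mm=(3.0, 5.0)):
    """Return True when the averaged component gap is within the limit
    and every measured solder-joint width falls inside the allowed range."""
    gap_ok = gap_mm <= gap_limit_mm
    widths_ok = all(width_range_mm[0] <= w <= width_range_mm[1]
                    for w in solder_widths_mm)
    return gap_ok and widths_ok
```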
- One of the advantages of the embodiments of the present application is that continuous sampling can be supported without stopping at each solder joint position, which improves the detection speed. Moreover, when detecting the components to be tested, the real component gap and solder joint width can be detected in the two-dimensional grayscale image, which is not easily affected by external light sources, etc., and the detection accuracy has been effectively improved.
- FIG. 12 is a machine vision detection device according to an embodiment of the present application.
- the machine vision detection device 1100 includes: a receiving module 1110 , a conversion module 1120 , a fitting module 1130 and a gap calculation module 1140 .
- the receiving module 1110 is used for receiving the three-dimensional image from the line laser.
- the three-dimensional image includes: at least a portion of the boundary of the first component, at least a portion of the boundary of the second component, and at least one welding point located on the boundary of the first component and the second component.
- the conversion module 1120 is used to convert the 3D image into a 2D grayscale image.
- the fitting module 1130 is used to obtain the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image.
- the gap calculation module 1140 is used to determine N vertical lines between the boundary of the first component and the boundary of the second component, and to calculate the average of the lengths of the N vertical lines as the gap between the first component and the second component, N being a positive integer.
- the receiving module 1110 receives and provides to the converting module 1120 a three-dimensional image comprising two parts and the solder joints overlaid between the parts.
- the conversion module 1120 converts the 3D image into a 2D grayscale image.
- the fitting module 1130 performs edge extraction in the two-dimensional grayscale image generated by the conversion module 1120 to obtain the boundaries of the two components.
- the gap calculation module 1140 obtains the gap between the two components by calculating the length of the vertical line between the boundaries of the two components multiple times and taking the average value.
- One of the advantages of the embodiment of the present application is that when detecting the gap between components, a more accurate measurement result of the gap can be obtained by adopting the method of averaging multiple times of detection. Moreover, image acquisition based on line lasers can achieve continuous sampling while effectively eliminating a series of interference caused by traditional cameras due to light source occlusion and other factors.
- the fitting module 1130 may specifically include: a first fitting unit 1131 and a second fitting unit 1132 .
- the first fitting unit 1131 is used to set 20-50 first fitting units that equally divide the boundary area containing the boundary of the first component, and to connect in sequence the intersection points between each first fitting unit and the boundary of the first component to form the first straight line.
- the second fitting unit 1132 is used to set 20-50 second fitting units that equally divide the boundary area containing the boundary of the second component, and to connect in sequence the intersection points between each second fitting unit and the boundary of the second component to form the second straight line.
- the gap calculation module 1140 is specifically configured to: determine the N intersection points between the N first fitting units and the boundary of the first component as vertical line setting points, and use each setting point as a starting point to generate N vertical lines perpendicular to the second straight line; or determine the N intersection points between the N second fitting units and the boundary of the second component as vertical line setting points, and use each setting point as a starting point to generate N vertical lines perpendicular to the first straight line.
- Such a vertical line setting method is based on the fitting units that generate the fitted straight line, producing a plurality of evenly distributed vertical lines equal in number to the fitting units, which realizes the gap detection method of multiple measurements and averaging.
- the machine vision inspection device further includes: an edge extraction module 1150 and a solder joint width calculation module 1160 .
- the edge extraction module 1150 is used for extracting the third straight line and the fourth straight line respectively fitted to the two side boundaries of the solder joint in the two-dimensional grayscale image.
- the solder joint width calculation module 1160 is used to determine the first tangent point where the third straight line is tangent to the solder joint and the second tangent point where the fourth straight line is tangent to the solder joint, and to calculate the distance between the first tangent point and the second tangent point as the width of the solder joint.
- Such a technical solution further performs automatic detection of the width of solder joints on the basis of two-dimensional grayscale image detection of component gaps, which is conducive to comprehensively evaluating the quality of the pre-soldering process.
- the edge extraction module 1150 is specifically configured to: set, in the boundary area on one side of the solder joint, 30-50 third fitting units that equally divide that boundary area, and connect in sequence the intersection points between each third fitting unit and the boundary on that side to form the third straight line; and set, in the boundary area on the other side of the solder joint, 30-50 fourth fitting units that equally divide that boundary area, and connect in sequence the intersection points between each fourth fitting unit and the boundary on the other side to form the fourth straight line.
- Such a design uses a method similar to component edge extraction to obtain the third straight line and the fourth straight line fitted to the boundaries on both sides of the solder joint, which can help complete the automatic detection of the solder joint width.
- the welding spot width calculation module 1160 is specifically configured to: take the intersection point between the last third fitting unit on the third straight line and the boundary on one side of the welding spot as the first tangent point, and take the intersection point between the last fourth fitting unit on the fourth straight line and the boundary on the other side of the solder joint as the second tangent point.
- the position of the last sampling unit of the third straight line and the fourth straight line is used as the two tangent points, and the positions of the two tangent points can be determined simply and quickly.
- the welding spot width calculation module 1160 is specifically configured to: take the intersection point between the third straight line and the first straight line as the first tangent point, and the intersection point between the fourth straight line and the first straight line as the second tangent point.
- Such a design utilizes the intersection between the fitting straight line of the component boundary and the fitting straight line on both sides of the solder joint to obtain two tangent points, and can also quickly and conveniently determine the positions of the two tangent points.
- the functional modules of the machine vision inspection device are divided according to the method steps to be executed.
- one or more functional modules of the machine vision detection device in the embodiments of the present application (such as the receiving module, the conversion module, the fitting module, the gap calculation module, the edge extraction module and the welding spot width calculation module) can be split into more functional modules according to actual needs to perform the corresponding method steps.
- one or more functional modules of the machine vision detection device in the embodiments of the present application may also be integrated into fewer functional modules to perform the corresponding method steps.
- FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- the electronic device may be a first controller, a second controller or any other suitable type of electronic computing platform for executing the above-mentioned image software system, and its specific implementation is not limited here.
- the electronic device may include: a processor 1310 , a communication interface 1320 , a memory 1330 and a communication bus 1340 .
- the processor 1310 , the communication interface 1320 and the memory 1330 communicate with each other through the communication bus 1340 .
- the communication interface 1320 is used for communication connection with other devices (such as image acquisition devices).
- the processor 1310 is used to call the program 1350 to execute one or more method steps in the machine vision inspection method in the above-mentioned embodiments or realize one or more functional modules in the machine vision inspection device in the above-mentioned embodiments.
- the program 1350 may include program codes or computer operation instructions.
- the processor 1310 may be a central processing unit, another general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc.
- the memory 1330 is used to store the program 1350 .
- the memory 1330 may include a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
- the embodiment of the present application also provides a computer-readable storage medium.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- the computer readable storage medium stores a computer program.
- a complete computer program product is embodied on one or more computer-readable storage media (including but not limited to, disk storage, CD-ROM, optical storage, etc.) containing the computer program disclosed in the embodiments of the present application.
Claims (13)
- A machine vision detection method, characterized by comprising: receiving a three-dimensional image from a line laser, the three-dimensional image containing at least a part of the boundary of a first component, at least a part of the boundary of a second component, and at least one welding spot located on the boundaries of the first component and the second component; converting the three-dimensional image into a two-dimensional grayscale image; obtaining the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image; generating N vertical lines located between the boundary of the first component and the boundary of the second component; and calculating the average of the lengths of the N vertical lines as the gap between the first component and the second component, N being a positive integer.
- The method according to claim 1, characterized in that obtaining the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image specifically comprises: in a boundary area containing the boundary of the first component, setting N first fitting units that equally divide the boundary area; connecting in sequence the intersection points between each first fitting unit and the boundary of the first component to form a first straight line fitted to the boundary of the first component; in a boundary area containing the boundary of the second component, setting N second fitting units that equally divide the boundary area; and connecting in sequence the intersection points between each second fitting unit and the boundary of the second component to form a second straight line fitted to the boundary of the second component; wherein N is a positive integer between 20 and 50.
- The method according to claim 2, characterized in that generating the N vertical lines located between the boundary of the first component and the boundary of the second component specifically comprises: determining the intersection points between the N first fitting units and the boundary of the first component as vertical line setting points, and using each vertical line setting point as a starting point to generate N vertical lines perpendicular to the second straight line; or determining the intersection points between the N second fitting units and the boundary of the second component as vertical line setting points, and using each vertical line setting point as a starting point to generate N vertical lines perpendicular to the first straight line.
- The method according to claim 2, characterized in that the method further comprises: extracting, in the two-dimensional grayscale image, a third straight line and a fourth straight line respectively fitted to the boundaries on the two sides of the welding spot; determining a first tangent point where the third straight line is tangent to the welding spot and a second tangent point where the fourth straight line is tangent to the welding spot; and calculating the distance between the first tangent point and the second tangent point as the width of the welding spot.
- The method according to claim 4, characterized in that extracting the third straight line and the fourth straight line respectively fitted to the boundaries on the two sides of the welding spot specifically comprises: in a boundary area containing the first side boundary of the welding spot, setting M third fitting units that equally divide the boundary area, and connecting in sequence the intersection points between each third fitting unit and the first side boundary of the welding spot to form the third straight line; and in a boundary area containing the second side boundary of the welding spot, setting M fourth fitting units that equally divide the boundary area, and connecting in sequence the intersection points between each fourth fitting unit and the second side boundary of the welding spot to form the fourth straight line; wherein M is a positive integer between 30 and 50.
- The method according to claim 5, characterized in that determining the first tangent point where the third straight line is tangent to the welding spot and the second tangent point where the fourth straight line is tangent to the welding spot specifically comprises: taking the intersection point between the last third fitting unit on the third straight line and the first side boundary of the welding spot as the first tangent point; and taking the intersection point between the last fourth fitting unit on the fourth straight line and the second side boundary of the welding spot as the second tangent point.
- The method according to claim 5, characterized in that determining the first tangent point where the third straight line is tangent to the welding spot and the second tangent point where the fourth straight line is tangent to the welding spot specifically comprises: taking the intersection point between the third straight line and the first straight line as the first tangent point; and taking the intersection point between the fourth straight line and the first straight line as the second tangent point.
- A machine vision detection apparatus, characterized by comprising: a receiving module for receiving a three-dimensional image from a line laser, the three-dimensional image containing at least a part of the boundary of a first component, at least a part of the boundary of a second component, and at least one welding spot located on the boundaries of the first component and the second component; a conversion module for converting the three-dimensional image into a two-dimensional grayscale image; a fitting module for obtaining the boundary of the first component and the boundary of the second component in the two-dimensional grayscale image; and a gap calculation module for determining N vertical lines located between the boundary of the first component and the boundary of the second component, and calculating the average of the lengths of the N vertical lines as the gap between the first component and the second component, N being a positive integer.
- An electronic device, characterized by comprising a processor and a memory communicatively connected to the processor; the memory stores computer program instructions which, when invoked by the processor, cause the processor to execute the visual inspection method according to any one of claims 1-7.
- A non-volatile computer storage medium, characterized in that the non-volatile computer storage medium stores computer program instructions which, when invoked by a processor, execute the visual inspection method according to any one of claims 1-7.
- A visual inspection system, characterized by comprising: an image acquisition device comprising several line lasers for acquiring three-dimensional images; a driving mechanism for causing relative movement between the image acquisition device and the component to be tested; and a first controller communicatively connected to the image acquisition device, the first controller being configured to process the three-dimensional images so that the processing results are used for inspection of the component to be tested.
- The visual inspection system according to claim 11, characterized in that the image acquisition device comprises: two line lasers, a sensor bracket and a shading cover; the two line lasers are respectively arranged on the two sides of the sensor bracket; the shading cover is fixed on the sensor bracket and covers the line lasers; the sensor bracket comprises a height adjustment module and a spacing adjustment module; the height adjustment module is used to adjust the height of the line lasers; and the spacing adjustment module is used to adjust the spacing between the two line lasers.
- The visual inspection system according to claim 11, characterized by further comprising: a second controller for controlling the height adjustment module and the spacing adjustment module so that the two line lasers reach a target spacing and/or a target height; the second controller stores several pieces of configuration information recording the target spacing and target height, each piece of configuration information corresponding to at least one type of component to be tested.
Priority Applications (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP21969381.9A (EP4286789A1) | 2021-12-29 | 2021-12-29 | Machine vision detection method, detection apparatus and detection system thereof |
| CN202180097091.3A (CN117203486A) | 2021-12-29 | 2021-12-29 | Machine vision detection method, detection apparatus and detection system thereof |
| JP2023553313A (JP2024508331A) | 2021-12-29 | 2021-12-29 | Machine vision detection method, detection apparatus and detection system thereof |
| PCT/CN2021/142250 (WO2023123003A1) | 2021-12-29 | 2021-12-29 | Machine vision detection method, detection apparatus and detection system thereof |
| KR1020237029433A (KR20230134597A) | 2021-12-29 | 2021-12-29 | Machine vision detection method, detection apparatus and detection system thereof |
| US18/524,990 (US20240095949A1) | 2021-12-29 | 2023-11-30 | Machine vision detection method, detection apparatus and detection system thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/142250 WO2023123003A1 (zh) | 2021-12-29 | 2021-12-29 | 机器视觉检测方法、其检测装置及其检测系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/524,990 Continuation US20240095949A1 (en) | 2021-12-29 | 2023-11-30 | Machine vision detection method, detection apparatus and detection system thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023123003A1 (zh) | 2023-07-06 |
Family
ID=86996914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/142250 WO2023123003A1 (zh) | 2021-12-29 | 2021-12-29 | 机器视觉检测方法、其检测装置及其检测系统 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240095949A1 (zh) |
EP (1) | EP4286789A1 (zh) |
JP (1) | JP2024508331A (zh) |
KR (1) | KR20230134597A (zh) |
CN (1) | CN117203486A (zh) |
WO (1) | WO2023123003A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117094964A (zh) * | 2023-08-17 | 2023-11-21 | Chint Group R&D Center (Shanghai) Co., Ltd. | Battery cell spacing detection method and apparatus, computer device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2495525A1 (de) * | 2011-03-01 | 2012-09-05 | SmartRay GmbH | Optical inspection method using intensity profile |
DE102014016087A1 (de) * | 2014-11-03 | 2014-12-24 | In-Situ Gmbh | Three-dimensional optical detection of object surfaces |
CN110530278A (zh) * | 2019-10-09 | 2019-12-03 | Yi Si Si (Hangzhou) Technology Co., Ltd. | Method for measuring gap and flush deviation using multi-line structured light |
CN111630342A (zh) * | 2018-08-29 | 2020-09-04 | Shenzhen A&E Intelligent Technology Institute Co., Ltd. | Gap detection method and system for a vision welding system |
CN111901499A (zh) * | 2020-07-17 | 2020-11-06 | Qingdao Jukanect Technology Co., Ltd. | Method and device for calculating the actual distance of pixels in a video image |
2021
- 2021-12-29 WO PCT/CN2021/142250 patent/WO2023123003A1/zh active Application Filing
- 2021-12-29 KR KR1020237029433A patent/KR20230134597A/ko unknown
- 2021-12-29 CN CN202180097091.3A patent/CN117203486A/zh active Pending
- 2021-12-29 EP EP21969381.9A patent/EP4286789A1/en active Pending
- 2021-12-29 JP JP2023553313A patent/JP2024508331A/ja active Pending
2023
- 2023-11-30 US US18/524,990 patent/US20240095949A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240095949A1 (en) | 2024-03-21 |
KR20230134597A (ko) | 2023-09-21 |
CN117203486A (zh) | 2023-12-08 |
JP2024508331A (ja) | 2024-02-26 |
EP4286789A1 (en) | 2023-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108240793A (zh) | Object size measurement method, apparatus and system | |
CN107121093A (zh) | Gear measurement device and measurement method based on active vision | |
CN107907063B (zh) | Steel strip punching machining detection system and method based on visual measurement | |
KR102056076B1 (ko) | Weld bead vision inspection apparatus and weld defect inspection method | |
CN102735138B (zh) | Measuring structure and method for determining at least the crimp height of a wire crimp body of a crimp contact | |
CN107869954B (zh) | Binocular vision volume and weight measurement system and implementation method thereof | |
CN107703513B (zh) | Non-contact method for detecting the relative position of an overhead contact line based on image processing | |
US20240095949A1 (en) | Machine vision detection method, detection apparatus and detection system thereof | |
JP5971028B2 (ja) | Rotation angle measurement device and method | |
CN102889864A (zh) | Tower-shape detection system for objects with curled edge portions and detection method thereof | |
CN110570412A (zh) | Visual judgment system for part errors | |
CN110470247B (zh) | Detection device and method for coaxiality of inner and outer circular surfaces of a part | |
CN111397529A (zh) | Complex surface shape detection method based on binocular vision structured light | |
CN109829897B (zh) | Gear burr detection method and high-precision gear vision measurement system | |
CN209640238U (zh) | Machine vision system for curved-surface appearance inspection | |
CN107462187B (zh) | Method and device for determining the spot center in ceramic ferrule coaxiality detection | |
JP4549931B2 (ja) | Mixing vane inspection method and inspection device | |
WO2023097491A1 (zh) | Machine vision detection method, detection apparatus and detection system thereof | |
CN111435075A (zh) | Computer vision measurement system | |
CN211697564U (zh) | Optical filter inspection equipment | |
CN114322780A (zh) | Method for online monitoring of mounting repeatability accuracy | |
CN211042118U (zh) | Three-dimensional detection system | |
CN116907360B (zh) | RO membrane adhesive line width detection system and method based on machine vision | |
JP5912666B2 (ja) | Measurement device and processing method thereof | |
CN110907470A (zh) | Optical filter inspection equipment and optical filter inspection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21969381; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 20237029433; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 21969381.9; Country of ref document: EP; Ref document number: 1020237029433; Country of ref document: KR; Ref document number: 2021969381; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2023553313; Country of ref document: JP |
| ENP | Entry into the national phase | Ref document number: 2021969381; Country of ref document: EP; Effective date: 20230829 |
| WWE | Wipo information: entry into national phase | Ref document number: 202180097091.3; Country of ref document: CN |