CN116228636A - Image processing method, image processing system and related equipment - Google Patents


Info

Publication number
CN116228636A
CN116228636A
Authority
CN
China
Prior art keywords
image
detected
image processing
images
light source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211585630.0A
Other languages
Chinese (zh)
Inventor
邱林飞
曾宪钎
夏雨
高建光
周兵兵
方上海
徐纪超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuxiang Precision Industrial Kunshan Co Ltd
Original Assignee
Fuxiang Precision Industrial Kunshan Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuxiang Precision Industrial Kunshan Co Ltd filed Critical Fuxiang Precision Industrial Kunshan Co Ltd
Priority to CN202211585630.0A
Publication of CN116228636A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0006 Industrial image inspection using a design-rule based approach
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25B TOOLS OR BENCH DEVICES NOT OTHERWISE PROVIDED FOR, FOR FASTENING, CONNECTING, DISENGAGING OR HOLDING
    • B25B11/00 Work holders not covered by any preceding group in the subclass, e.g. magnetic work holders, vacuum work holders
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25H WORKSHOP EQUIPMENT, e.g. FOR MARKING-OUT WORK; STORAGE MEANS FOR WORKSHOPS
    • B25H1/00 Work benches; Portable stands or supports for positioning portable tools or work to be operated on thereby
    • B25H1/14 Work benches; Portable stands or supports for positioning portable tools or work to be operated on thereby with provision for adjusting the bench top
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application discloses an image processing method, an image processing system and related equipment, and relates to the technical field of image processing. The processing method comprises the following steps: acquiring a first image of an object to be detected; processing the first image into a plurality of second images with different light source effects; processing the plurality of second images into a plurality of corresponding third images in a preset format; identifying a target region of each third image; and determining, according to preset information, whether the target region has a defect. After the electronic equipment obtains the first image of the object to be detected, it processes the first image into a plurality of second images with different light source effects, so that images of the object under the different light source effects can be obtained more quickly and the object can be checked for various kinds of defects, which improves the accuracy of defect detection. An image processing system and related equipment are further provided.

Description

Image processing method, image processing system and related equipment
Technical Field
The present application relates to the field of image processing technology, and more particularly, to an image processing method, an image processing system, an electronic device, and a storage medium.
Background
Currently, in industrial vision, a multi-task workstation built around an area-array camera or a line-scan camera can be used to measure and inspect products. Specifically, the image acquisition station of the multi-task workstation comprises an area-array or line-scan camera and light sources, where each light source corresponds to one camera, so that after a camera captures a picture of an object, the light source effect image of its associated light source can be separated from that picture. If multiple light source effect images are needed, multiple cameras must correspond to the multiple light sources one by one, which occupies equipment floor space, raises equipment cost and lengthens the waiting time. As a result, the image acquisition time of the multi-task workstation is long.
The image acquisition station captures the images of the product under the various light sources and then sends them to the processing and analysis station. When many light source effect images are needed, image acquisition takes a long time and proceeds slowly, which can affect the processing and analysis of the images and make the results inaccurate. That is, because the image acquisition station acquires too many images and spends too much time acquiring them, the inspection results are not accurate enough and the acquisition speed drops, which may affect the overall cycle time (CT) of the multi-task workstation equipment.
Disclosure of Invention
In view of the above, the embodiments of the present application provide an image processing method, an image processing system, an electronic device, and a storage medium, which can obtain images of various light source effects of an object to be detected more quickly, and improve accuracy of detecting defects of the object to be detected.
A first aspect of the present application provides an image processing method, including: acquiring a first image of an object to be detected; processing the single first image into a plurality of second images with different light source effects; processing the plurality of second images into a plurality of corresponding third images in a preset format; identifying a target region of each third image; and determining, according to preset information, whether the target region has a defect.
Thus, after the electronic equipment acquires the first image of the object to be detected, it processes that one image into a plurality of second images with different light source effects, so the images of the object under different light source effects are obtained more quickly, and different defects can be computed from the different second images. Each second image is then processed further: it is converted into a third image in the preset format, the region of the third image where a defect may occur is identified, and whether that target region actually has a defect is determined according to the preset information. In this way the object to be detected is checked for multiple kinds of defects, and the accuracy of defect detection is improved.
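The claimed steps can be sketched as a small pure-Python pipeline. This is only an illustration under the assumption, made explicit later in the description, that rows of the first image are interleaved per light source; the function names, the gradient-based "preset format" and the thresholds are all hypothetical, not the patented implementation:

```python
def split_light_sources(first_image, n):
    """Split a row-interleaved first image (a list of pixel rows) into
    n second images, one per light-source effect."""
    return [first_image[i::n] for i in range(n)]

def to_preset_format(second_image):
    """Third image in a preset format: here an X-direction gradient."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in second_image]

def find_target_region(third_image, thresh=50):
    """Target region: coordinates whose gradient exceeds a threshold."""
    return [(y, x) for y, row in enumerate(third_image)
            for x, v in enumerate(row) if v > thresh]

def has_defect(region, min_pixels=4):
    """Preset information stand-in: a defect covers at least min_pixels."""
    return len(region) >= min_pixels
```

A synthetic first image with a bright patch visible only under one light source is split into three second images, and only the corresponding third image flags a defect.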
As an optional implementation manner of the first aspect, the acquiring a first image of the object to be detected includes: acquiring continuous multi-row or continuous multi-column line-scan images of the object to be detected; and composing the first image from those continuous multi-row or multi-column line-scan images.
In this way, the first image of the object to be detected can be composed of continuous multi-row or multi-column line-scan images.
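The composition step above can be sketched as follows; each acquisition is assumed to be one line of pixels, and the helper names are illustrative:

```python
def compose_rows(scan_lines):
    """Row-wise composition: each line-scan acquisition becomes one
    row of the first image, stacked in acquisition order."""
    return [list(line) for line in scan_lines]

def compose_columns(scan_lines):
    """Column-wise composition: each acquisition becomes one column."""
    return [list(col) for col in zip(*scan_lines)]
```

Whether rows or columns are used depends on the scan direction of the line-scan camera relative to the object's motion.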
As an optional implementation manner of the first aspect, the first image has a first resolution and a first angle, or a second resolution and a second angle, wherein the first resolution is higher than the second resolution and the first angle is larger than the second angle.
Thus, the electronic device can acquire a first image with the first resolution and first angle, and can also acquire a first image with the second resolution and second angle. The electronic device detects most defects of the object to be detected using the image at the first resolution and first angle, and uses the image at the second resolution and second angle for supplementary detection.
A second aspect of the present application provides an image processing system, the image processing system including a plurality of light sources, an image acquisition device, and an image processing device, the plurality of light sources being configured to emit a plurality of light beams to an object to be detected; the image acquisition device comprises a channel for receiving reflected light of the light beams, wherein the channel forms different angles with a plurality of the light beams so as to form different light source effects; the image acquisition device is used for acquiring a first image of the object to be detected; the image processing device is used for processing one first image into a plurality of second images with different light source effects; processing the plurality of second images into a plurality of corresponding third images in a preset format; identifying a target region of the third image; and determining whether the target area has defects according to preset information.
Thus, when the channel forms several different angles with the light beams, several different light source effects are produced. The light sources are strobed in sequence, so the different light source effects appear in the first image in an orderly, interleaved arrangement; the image acquisition device collects this first image of the object to be detected and sends it to the image processing device, which can then split it into a plurality of second images with different light source effects. Compared with a multi-task workstation in which each light source corresponds to one camera in order to obtain images with various light source effects, the image processing system of the embodiment of the application needs only one image acquisition device to obtain the second images of the object under different light source effects, so the cost is lower, the equipment occupies less space, and the second images are obtained faster. Different defects of the object can be computed from the different second images; each second image is then processed further into a third image in the set format, the possible defect region of the third image is identified, and whether that target region has a defect is determined according to the preset information, so that the object is checked for multiple kinds of defects and the accuracy of defect detection is improved.
As an optional implementation manner of the second aspect, the image processing apparatus is further configured to acquire continuous multi-row or continuous multi-column line-scan images of the object to be detected, and to compose the first image from those continuous multi-row or multi-column line-scan images.
In this way, the image processing apparatus can obtain a first image composed of continuous multi-row or multi-column line-scan images.
As an optional implementation manner of the second aspect, the first image acquired by the image processing apparatus has a first resolution and a first angle, or a second resolution and a second angle, where the first resolution is higher than the second resolution and the first angle is larger than the second angle.
Thus, the image processing apparatus can acquire a first image with the first resolution and first angle, and can also acquire a first image with the second resolution and second angle. Most defects of the object to be detected are detected using the image at the first resolution and first angle, while the image at the second resolution and second angle is used for supplementary detection.
As an optional implementation manner of the second aspect, the image processing system further includes a moving unit, where the moving unit is configured to control the object to be detected to perform multi-angle rotation.
As an optional implementation manner of the second aspect, the image processing system further includes a display terminal, where the display terminal is configured to display the first image and whether the target area has a defect.
A third aspect of the present application provides an electronic device, including a processor and a memory, where the memory stores a computer program, and the computer program, when executed by the processor, implements the image processing method described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method described above.
The technical effects obtained by the third aspect and the fourth aspect are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
The beneficial effects brought by the technical scheme provided in this application include at least the following: after the electronic equipment acquires the first image of the object to be detected, it processes the first image into a plurality of second images with different light source effects, so the images of the object under different light source effects are obtained more quickly, and different defects can be computed from the different second images. Each second image is then processed further into a third image in the set format, the possible defect region of the third image is identified, and whether that target region has a defect is determined according to the preset information, so that the object is checked for multiple kinds of defects and the accuracy of defect detection is improved.
Drawings
FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an application scenario of an image processing system of an embodiment of the present application;
FIG. 3 is a schematic illustration of an application scenario of an image processing system of an embodiment of the present application;
FIG. 4 is a schematic illustration of an application scenario of an image processing system of an embodiment of the present application;
FIG. 5 is a schematic illustration of an application scenario of an image processing system of an embodiment of the present application;
FIG. 6 is a schematic diagram of a mobile device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an object to be detected according to an embodiment of the present application;
FIG. 8 is a flow chart of acquiring a first image by the image processing system of the present embodiment;
FIG. 9 is a schematic diagram of an image processing system according to an embodiment of the present application;
fig. 10 is a schematic diagram of an application scenario of a display terminal according to an embodiment of the present application;
FIG. 11 is a schematic illustration of an application scenario of an image processing system of an embodiment of the present application;
fig. 12 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The related art will be briefly described below.
Currently, in industrial vision, a multi-task workstation built around an area-array camera or a line-scan camera can be used to measure and inspect an object to be detected. Specifically, the multi-task workstation can comprise an image acquisition station and a processing and analysis station. The image acquisition station comprises an area-array or line-scan camera and light sources, with each light source corresponding to one camera; after a light source emits a beam toward the object, the light is reflected from the object to the camera, so that once the camera captures a picture of the object, the light source effect diagram of that light source can be separated from the picture. If multiple light source effect images are needed, multiple cameras must correspond to the multiple light sources one by one, which occupies equipment floor space, raises equipment cost and lengthens the waiting time. As a result, the image acquisition time of the multi-task workstation is long.
The image acquisition station captures the images of the object under the various light sources and sends them to the processing and analysis station, which processes and analyses the images and measures and inspects the object according to the results. When many light source effect images are needed, image acquisition takes a long time and proceeds slowly, which can affect the processing and analysis of the images and make the results inaccurate. That is, because the image acquisition station acquires too many images and spends too much time acquiring them, the inspection results are not accurate enough and the acquisition speed drops, which may affect the overall cycle time (CT) of the multi-task workstation equipment.
Therefore, the embodiments of the present application provide an image processing method, an image processing system, an electronic device, and a storage medium, which can obtain images of various light source effects of an object to be detected more quickly, and improve the accuracy of detecting defects of the object to be detected.
Referring to fig. 1, the present application provides an image processing system 1000, where the image processing system 1000 includes a light source 100, an image capturing device 200 and an image processing device 300. In response to an instruction to start detection, the image processing device 300 controls the light source to emit a light beam and controls the image capturing device 200 to capture the first image.
Referring to fig. 2, the light source 100 is configured to emit a plurality of light beams toward an object 22 to be detected; the beams irradiate the object 22 and form reflected light, which enters the image capturing device 200. The light source 100 may include a light source controller that makes the light source emit a beam or stop emitting it. The object to be detected 22 includes various products to be inspected. The image capturing device 200 includes a channel 21; the reflected light enters the image capturing device 200 after passing through the channel 21, and the channel 21 forms an included angle α with the beam emitted by the light source 100. The number of light sources 100 may be 2 or more, and the angle α between each light source's beam and the channel 21 may differ. When the channel 21 forms several different angles with the beams, several different illumination effects are produced: after the image capturing device 200 captures a first image of the object 22, it sends the image to the image processing device 300, which can process it into a plurality of second images with different light source effects. The effect image of each light source reveals different defects of the object 22; for example, whether dust is present on the object can be computed from the second image of one light source effect, and other defects can be computed from the second image of another. The different illumination arrangements fall into three major types: dark field, intermediate field and bright field.
A dark field is an included angle at which the reflected light does not enter the image capturing device 200. An intermediate field is an included angle at which part of the reflected light enters the image capturing device 200. A bright field is an included angle at which substantially all of the reflected light enters the image capturing device 200.
It will be appreciated that when the channel 21 forms a plurality of different angles with the plurality of light beams, a plurality of different light source irradiation effects are formed, and after the image acquisition device 200 acquires a first image of the object 22 to be detected, the first image is sent to the image processing device 300, and the image processing device 300 can process the first image into a plurality of second images with different light source effects. Compared with a multi-task station in which each light source corresponds to one camera so as to acquire images with multiple light source effects, the image processing system 1000 in the embodiment of the present application can acquire the second images with multiple different light source effects of the object to be detected 22 only by using one image acquisition device 200, and has the advantages of lower cost, smaller equipment occupation space and faster speed of acquiring the second images with multiple different light source effects of the object to be detected 22.
In one example, fig. 3 is an application scenario diagram of an image processing system 1000, the image processing system 1000 having 6 light sources 100, a first light source 110, a second light source 120, a third light source 130, a fourth light source 140, a fifth light source 150, and a sixth light source 160, respectively. The angle formed by the light beam emitted by the fourth light source 140 and the channel of the image acquisition device 200 for receiving the reflected light is about 90 degrees, so as to form a bright field. The angles formed by the light beams emitted from the third light source 130 and the fifth light source 150 and the channel of the image pickup device 200 receiving the emitted light are about 75 ° and about 105 °, respectively, to form an intermediate field. The angles formed by the light beams emitted from the first, second and sixth light sources 110, 120 and 160 and the channel of the image capturing device 200 receiving the reflected light are about 0 °, about 60 ° and about 120 °, respectively, to form a dark field.
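As an illustration only, the three field types could be distinguished by the angle between the beam and the channel. The thresholds below are guesses inferred from the example angles in this paragraph (90° bright; about 75°/105° intermediate; about 0°, 60° and 120° dark), not values given by the application:

```python
def classify_field(angle_deg):
    """Rough field classification by the beam/channel included angle.
    Thresholds are illustrative, inferred from the example in the text."""
    deviation = abs(angle_deg - 90.0)  # distance from normal incidence
    if deviation <= 5:
        return "bright"
    if deviation <= 20:
        return "intermediate"
    return "dark"
```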
It will be appreciated that the second images with different light source effects make it possible to observe different defects of the object 22 to be detected; for example, dust on the object 22 may be easier to observe in the second image formed by the beam of the first light source 110 than in the other second images.
The image capturing device 200 may include a line-scan camera, i.e. a linear charge-coupled device (Charge Coupled Device, CCD); the resolution of the linear CCD can vary, for example 16k or 8k. The line-scan camera acquires continuous multi-row or multi-column line-scan images of the object 22 to be detected and sends them to the image processing device 300, which composes the first image from the acquired line-scan images.
The parameters related to the light source 100 and the image capturing device 200 may be set as follows:
First, the resolution of the image capturing device 200 is set based on the resolution of the human eye. The derivation is as follows. The size of the object to be detected is L × W = 312 mm × 220 mm, where L is the length of the object and W its width. The minimum defect area on the object is S = R² = 0.14 mm × 0.14 mm ≈ 0.02 mm². The smallest detail resolvable by the human eye is about 0.07 mm. Based on human-eye imaging, the line-scan camera needs 3 to 5 pixels per feature. Thus, the resolution of the image capturing device 200 can be set to 0.07 mm / 3 ≈ 22 µm/pixel.
In one example, following the derivation above, the resolution of the image capturing device 200 is set to 312 mm / 14117 pixels ≈ 22.1 µm/pixel. The field of view (Field of View, FOV) of the image capturing device 200 is set to FOV = 400 mm, which is greater than 312 mm. The optical magnification of the image capturing device 200 may be set to 81.9/FOV ≈ 0.204×. A lens catalogue is then searched with these resolution, FOV and magnification parameters, and the lens closest to them is selected. (The table of lens parameters appears only as an image in the original document.)
The lens table lists a working distance WD of 663.3 mm at an optical magnification of 0.26×; scaled to the selected magnification of 0.204×, the working distance becomes WD = 663.3 × (0.26/0.204) ≈ 845 mm.
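The arithmetic of this derivation can be checked directly; the values are taken from the text and the variable names are illustrative:

```python
# Minimum defect on the object: a square of side 0.14 mm.
defect_side_mm = 0.14
defect_area_mm2 = defect_side_mm ** 2            # S = R^2 ≈ 0.02 mm²

# Human-eye basis: smallest resolvable detail, sampled with 3–5 pixels.
eye_detail_mm = 0.07
pixels_per_feature = 3
camera_resolution_mm = eye_detail_mm / pixels_per_feature  # ≈ 0.023 mm/pixel

# Per-pixel size from object length and sensor pixel count.
object_length_mm = 312
sensor_pixels = 14117
per_pixel_um = object_length_mm / sensor_pixels * 1000     # ≈ 22.1 µm/pixel

# Working distance scaled from the catalogue magnification to 0.204×.
wd_at_0_26x_mm = 663.3
selected_magnification = 0.204
wd_mm = wd_at_0_26x_mm * (0.26 / selected_magnification)   # ≈ 845 mm
```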
Next, the size of the light-emitting surface of the light source 100 is set. There are two cases. In the first case, the surface of the object 22 to be detected is rough; because a beam striking a rough surface readily produces scattered light, direct specular reflection is eliminated to some extent and the illumination behaves as a uniform source, so the light-emitting surface of the light source 100 only needs to be slightly larger than the object. Referring to fig. 4, in the second case the surface of the object is smooth; light leaves the A' and B' ends of the light-emitting surface A'B' of the light source 100, reaches the object 22 and then enters the lens 41 of the image capturing device 200. In this case the size of the light-emitting surface A'B' is:
A'B' = (H/WD + 1) × FOV
[(700 − 200)/845 + 1] × 400 ≤ A'B' ≤ [(700 + 200)/845 + 1] × 400
636 mm ≤ A'B' ≤ 826 mm
where A'B' is the size of the light-emitting surface of the light source, in mm; H is the distance from the light-emitting surface A'B' to the object 22 to be detected, a measured value, in mm; WD is the working distance, that is, the distance from the lens 41 to the object 22 to be detected, here 845 mm; and FOV is the field of view of the lens 41 of the image capturing device 200, here 400 mm.
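The A'B' bounds can be reproduced numerically; the formula, the 845 mm working distance, the 400 mm FOV, and the 700 ± 200 mm spread on the measured distance H are all taken from the passage above.

```python
def luminous_surface_mm(h_mm, wd_mm=845.0, fov_mm=400.0):
    """A'B' = (H/WD + 1) * FOV, the light-emitting-surface size formula."""
    return (h_mm / wd_mm + 1.0) * fov_mm

lower = luminous_surface_mm(700 - 200)  # H at its minimum measured value
upper = luminous_surface_mm(700 + 200)  # H at its maximum measured value
print(lower, upper)  # about 636.7 mm and 826.0 mm
```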
It can be understood that the inspection standard for some objects is whether a defect is visible to the human eye: if a defect can be observed, the object is a reject. The embodiments of the present application therefore set the relevant parameters of the light source 100 and the image capturing device 200 based on the resolution of the human eye, simulating human visual inspection of defects while keeping the production cost under reasonable control.
The light source 100 includes a light source controller that can switch the plurality of light sources on in sequence while the image capturing device 200 captures the first image, so that the images produced under the different light sources are arranged sequentially, at intervals, within the first image.
The image processing apparatus 300 is an electronic device such as a personal computer, a mobile phone, or a server. After the image processing apparatus 300 acquires the first image of the object to be detected from the image capturing device 200, it splits the first image, using image processing software, into a plurality of second images with different light source effects, and extracts feature information from each second image to form third images in a preset format. For example, the image processing software Halcon may be used to extract different feature information from a second image to form third images in preset formats such as a glossiness image, an X-direction gradient image, a Y-direction gradient image, and a dust background image. A correlation algorithm is then used to identify target areas of the third images, a target area being an area where a defect may exist. A defect is a failure to meet the standard for the object to be detected; for example, if calculation shows that a certain region of the third image does not meet that standard, the region is called a defective region. Finally, each possibly defective area is compared with preset information to determine whether it is in fact defective. The preset information comprises various information about the standard object to be detected, such as its length, width, gray level, and curvature, and may be obtained from an upper computer or stored in advance in the image processing apparatus 300.
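Halcon is a commercial package, so as a hedged illustration of the feature-extraction step, the sketch below computes X- and Y-direction gradient images, two of the preset formats named above, using plain forward differences in pure Python; the tiny test image is invented for illustration and the difference scheme is an assumption, not the document's algorithm.

```python
# Minimal sketch: turn one "second image" into two "third images"
# (X-direction and Y-direction gradient images) by forward differences.

def x_gradient(img):
    # img: list of rows of grayscale values; output has one fewer column
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

def y_gradient(img):
    # output has one fewer row
    return [[img[y + 1][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img) - 1)]

second_image = [[10, 10, 20],
                [10, 30, 20],
                [50, 30, 20]]
gx = x_gradient(second_image)  # third image, X-direction gradient
gy = y_gradient(second_image)  # third image, Y-direction gradient
```

A glossiness or dust-background image would be derived analogously, each feature extractor producing one third image per second image.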
It can be appreciated that after the image processing apparatus 300 obtains the first image of the object to be detected from the image capturing device 200, it uses image processing software to split the first image into a plurality of second images with different light source effects, so that different defects of the object can be computed from the different second images. Each second image is then further processed into a third image in the preset format, the possibly defective areas of the third image are identified, and whether a defect exists in each target area is determined from the preset information. In this way multiple kinds of defects of the object 22 to be detected can be detected, and the accuracy of defect detection is improved.
As an alternative embodiment, referring to fig. 3, the image processing system 1000 further includes a first bracket 31 configured to fix the plurality of light sources 100 and to adjust the angle between the emitted beam of each light source 100 and the channel 21 through which the image capturing device 200 receives the reflected light. By adjusting these angles, the plurality of light beams can produce different kinds of light source effects.
As an alternative embodiment, the image processing system 1000 further includes a second bracket 32. The number of image capturing devices 200 may be plural, and the second bracket 32 is configured to support the image capturing devices 200 and to adjust the height of each image capturing device 200 relative to the object 22 to be detected, so that the scan lines emitted by different image capturing devices 200 form different angles with the object 22 to be detected.
For example, the image processing system 1000 may include a first image capturing device 211 and a second image capturing device 212, the first image capturing device 211 being configured to acquire a first image of the object 22 to be detected at a first resolution and a first angle, and the second image capturing device 212 being configured to acquire a first image at a second resolution and a second angle. The first image capturing device 211 may be a line-scan CCD with a resolution of 16K, and the second image capturing device 212 a line-scan CCD with a resolution of 8K. As illustrated in fig. 5, the scan line emitted by the first image capturing device 211 forms an included angle β with the object 22 to be detected, which is the first angle; the scan line emitted by the second image capturing device 212 forms an included angle γ, which is the second angle. The included angle β is greater than the included angle γ, that is, the first angle is higher than the second angle.
It can be appreciated that when there are a plurality of image capturing devices 200, the image processing system 1000 may use the second bracket 32 to adjust their heights relative to the object 22 to be detected so that the high-resolution image capturing device 200 sits higher than the low-resolution one. The high-resolution image capturing device 211 scans the object 22 to be detected at a high angle and can detect and measure most of its defects; the low-resolution image capturing device 200 scans at a low angle and provides complementary detection and measurement.
As an alternative embodiment, referring again to fig. 3, the image processing system 1000 further includes a moving device 400. In response to an operation instruction to start detecting the object 22 to be detected, the image processing apparatus 300 controls the moving device 400 to move to a preset position. The moving device 400 may be used to carry the object 22 to be detected and to move it through multiple angles.
Referring to fig. 6, the moving device 400 may include a slide rail 410, a third bracket 420, and a platform 430. The third bracket 420 is slidably connected to the slide rail 410 and rotatably connected to the platform 430, for example by a hinge mechanism, so that the platform 430 can rotate from the +Z axis to the -Z axis or back. The platform 430 carries the object 22 to be detected; for example, a fixing member on the platform 430 can hold the object in place. In another possible implementation, the platform 430 may include a magnetic material; when the object 22 to be detected also contains magnetic material, the two attract each other magnetically, so that the object is held firmly on the platform 430. Rotation of the platform 430 with the third bracket 420 rotates the object 22 to be detected, and sliding of the third bracket 420 along the slide rail 410 translates it. That is, the moving device 400 can translate or rotate the object 22 to be detected, or do both simultaneously. To allow the object 22 to be detected to rotate through a full 360°, the platform 430 is further provided with a clamp; once the clamp grips the object 22 to be detected, the platform 430 can rotate it through 0 to 360°.
As an alternative embodiment, the image processing apparatus 300 controls the moving device 400 to move to the preset position in response to the operation instruction to start detecting the object 22 to be detected, and the light source 100 emits its beam toward the object 22 to be detected upon receiving notice that the moving device 400 is at the preset position. Likewise, when the image capturing device 200 receives the position information indicating that the moving device 400 is at the preset position, it acquires the first image of the object 22 to be detected according to that position information. The preset position may be set by a developer.
The preset position may be a position from which the image capturing device 200 can obtain the first image of a given surface of the object 22 to be detected. For example, as shown in fig. 7, if the object 22 to be detected is a cuboid, the preset position may be one from which the image capturing device 200 can obtain the first image of its A, B, C, D, E, or F surface.
Referring to fig. 8, fig. 8 is a flowchart of the image processing system 1000 acquiring the first images of the cuboid. Positions II to III are preset positions, and the first image capturing device 200 of the image processing system 1000 performs a longitudinal line scan. First, the image processing apparatus controls the moving device 400 to position I, where the object 22 to be detected is placed on the moving device 400 with its A-plane facing the image capturing device 200. The moving device 400 then moves to position II, where one edge a1 of the A-plane just enters the scan of the image capturing device 200. The moving device 400 continues to position III, where the other edge a2 of the A-plane is just scanned. That is, as the moving device 400 travels from position II to position III, the A-plane is completely scanned by the image capturing device 200. Meanwhile, during this travel the light source 100, upon receiving the trigger signal of the image capturing device 200, emits its beam toward the A-plane so that the image capturing device 200 captures the first image of the A-plane. The moving device 400 then continues to position IV, rotates the object 22 to be detected 180° about the R axis so that its C-plane faces the image capturing device 200, and travels again from position I to position IV; the image capturing device 200 captures the first image of the C-plane as the moving device 400 passes from position II to position III.
Then the moving device 400 rotates the object 22 to be detected 90° about the R axis so that its F-plane faces the image capturing device 200 and travels again from position I to position IV; the image capturing device 200 captures the first image of the F-plane as the moving device passes from position II to position III. The moving device 400 next rotates the object 180° about the R axis so that its E-plane faces the image capturing device 200 and again travels from position I to position IV, with the first image of the E-plane captured between positions II and III. Proceeding in the same way, the image capturing device 200 can ultimately capture the A, B, C, D, E, and F surfaces of the object 22 to be detected.
As an alternative embodiment, the light source controller of the light source 100 may have a time-division strobe function and can switch among the plurality of light sources 100 in sequence at a set frequency. In an example, referring again to fig. 7, after the moving device 400 reaches position II, the image capturing device 200 starts according to the position information of the moving device, generates a trigger signal, and sends it to the light source controller; while the moving device 400 travels from position II to position III, the light source controller drives the 6 light sources 100 to emit beams toward the object to be detected in turn. The image capturing device 200 thus captures a first image of the A-plane, which a subsequent operation processes into second images with 6 light source effects. These second images may be divided into second images with mid-field, bright-field, and dark-field effects. Proceeding likewise for the A, B, C, D, E, and F surfaces of the object to be detected, second images with the 6 light source effects can be obtained for each surface.
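Because the six light sources are strobed in turn while the line camera scans, consecutive rows of the first image were lit by different sources, and the first image can later be split into six second images by deinterleaving its rows. The sketch below illustrates that idea; the simple round-robin row ordering is an assumption, since the exact interleaving pattern is not spelled out in the text.

```python
# Deinterleave a strobed line-scan image into one second image per light source.
def split_by_light_source(first_image, n_sources=6):
    second_images = [[] for _ in range(n_sources)]
    for row_index, row in enumerate(first_image):
        # Assumed round-robin strobing: row i was lit by source i mod n_sources.
        second_images[row_index % n_sources].append(row)
    return second_images

first_image = [[i] * 4 for i in range(12)]  # 12 toy scan lines of 4 pixels
seconds = split_by_light_source(first_image)
# seconds[0] holds rows 0 and 6, seconds[1] holds rows 1 and 7, and so on
```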
It can be appreciated that the moving device 400 moves the object to be detected through multiple angles while the light source 100 emits beams toward it according to the position information of the moving device 400. That is, the moving device 400, the light source 100, and the image capturing device 200 of the image processing system cooperate to simulate human observation of the object from multiple angles, obtaining images of the object at multiple angles and speeding up image acquisition.
As an alternative embodiment, referring to fig. 9, the image processing system 1000 further includes a display terminal 500. After the image processing apparatus 300 processes and analyzes the first image of the object 22 to be detected, it obtains information on whether the object 22 to be detected has any of multiple kinds of defects and on the defect types, and can send the first image of the object together with this information to the display terminal 500.
The display terminal 500 displays an interface as shown in fig. 10. The interface contains a high-angle display area 81, a low-angle display area 82, and a result information display area 83 for the detection and measurement results of the object to be detected. The high-angle display area 81 shows the image from the high-angle line-camera scan as processed by the image processing apparatus 300, and the low-angle display area 82 shows the image from the low-angle line-camera scan as processed by the image processing apparatus 300. The high-angle display area 81 is further divided into a first-face display area 81a and a second-face display area 81b of the object to be detected; likewise, the low-angle display area 82 is divided into a first-face display area 82a and a second-face display area 82b. The result information display area 83 displays, for example, the defect names of the object to be detected, whether each defect exists, and the number of defects. In the example shown, the defect names are heterochromatic, sagging, and over-grinding; none of the corresponding defects exists, and the number of defects is zero. The interface may be a UI developed by mixed programming of C# combined with PLC, Halcon, and AI.
As an alternative embodiment, the image processing system 1000 may also include a programmable logic controller (Programmable Logic Controller, PLC). The PLC generates a high-frequency trigger pulse in response to the start action of the line camera, the action of the line camera beginning to acquire a first image of the object to be detected, or the action of the line camera repositioning on the object to be detected and acquiring the first image again, and sends the high-frequency trigger pulse to the light source controller.
In one example, please refer to fig. 11, which illustrates an application scenario of the image processing system 1000 of the present application. The user places the object to be detected on the moving device 400 (not shown in the figure); after responding to the operation instruction to start detecting the object 22 to be detected, the image processing apparatus 300 sends a movement instruction to the moving device 400, which moves the object to be detected to a preset position. When the line camera 200 receives the information that the object to be detected is at the preset position, it begins to acquire the first image, and the PLC 101 generates a 200 kHz high-frequency trigger pulse in response to the camera start action. The PLC 101 transmits the trigger pulse to the light source controller, which then drives the plurality of light sources 100 to emit beams toward the object to be detected. The line camera 200 then transmits the first image to the image processing apparatus 300, which processes and analyzes it to determine whether the object to be detected is defective.
The connection relationships among the PLC, the light source controller, and the line camera are shown in tables 1-3 below. Table 1 lists the output pins of the PLC and the output signal corresponding to each pin. Table 2 lists the input and output pins of the light source controller together with their corresponding input and output signals. Table 3 lists the pins of the line camera and the corresponding description of each pin.
Table 1:
(Table 1 is reproduced as an image in the original publication and is not rendered here.)
Table 2:
(Table 2 is reproduced as an image in the original publication and is not rendered here.)
table 3:
(Table 3 is reproduced as images in the original publication and is not rendered here.)
The present application also provides an image processing method applied to the image processing apparatus 300. Referring to fig. 12, the image processing apparatus 300 may perform steps S101 to S105.
S101: acquiring a first image of an object to be detected;
First, the image processing apparatus 300 may acquire a first image of the object to be detected through the image capturing device 200, which may be a line camera. Accordingly, the image processing apparatus 300 may acquire consecutive rows or consecutive columns of line-scan images of the object 22 to be detected from the line camera and compose the first image from them.
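The row-by-row assembly in S101 can be sketched as follows; `acquire_row` is a hypothetical stand-in for the line-camera driver call, not an API from the document, and the row contents are placeholders.

```python
def acquire_row(trigger_index, width=8):
    # Hypothetical placeholder for one line-camera scan line.
    return [trigger_index] * width

def acquire_first_image(n_rows):
    # The first image is the stack of consecutive scanned rows.
    return [acquire_row(i) for i in range(n_rows)]

first_image = acquire_first_image(4)
print(len(first_image), len(first_image[0]))  # 4 rows of 8 pixels
```

Column-wise composition would be the transpose of the same idea.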
Further, there may be a plurality of image capturing devices 200 with different resolutions, for example a line-scan CCD with a resolution of 16K and a line-scan CCD with a resolution of 8K. Meanwhile, by using the second bracket, the angle formed between the scan line emitted by the 16K line-scan CCD and the object 22 to be detected can be made higher than the angle formed by the 8K line-scan CCD. The image processing apparatus 300 may therefore acquire a first image having a first resolution and a first angle, or a first image having a second resolution and a second angle, where the first resolution is higher than the second resolution and the first angle is higher than the second angle.
S102: processing a first image into a plurality of second images with different light source effects;
Next, the image processing apparatus 300 processes each first image, using image processing software, into a plurality of second images with different kinds of light source effects.
S103: processing the plurality of second images into a plurality of corresponding third images in a preset format;
Having obtained the second images with different light source effects, the image processing apparatus 300 processes each of them by extracting its feature information to form a third image in a preset format. For example, the image processing software Halcon may be used to extract different feature information from a second image to form third images in preset formats such as a glossiness image, an X-direction gradient image, a Y-direction gradient image, and a dust background image.
It can be understood that after the image processing apparatus 300 obtains the first image of the object to be detected from the image capturing device 200, it uses image processing software to split the first image into a plurality of second images with different light source effects, so that different defects of the object can be computed from the different second images. Each second image is then further processed into a third image in the preset format, the possibly defective areas of the third image are identified, and whether a defect exists in each target area is determined from the preset information, thereby detecting whether the object 22 to be detected has any of multiple kinds of defects.
S104: identifying a target region of the third image;
A target area is an area where a defect may exist. After the image processing apparatus 300 processes a second image into a third image in the preset format, it uses algorithms from the Halcon software to compute the areas of that third image where defects may exist. The Halcon algorithms include various filtering, color and geometry, mathematical transformation, morphological computation and analysis, correction, classification and recognition, and shape-search algorithms. By combining these algorithms, obvious possibly defective areas of the object can be identified, such as areas whose length, width, gray level, or curvature does not meet the standard for the object to be detected, or places where the object has gaps.
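The rule-based screening just described can be illustrated with a short sketch: a region is flagged as a possible defect when a measured property falls outside the standard range for the object. The field names and limits below are illustrative assumptions, not values from the document.

```python
# Illustrative standard ranges for a region's measured properties.
STANDARD = {
    "length": (9.8, 10.2),   # mm (assumed)
    "width":  (4.9, 5.1),    # mm (assumed)
    "gray":   (90, 160),     # mean gray level (assumed)
}

def is_candidate_defect(region):
    """Flag a region whose measurements fall outside any standard range."""
    for key, (low, high) in STANDARD.items():
        if not low <= region[key] <= high:
            return True      # out of spec: possible defect region
    return False

ok = is_candidate_defect({"length": 10.0, "width": 5.0, "gray": 120})
bad = is_candidate_defect({"length": 10.5, "width": 5.0, "gray": 120})
```

In practice such simple threshold checks would be one small part of the combined Halcon pipeline named above.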
Meanwhile, a defect detection model trained by AI deep learning can also be used to identify possibly defective areas of the third image in the preset format. Running the third image through the defect detection model can not only identify the obvious possibly defective areas but also compute the relatively fine possibly defective areas of the object to be detected.
It will be appreciated that the image processing apparatus may use the Halcon algorithms to compute the obvious possibly defective areas of the object to be detected, and the defect detection model to compute the fine ones. Combining the two approaches allows the possibly defective areas of the object to be computed more comprehensively.
S105: and determining whether the target area has defects according to preset information.
Finally, the image processing apparatus 300 compares each target area with the preset information to determine whether the possibly defective area is in fact defective. The preset information comprises various information about the standard object to be detected, such as its length, width, gray level, and curvature, and may be obtained from an upper computer or stored in advance in the image processing apparatus 300. After obtaining the information on whether the object 22 to be detected has any of multiple kinds of defects and on the defect types, the image processing apparatus 300 may process the first image of the object 22 to be detected into a preset size and send it to the display terminal 500, so that the display terminal 500 displays whether the object to be detected has defects and of which types.
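The final decision step of S105 can be sketched as a comparison of each candidate region against the preset information, returning the defect types found; the thresholds and property names are illustrative assumptions, and the defect names are borrowed from the display example earlier in the document.

```python
# Illustrative preset information for the standard object.
PRESET = {"curvature_max": 0.02, "gray_range": (90, 160)}  # assumed values

def confirm_defect(candidate):
    """Compare one candidate region with the preset info; list defect types."""
    defects = []
    if candidate["curvature"] > PRESET["curvature_max"]:
        defects.append("sagging")
    low, high = PRESET["gray_range"]
    if not low <= candidate["gray"] <= high:
        defects.append("heterochromatic")
    return defects  # empty list means the candidate passed

result = confirm_defect({"curvature": 0.05, "gray": 120})
```

A real system would report these results, together with the processed image, to the display terminal 500.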
Fig. 13 is a schematic structural diagram of an electronic device 10 according to an embodiment of the present application. In one embodiment, the electronic device 10 includes a memory 11 and at least one processor 12. It will be appreciated by those skilled in the art that the configuration of the electronic device 10 shown in fig. 13 does not limit the embodiments of the present application; the electronic device 10 may also include more or less hardware or software than shown, or a different arrangement of components.
As an alternative embodiment, the electronic device 10 is a terminal capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit, a programmable gate array, a digital signal processor, and an embedded device. As an alternative embodiment, the memory 11 is used to store program code and various data. The memory 11 may include read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
As an alternative embodiment, the at least one processor 12 may comprise an integrated circuit, for example a single packaged integrated circuit, or a plurality of packaged integrated circuits with the same or different functions, including one or more microprocessors, digital processing chips, or combinations of an image processor and various control chips. The at least one processor 12 is the control unit of the electronic device 10: by running or executing the programs or modules stored in the memory 11 and calling the data stored in the memory 11, it performs the various functions of the electronic device 10 and processes data. An integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium; such a module comprises instructions for causing a computer device (which may be a personal computer, a terminal, a network device, or the like) or a processor to perform parts of the methods described in the embodiments of the present application. The memory 11 stores program code, and the at least one processor 12 may invoke the program code stored in the memory 11 to perform the related functions. In one embodiment of the present application, the memory 11 stores a plurality of instructions that are executed by the at least one processor 12 to implement the image processing method described above; the detailed steps performed by the at least one processor 12 are as described in the method embodiments and are not repeated here.
The embodiment of the application also provides a storage medium. Wherein the storage medium has stored therein computer instructions which, when executed on a computing device, cause the computing device to perform the image processing method provided by the foregoing embodiment.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and unit may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. The couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other form. Units described as separate may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units, and some or all of them may be selected according to actual needs to achieve the purpose of the embodiment. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The foregoing describes merely preferred embodiments and the technical principles applied in the present application.
It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, modifications, and substitutions may be made without departing from the scope of the invention. Therefore, although the present application has been described in detail through the above embodiments, the invention is not limited to them; it may include many other equivalent embodiments without departing from its spirit, and all of these fall within its scope.

Claims (10)

1. An image processing method, characterized in that the processing method comprises:
acquiring a first image of an object to be detected;
processing the first image into a plurality of second images with different light source effects;
processing the plurality of second images into a plurality of corresponding third images in a preset format;
identifying a target region of the third image;
and determining whether the target area has defects according to preset information.
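The pipeline of claim 1 can be sketched as a minimal, runnable example. Everything concrete here is a hypothetical assumption, not taken from the patent: the light-source effects are simulated as simple per-pixel gain changes, the "preset format" is a fixed-size grayscale array, and the defect rule compares the target region's mean intensity against an assumed preset reference.

```python
import numpy as np

# Hypothetical sketch of the claimed pipeline; gains, sizes, the crop box,
# and the defect threshold are illustrative assumptions, not from the patent.

def simulate_light_sources(first_image, gains=(0.6, 1.0, 1.4)):
    """Derive several 'second images' by simulating different light-source
    intensities as per-pixel gain adjustments of the one first image."""
    return [np.clip(first_image.astype(float) * g, 0, 255).astype(np.uint8)
            for g in gains]

def to_preset_format(image, size=(64, 64)):
    """Convert a second image to a preset format: a fixed-size grayscale
    array produced by nearest-neighbour sampling."""
    rows = np.linspace(0, image.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, size[1]).astype(int)
    return image[np.ix_(rows, cols)]

def target_region(image, box=(16, 48, 16, 48)):
    """Crop the region of interest (top, bottom, left, right)."""
    t, b, l, r = box
    return image[t:b, l:r]

def has_defect(region, preset_mean=128.0, tolerance=40.0):
    """Flag a defect when the region's mean intensity deviates from the
    preset reference by more than the tolerance."""
    return abs(float(region.mean()) - preset_mean) > tolerance

first = np.full((128, 128), 120, dtype=np.uint8)   # uniform, defect-free part
first[40:60, 40:60] = 255                          # bright blemish
seconds = simulate_light_sources(first)            # second images
thirds = [to_preset_format(s) for s in seconds]    # third images
verdicts = [has_defect(target_region(t)) for t in thirds]
print(verdicts)
```

Under these assumed parameters, the blemish is flagged under the dim and bright simulated light sources but masked under the neutral one, which illustrates why the claim derives several light-source effects from a single captured image.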
2. The image processing method according to claim 1, wherein the acquiring the first image of the object to be detected includes:
acquiring continuous multi-row or continuous multi-column linear array scanning images of the object to be detected;
and obtaining the first image by composing the continuous multi-row or continuous multi-column linear array scanning images of the object to be detected.
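The composition step of claim 2 can be sketched as stacking successive line-scan acquisitions into one first image. The `scan_rows` generator is a hypothetical stand-in for the real line-scan camera interface, which the patent does not specify.

```python
import numpy as np

def compose_first_image(scan_lines):
    """Compose the 'first image' by stacking consecutive row scans;
    column-wise scans would use np.hstack instead."""
    return np.vstack(scan_lines)

# Hypothetical stand-in for a line-scan camera: each call yields one
# 1 x 512 row as the object to be detected moves past the sensor.
def scan_rows(n_rows=256, width=512, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_rows):
        yield rng.integers(0, 256, size=(1, width), dtype=np.uint8)

first_image = compose_first_image(list(scan_rows()))
print(first_image.shape)  # (256, 512)
```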
3. The image processing method according to claim 1, wherein the first image has a first resolution and a first angle, or a second resolution and a second angle, wherein the first resolution is higher than the second resolution and the first angle is larger than the second angle.
4. An image processing system, characterized in that the image processing system comprises a plurality of light sources, an image acquisition device and an image processing device,
the light sources are used for emitting a plurality of light beams to the object to be detected;
the image acquisition device comprises a channel for receiving reflected light of the light beams, wherein the channel forms different angles with the plurality of light beams so as to produce different light source effects;
the image acquisition device is used for acquiring a first image of the object to be detected;
the image processing device is used for processing one first image into a plurality of second images with different light source effects;
processing the plurality of second images into a plurality of corresponding third images in a preset format;
identifying a target region of the third image;
and determining whether the target area has defects according to preset information.
5. The image processing system according to claim 4, wherein the image processing device is further configured to acquire a continuous multi-row or continuous multi-column line scan image of the object to be detected;
and obtain the first image by composing the continuous multi-row or continuous multi-column linear array scanning images of the object to be detected.
6. The image processing system of claim 4, wherein the first image acquired by the image processing device has a first resolution and a first angle, or a second resolution and a second angle, wherein the first resolution is higher than the second resolution and the first angle is larger than the second angle.
7. The image processing system according to claim 4, further comprising a moving unit for rotating the object to be detected through multiple angles.
8. The image processing system of claim 4, further comprising a display terminal for displaying the first image and whether the target area is defective.
9. An electronic device comprising a processor and a memory, the memory having stored therein a computer program which, when executed by the processor, implements the image processing method of any of claims 1-3.
10. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the image processing method as claimed in any one of claims 1-3.
CN202211585630.0A 2022-12-09 2022-12-09 Image processing method, image processing system and related equipment Pending CN116228636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211585630.0A CN116228636A (en) 2022-12-09 2022-12-09 Image processing method, image processing system and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211585630.0A CN116228636A (en) 2022-12-09 2022-12-09 Image processing method, image processing system and related equipment

Publications (1)

Publication Number Publication Date
CN116228636A true CN116228636A (en) 2023-06-06

Family

ID=86590017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211585630.0A Pending CN116228636A (en) 2022-12-09 2022-12-09 Image processing method, image processing system and related equipment

Country Status (1)

Country Link
CN (1) CN116228636A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117571723A (en) * 2024-01-16 2024-02-20 宁德时代新能源科技股份有限公司 Method and system for detecting battery welding slag

Similar Documents

Publication Publication Date Title
US6061086A (en) Apparatus and method for automated visual inspection of objects
US9410898B2 (en) Appearance inspection device, appearance inspection method, and program
US9983145B1 (en) Test probe card detection method and system thereof
EP1557876A1 (en) Probe mark reader and probe mark reading method
CN110441323B (en) Product surface polishing method and system
US20080175466A1 (en) Inspection apparatus and inspection method
JPS60219504A (en) Measuring device for height of circuit element on substrate
JP7151873B2 (en) inspection equipment
CN108445010B (en) Automatic optical detection method and device
JP5342413B2 (en) Image processing method
KR101630596B1 (en) Photographing apparatus for bottom of car and operating method thereof
CN116228636A (en) Image processing method, image processing system and related equipment
CN114813761B (en) Double-light-stroboscopic-based film pinhole and bright spot defect identification system and method
CN116256366A (en) Chip defect detection method, detection system and storage medium
KR101522312B1 (en) Inspection device for pcb product and inspecting method using the same
CN117471392B (en) Method and system for detecting probe tip, electronic equipment and storage medium
JP5336325B2 (en) Image processing method
CN211403010U (en) Foreign body positioning device for display panel
CN113763322A (en) Pin Pin coplanarity visual detection method and device
JP4581424B2 (en) Appearance inspection method and image processing apparatus
KR20210131695A (en) Data generation device and method for led panel defect detection
JP2000146787A (en) Measuring method for restriction in tensile test
KR20190108805A (en) Vision inspection apparatus and method to inspect defect of target object
US20230138331A1 (en) Motion in images used in a visual inspection process
CN211697564U (en) Optical filter detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination