CN117635597A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117635597A
CN202311783877.8A
Authority
CN
China
Prior art keywords
region
point
detection point
detection
area
Prior art date
Legal status
Pending
Application number
CN202311783877.8A
Other languages
Chinese (zh)
Inventor
王硕
董其波
Current Assignee
Suzhou Mega Technology Co Ltd
Original Assignee
Suzhou Mega Technology Co Ltd
Priority date: 2023-12-22
Filing date: 2023-12-22
Publication date: 2024-03-01
Application filed by Suzhou Mega Technology Co Ltd


Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses an image processing method and device, electronic equipment and a storage medium. The image processing method comprises the following steps: determining an object region of a measured object in an image to be measured; dividing the object region by using the gray value of each pixel point in the object region to obtain a first region whose gray values fall in a first interval and a second region whose gray values fall in a second interval; extracting a first detection point and a second detection point at corresponding positions of the first region and the second region, respectively; and determining whether the measured object has a defect by using the difference between the first detection point and the second detection point in terms of image features. By exploiting the distribution characteristics of the measured object and using detection points located in different regions for defect detection, the invention not only reduces labor cost but also improves accuracy.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Appearance defects of a chip directly affect product quality and the smooth progress of subsequent IC manufacturing steps. Therefore, after a chip completes the packaging process, an inspection step needs to be performed on it. This inspection step mainly performs appearance inspection on the packaged chip, and only chips that pass it without error can be guaranteed in quality, so chip visual inspection plays an important role in the IC manufacturing industry. For example, the Ball Grid Array (BGA) chip is one common chip type, and the appearance quality of its ball-mounting process directly affects the performance and reliability of the chip. During chip production, however, the solder balls may develop defects such as deformation and breakage. It is therefore necessary to perform defect inspection on the solder balls of a BGA chip before shipment to ensure that the BGA chip is defect-free.
At present, most factories inspect chip appearance manually. This approach is inefficient and heavily influenced by subjective factors, so it can no longer meet chip quality requirements. With the emergence of various image processing algorithms, image processing methods have been used for chip appearance inspection in the related art, but existing defect detection methods are time-consuming and computationally complex, and are therefore ill-suited to scenarios that require real-time detection.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, electronic equipment and a storage medium, so as to at least solve the above technical problems.
According to one aspect of the present invention, an image processing method is provided, including: determining an object region of a measured object in an image to be measured; dividing the object region by using the gray value of each pixel point in the object region to obtain a first region whose gray values fall in a first interval and a second region whose gray values fall in a second interval; extracting a first detection point and a second detection point at corresponding positions of the first region and the second region, respectively; and determining whether the measured object has a defect by using the difference between the first detection point and the second detection point in terms of image features.
Illustratively, determining the object region of the measured object in the image to be measured includes: determining a region of interest in the image to be measured by using a template image for the measured object and the position of the region of interest in the template image; and extracting the object region from the image to be measured by using the gray value of each pixel point of the region of interest in the image to be measured.
Illustratively, dividing the object region into a first region and a second region different from the first region using the distribution characteristics of the object to be measured includes: fitting the object region by using a contour fitting mode, and determining an object contour of the object to be measured and a center point of the object contour;
determining a first distance and a second distance from the center point by using gray values of all pixel points in the object area;
generating a first region contour of the same shape as the object contour at the first distance from the center point and a second region contour of the same shape as the object contour at the second distance from the center point, so as to determine a first region defined by the first region contour and a second region defined by the second region contour.
Illustratively, the first detection point and the second detection point are points that lie on the same extension line passing through the center point and intersect the first region contour and the second region contour, respectively.
Illustratively, the first detection point is a detection point located in the first region at the first distance from the center point, and the second detection point is a detection point located in the second region at the second distance from the center point, on the extension line defined by the center point and the first detection point.
Illustratively, the object under test comprises a solder ball on a BGA chip, in which case the object outline is circular, and the center point refers to the center of the solder ball.
Illustratively, the first region and the second region refer to any two different regions among an inner region, a central region and an outer region of the solder ball, wherein the inner region is a region whose distance from the center point is smaller than a first threshold, the outer region is a region whose distance from the center point is larger than a second threshold, and the central region is a region whose distance from the center point is larger than the first threshold and smaller than the second threshold, the first threshold being smaller than the distance from the center point to the object contour and the second threshold being larger than the distance from the center point to the object contour.
Illustratively, determining whether the object under test is defective using the difference in image characteristics between the first detection point and the second detection point includes:
determining whether the gray value difference value of the first detection point and the second detection point exceeds a difference value threshold;
and if the difference threshold is exceeded, determining that the tested object has defects.
Illustratively, determining whether the object under test is defective using the difference in image characteristics between the first detection point and the second detection point includes: determining whether the gray value change rate of the first detection point and the second detection point exceeds a change rate threshold; and if the change rate threshold is exceeded, determining that the tested object has defects.
According to an aspect of the present invention, there is provided an image processing apparatus comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the foregoing methods.
According to one aspect of the present invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the aforementioned method.
The above at least one technical scheme adopted by the embodiment of the invention can achieve the following beneficial effects:
The invention uses an image processing method to detect whether the measured object has defects, which reduces labor cost; furthermore, by using the distribution characteristics of the measured object and performing defect detection with detection points located in different regions, the accuracy is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a schematic diagram showing a BGA chip according to an exemplary embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a solder ball according to an exemplary embodiment of the present invention;
fig. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a chip outline according to an exemplary embodiment of the present invention;
fig. 5 shows a schematic configuration of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to specific embodiments of the present invention and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In embodiments of the present invention, "/" may indicate that the associated objects are in an "or" relationship; for example, A/B may represent A or B. "And/or" describes an association between objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. To facilitate description of the technical solutions of the embodiments of the present invention, words such as "first" and "second" may be used to distinguish technical features that are identical or similar in function. The terms "first", "second" and the like do not limit quantity or execution order, nor do they require that the features so distinguished be necessarily different. In embodiments of the invention, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration, and any embodiment or design described as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. The use of the words "exemplary" or "such as" is intended to present the relevant concepts in a concrete fashion to facilitate understanding.
In addition, specific details are set forth in the following description in order to provide a better understanding of the present invention. It will be understood by those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, well known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present invention.
The image processing method of the exemplary embodiments of the present invention is applicable to, but not limited to, the appearance inspection step for packaged chips. The package referred to in the present invention is a housing for mounting a semiconductor integrated circuit chip. It not only mounts, fixes, seals and protects the chip and enhances its electrothermal performance, but also serves as a bridge between the inside of the chip and external circuits: the connection points on the chip are connected by wires to the pins of the package housing, which in turn connect to other devices through traces on the printed board, so the package has a great influence on chip performance. Accordingly, a packaged chip in the exemplary embodiments of the present invention refers to a chip including the above package structure/process. Different packaging technologies correspond to different packaged chips, which may include, for example, Dual In-line Package (DIP) chips, Quad Flat Package (QFP) chips, Ball Grid Array (BGA) chips, and the like.
The image processing method of the invention can partition the packaged chip according to the distribution characteristics (such as symmetry and the like) of the packaged chip, and then determine whether the packaged chip has defects or not by utilizing the characteristic differences of the detection points in different areas. The various exemplary embodiments of the present invention will be described in detail below with reference to the drawings. For ease of understanding, the invention will be specifically described with reference to a BGA chip.
Fig. 1 is a schematic diagram illustrating a chip according to an exemplary embodiment of the present invention. As shown in fig. 1, the I/O terminals of the BGA chip are distributed under the package in the form of an array of spherical solder balls, and a plurality of such solder balls are distributed on the circuit board. The size and distribution of the solder balls differ between products, and the invention is not limited in this respect. In practice, these solder balls may have defects such as dirt or scratches, so it is necessary to detect such defects using the image processing method of the exemplary embodiments of the present invention.
In implementation, the embodiment of the invention may photograph the BGA chip from above, that is, photograph the upper surface of the BGA chip, to acquire the chip image. The photographing device may be a two-dimensional camera or a three-dimensional camera, and the invention is not limited in this respect. When the photographing device is located above the BGA chip and photographs it in a top view, each solder ball is located in the middle of the corresponding captured image. As shown in fig. 2, in the image photographed for the BGA chip, the solder ball is approximately circular and located in the middle of the image. Due to the photographing light and other factors, the peripheral part of the solder ball region appears white while the middle part appears black.
An image processing method according to an exemplary embodiment of the present invention will be described in detail below with reference to fig. 3, and fig. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present invention.
In step S310, an object region of the object to be measured in the image to be measured is determined. The image to be measured mentioned here may indicate an image taken for detecting the object to be measured. In implementation, the image to be measured may be an image photographed in real time by a photographing device or an image transmitted from an external or internal storage device. As an exemplary embodiment, the image to be measured may be an image photographed for various types of chips, for example, the image to be measured may be an image photographed for a BGA chip, in which case the object to be measured indicates a solder ball.
As an exemplary embodiment, the present invention may determine offset information of the image to be measured relative to a template image by performing template matching between the image to be measured and the template image. The template image is an image of the target to be detected, and a region of interest is calibrated in the template image, where the region of interest refers to the region to be processed. For example, in the image taken for the BGA chip of fig. 1, the region of interest refers to the solder ball region of each solder ball. The present invention can then use the offset information and the position of the region of interest in the template image to determine the region of interest (i.e., the object region mentioned later) in the image to be measured.
The above operations generally determine the position of the object region. After the region of interest is determined, the present invention can further extract the object region to be measured by using the gray values of the pixels in the region of interest. For example, as can be seen from fig. 2, the gray values of the solder ball region and the background region differ, so the present invention can extract the region whose gray values fall within a predetermined range, i.e., the solder ball region.
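For illustration only, a minimal sketch of this localization and extraction step is given below, assuming an OpenCV-based implementation; the function name locate_object_region, the gray-value range (60, 255) and the (x, y, w, h) ROI format are assumptions made for the sketch and are not taken from this disclosure.
```python
import cv2
import numpy as np

def locate_object_region(test_img, template_img, roi_in_template, gray_range=(60, 255)):
    """Return the matched ROI of the image to be measured and a binary mask of the object region."""
    # 1. Template matching yields the offset of the image to be measured
    #    relative to the template image.
    result = cv2.matchTemplate(test_img, template_img, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)  # top-left corner of the best match
    dx, dy = max_loc

    # 2. Shift the calibrated region of interest (x, y, w, h) from template
    #    coordinates into the coordinates of the image to be measured.
    x, y, w, h = roi_in_template
    roi = test_img[y + dy : y + dy + h, x + dx : x + dx + w]

    # 3. Keep only pixels whose gray values fall in the preset range, which
    #    separates the object (e.g. a solder ball) from the background.
    lo, hi = gray_range
    mask = ((roi >= lo) & (roi <= hi)).astype(np.uint8) * 255
    return roi, mask
```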
Then, the object region is processed by contour fitting to determine the object contour of the object to be measured. Contour fitting fits the object region by using the edge information of the image together with a target contour shape, thereby determining the object contour.
Taking a BGA chip as an example, when the solder ball region is determined to be circular, a solder ball contour can be fitted by using the edge information of the solder ball region, determined from the gray values, and the fitted shape (a circle). It should be noted that during the fitting process, representative information describing the solder ball region, i.e., the center and radius of the solder ball contour, can be acquired at the same time. Different fitted shapes yield different representative information. For example, when the fitted contour is a rectangle, the representative information may include the center, height, width and rotation angle of the minimum circumscribed rectangle.
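As a further sketch of such a contour-fitting step, the snippet below assumes OpenCV 4.x and reuses the binary mask from the previous sketch; the helper name fit_ball_contour is an assumption for illustration.
```python
import cv2

def fit_ball_contour(mask):
    """Fit a circle to the largest contour in the mask and return its center and radius."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ball = max(contours, key=cv2.contourArea)        # the solder-ball contour
    (cx, cy), radius = cv2.minEnclosingCircle(ball)  # circular fit: center point and radius
    # For a rectangular fit, cv2.minAreaRect(ball) would instead give the center,
    # (width, height) and rotation angle of the minimum circumscribed rectangle.
    return (cx, cy), radius
```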
In the invention, in order to detect whether the measured object has defects, the measured object needs to be partitioned by using its distribution characteristics, and then whether the measured object has defects is determined by using corresponding feature points located in different regions. Accordingly, the invention executes step S320 to divide the object region by using the gray value of each pixel point in the object region, obtaining a first region whose gray values fall in a first interval and a second region whose gray values fall in a second interval.
In particular, the partitioning process of the measured object is related to the distribution characteristics of the measured object, and in brief, the exemplary embodiment of the present invention is to partition pixels having approximately the same gray value in the measured object into one area in combination with the distribution characteristics of the measured object. That is, the gray values of the pixel points within one region are approximately the same or within the same gray value interval. As an exemplary embodiment, in the case where the object to be measured is symmetrical, the partitioning of the object to be measured may be to divide the object to be measured into sub-regions symmetrical to each other. In the case that the object to be measured includes a plurality of sub-objects, the partitioning of the object to be measured may be to divide the object to be measured into sub-object areas respectively corresponding to the plurality of sub-objects. As an exemplary embodiment, the invention may also perform scaling on the measured object according to different detection requirements, so as to obtain different sub-regions.
For example, if the shape of the object to be measured is a rectangle, the object may be divided into a plurality of sub-areas of equal area. If the shape of the object to be measured is a circle, the object may be divided, around the circle center, into a plurality of sub-areas bounded by different radii. As another example, when the shape of the object to be measured is a circle, the object may be divided into sector areas of equal size.
In the case that the shape of the object to be measured is a circle, the gray values of the pixel points on a circumference at the same distance from the circle center can be considered approximately the same. Therefore, the invention may first determine the center point of the object contour, then generate a first region contour and a second region contour of the same shape as the object contour by using the center point and different distances from it, and determine the first region corresponding to the first region contour and the second region corresponding to the second region contour.
In the above case, the first region and the second region may indicate any two different regions among an inner region, a central region and an outer region of the measured object, wherein the inner region is a region whose distance from the center point is smaller than a first threshold, the outer region is a region whose distance from the center point is larger than a second threshold, and the central region is a region whose distance from the center point is larger than the first threshold and smaller than the second threshold, the first threshold being smaller than the distance from the center point to the object contour and the second threshold being larger than the distance from the center point to the object contour.
For convenience of description, how to partition the solder ball and determine the detection point pairs will be described in detail with reference to fig. 4. As shown in fig. 4, the solder ball region is centered on the center 40, and the image may be divided into a central region, an inner region and an outer region using radii R1 and R2. As shown in fig. 4, the contour 401 corresponding to the radius R2 is the object contour mentioned in step S310. The region 410 outside the contour 401, i.e. the region corresponding to radii greater than R2, is referred to as the outer region. On this basis, in addition to the region 410, an inner region 420 and a central region 430 may also be obtained.
Specifically, the central region 430 refers to the shadow area produced by the photographing light, as shown in fig. 2. In practice, the present invention may determine the contour 402 by using the determined center 40 and the area extracted according to a preset gray value range. The area enclosed by the contour 402 is the central region 430, and the area between the contour 401 and the contour 402 is the inner region 420. It should be noted that the above-mentioned central, inner and outer regions are merely descriptive and not limiting. As an example, normalizing the radius R2 to 1, R1 may be set to 0.7; a pixel at a distance of 0.2 from the circle center then lies in the central region, while a pixel at a distance of 1.3 lies in the outer region.
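Following the normalized example above (R2 taken as 1 and R1 as 0.7), the three regions could be labeled as sketched below with NumPy; the function name label_regions and the integer labels are assumptions for illustration.
```python
import numpy as np

def label_regions(shape, center, r_object, r1_ratio=0.7):
    """Label each pixel as central (0), inner (1) or outer (2) by its distance to the center."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalize distances so that the object contour (radius R2) corresponds to 1.
    dist = np.hypot(xs - center[0], ys - center[1]) / r_object
    labels = np.full(shape, 2, dtype=np.uint8)  # outer region: dist > 1
    labels[dist <= 1.0] = 1                     # inner region: R1 < dist <= 1
    labels[dist <= r1_ratio] = 0                # central region: dist <= R1
    return labels
```
With the center and radius returned by fit_ball_contour, a call such as label_regions(mask.shape, (cx, cy), radius) would assign every pixel of the solder ball image to one of the three regions.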
Once the first region and the second region have been obtained, step S330 may be performed to extract a first detection point and a second detection point at corresponding positions of the first region and the second region, respectively. A detection point pair consists of pixel points used to detect whether the measured object has a defect.
For convenience of description, the first detection point and the second detection point may also be referred to as a pair of detection points for detecting the object under test, wherein the first detection point is located in the first area and the second detection point is located in the second area. It should be noted that, in order to be able to accurately detect defects, the detection points are located at corresponding positions of different areas, that is, the first detection point and the second detection point are detection points extracted in the same extraction manner in different areas.
As an exemplary embodiment, the detection point pair may consist of points that lie in the same extension-line direction from the center point, one in the first region and one in the second region. For example, the first detection point is a pixel point located in the first region at a first distance from the center point, and the second detection point is a pixel point located in the second region at a second distance from the center point, on the extension line defined by the center point and the first detection point.
Referring to fig. 4, the detection points may include detection point 41, detection point 43, and detection point 45. As can be seen from fig. 4, the detection points 41, 43 and 45 are points on the same extension line and in different areas. Specifically, the distance between the detection point 41 and the center 40 is D1, and the detection point 41 is located in the central area 430. The distance between the detecting point 43 and the center 40 is D2, the detecting point 43 is located in the inner area 420, the distance between the detecting point 45 and the center is D3, and the detecting point 45 is located in the outer area 410. In order to perform the detection operation, the present invention may select any two points from the detection points 41, 43, and 45 as the detection point pair.
As another exemplary embodiment, the detection point pair may consist of points that lie in the same extension-line direction from the center point and intersect the first region contour and the second region contour, respectively. For example, the first detection point is a point located on the region contour of the first region, and the second detection point is a point located on the region contour of the second region, on the extension line defined by the center point and the first detection point.
Referring to fig. 4, the detection points may include detection point 42, detection point 44, and detection point 49. As can be seen from fig. 4, the detection points 42, 44 and 49 are points on the same extension and in different areas. Specifically, the detection points 42 are located on the area contour of the central area 430, the detection points 44 are located on the area contour of the inner area 420, and the detection points 49 are located on the area contour of the outer area 410. In order to perform the detection operation, the present invention may select any two points from the detection points 42, 44, and 49 as the detection point pair.
As an alternative embodiment, the present invention may also select a plurality of detection point pairs to perform the detection operation, where each of the plurality of detection point pairs is determined in the same manner. Referring to fig. 4, the present invention may also select a detection point 46 in the central region 430 at a distance D1 from the center 40, and then, along the extension line determined by the center 40 and the detection point 46, select a detection point 47 in the inner region 420 at a distance D2 and a detection point 48 in the outer region 410 at a distance D3.
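The sampling of corresponding detection points along rays from the center could be sketched as follows; the function name sample_detection_points, the default number of rays and the distance tuple (D1, D2, D3) passed by the caller are assumptions for illustration.
```python
import numpy as np

def sample_detection_points(gray, center, distances, n_rays=50):
    """Sample the gray value at each given distance along n_rays rays from the center.

    Returns an array of shape (n_rays, len(distances)); each row holds the gray values
    of one group of corresponding detection points, e.g. at distances (D1, D2, D3).
    """
    h, w = gray.shape
    cx, cy = center
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    samples = np.zeros((n_rays, len(distances)), dtype=np.float32)
    for i, theta in enumerate(angles):
        for j, d in enumerate(distances):
            x = int(round(cx + d * np.cos(theta)))
            y = int(round(cy + d * np.sin(theta)))
            x = min(max(x, 0), w - 1)  # clamp to the image bounds
            y = min(max(y, 0), h - 1)
            samples[i, j] = gray[y, x]
    return samples
```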
In practice, the invention can also determine different detection areas according to detection requirements, thereby determining different detection point pairs. As an exemplary embodiment, the detection requirements include an internal detection and/or an external detection, and referring to fig. 4, in the case where the detection requirements are internal detection, the first region is an internal region 420 and the second region is a central region 430. In the case where the detection requirement is an external detection, the first region is the central region 430 and the second region is the external region 410.
After the detection point pair is found in step S330, step S340 may be executed to determine whether the detected object has a defect by using the difference between the first detection point and the second detection point in terms of image features. In practice, the present invention may use the difference in image characteristics of the detection point pairs to determine whether the object under test is defective. For example, the present invention can determine whether the object to be tested has a defect by using the difference in gray value of the detection point pair. For another example, the present invention may determine whether the object to be tested has a defect using a rate of change of the detection point pair in gray value.
As an exemplary embodiment, determining whether the object to be tested is defective using the pair of detection points may include: determining whether the gray value difference of the detection point pair exceeds a difference threshold; and if the difference threshold is exceeded, determining that the tested object has defects.
Referring to fig. 4, when the detection requirement is internal detection, the detection point pair includes the detection point 41 and the detection point 43. The difference between the gray value of the detection point 41 and the gray value of the detection point 43 may be determined and then compared with a predetermined difference threshold. It should be noted that the difference threshold may be a value predetermined by a technician from prior information, and the invention is not limited in this respect. In order to obtain more accurate detection results, the invention can also use a plurality of detection point pairs and compare their average gray-value difference with the difference threshold to determine whether a defect exists. Alternatively, the present invention may compare the gray-value difference of each of the plurality of detection point pairs with the difference threshold, and then determine whether a defect exists according to the number of detection point pairs exceeding the difference threshold. As an example, the present invention may select 50 groups of corresponding first detection points and second detection points to determine whether a defect exists.
Referring to fig. 4, when the detection requirement is external detection, the detection point pair includes the detection point 43 and the detection point 45. The difference between the gray value of the detection point 43 and the gray value of the detection point 45 may be determined and then compared with a predetermined difference threshold. As above, the difference threshold may be a value predetermined by a technician from prior information, and the invention is not limited in this respect. To obtain more accurate detection results, the invention can also use a plurality of detection point pairs and compare their average gray-value difference with the difference threshold, or compare each gray-value difference with the threshold and determine whether a defect exists according to the number of detection point pairs exceeding it.
In addition, after the gray-value difference of each detection point pair is obtained, the invention can also calculate the variance of these differences; if the variance is higher than a set value, it is determined that the measured object has a defect.
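A sketch of this gray-difference criterion, operating on the samples gathered above, is given below; the difference threshold, the allowed number of exceeding pairs and the variance limit are assumptions for illustration and are not values given in this disclosure.
```python
import numpy as np

def detect_by_difference(first_vals, second_vals, diff_threshold=30.0,
                         bad_pair_limit=5, var_threshold=100.0):
    """Return True if the gray-value differences of the detection point pairs indicate a defect."""
    diffs = np.abs(first_vals.astype(np.float32) - second_vals.astype(np.float32))
    exceeding = int(np.count_nonzero(diffs > diff_threshold))  # pairs over the difference threshold
    # Report a defect if too many pairs exceed the threshold, or if the differences vary too much.
    return exceeding > bad_pair_limit or float(np.var(diffs)) > var_threshold
```
For internal detection this would be called with the sampled columns for D1 and D2, e.g. detect_by_difference(samples[:, 0], samples[:, 1]).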
As an exemplary embodiment, determining whether the measured object has a defect using the detection point pair includes: determining whether the gray value change rate of the detection point pair exceeds a change rate threshold; and if the change rate threshold is exceeded, determining that the measured object has a defect.
Referring to fig. 4, when the detection requirement is internal detection, the detection point pair includes the detection point 41 and the detection point 43. The difference between the gray value of the detection point 41 and the gray value of the detection point 43 may be determined, and the ratio of this difference to the gray value of the detection point 43 is taken as the change rate. The change rate is compared with a predetermined change rate threshold, and if the threshold is exceeded, it is determined that the measured object has a defect; the change rate threshold is a value predetermined by a technician from prior information, and the invention is not limited in this respect. To obtain more accurate detection results, the invention can also use a plurality of detection point pairs and compare their average change rate with the change rate threshold to determine whether a defect exists. Alternatively, the present invention may compare the change rate of each of the plurality of detection point pairs with the change rate threshold and then determine whether a defect exists according to the number of detection point pairs exceeding it.
Referring to fig. 4, when the detection requirement is external detection, the detection point pair includes the detection point 43 and the detection point 45. The difference between the gray value of the detection point 43 and the gray value of the detection point 45 may be determined, and the ratio of this difference to the gray value of the detection point 43 is taken as the change rate. The change rate is then compared with the predetermined change rate threshold in the same way, using either a single pair, the average change rate of a plurality of pairs, or the number of pairs whose change rate exceeds the threshold.
In addition, after the change rate of each of the plurality of detection point pairs is obtained, the invention can also calculate the variance of these change rates; if the variance is higher than a set value, it is determined that the measured object has a defect.
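A corresponding sketch of the change-rate criterion is given below; the division uses the gray value of the second (reference) detection point, in line with the ratio described above, and the thresholds are assumptions for illustration.
```python
import numpy as np

def detect_by_change_rate(first_vals, second_vals, rate_threshold=0.3,
                          bad_pair_limit=5, var_threshold=0.05):
    """Return True if the gray-value change rates of the detection point pairs indicate a defect."""
    second = second_vals.astype(np.float32)
    # Change rate: gray-value difference of the pair divided by the gray value of the reference point.
    rates = np.abs(first_vals.astype(np.float32) - second) / np.maximum(second, 1e-6)
    exceeding = int(np.count_nonzero(rates > rate_threshold))
    return exceeding > bad_pair_limit or float(np.var(rates)) > var_threshold
```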
The present invention also provides a schematic structural diagram of an image processing apparatus 500 as shown in fig. 5, the apparatus 500 comprising the following units:
the area determining unit 510 is configured to determine an object area of the object to be measured in the image to be measured.
The partitioning unit 520 is configured to divide the object region by using the gray value of each pixel point in the object region, obtaining a first region whose gray values fall in a first interval and a second region whose gray values fall in a second interval.
The detection point extraction unit 530 is configured to extract a first detection point and a second detection point at corresponding positions of the first area and the second area, respectively.
The defect detecting unit 540 is configured to determine whether the object to be detected has a defect by using the difference between the first detecting point and the second detecting point in terms of image characteristics.
Alternatively, the region determining unit 510 may determine the region of interest in the image to be measured using a template image for the object to be measured and the position of the region of interest in the template image, and extract the object region from the image to be measured by using the gray value of each pixel point of the region of interest in the image to be measured.
Optionally, the partition unit 520 may fit the object region by using a contour fitting manner, to determine an object contour of the object to be measured and a center point of the object contour;
determining a first distance and a second distance from the center point by using gray values of all pixel points in the object area;
generating a first region contour of the same shape as the object contour at the first distance from the center point and a second region contour of the same shape as the object contour at the second distance from the center point, so as to determine a first region defined by the first region contour and a second region defined by the second region contour.
Optionally, the first detection point and the second detection point are points that lie on the same extension line passing through the center point and intersect the first region contour and the second region contour, respectively.
Optionally, the first detection point is a detection point located in the first region at the first distance from the center point, and the second detection point is a detection point located in the second region at the second distance from the center point, on the extension line defined by the center point and the first detection point.
Optionally, the object to be tested includes a solder ball on a BGA chip, in which case the object outline is circular, and the center point refers to the center of the solder ball.
Optionally, the first region and the second region refer to any two different regions among an inner region, a central region and an outer region of the solder ball, wherein the inner region is a region whose distance from the center point is smaller than a first threshold, the outer region is a region whose distance from the center point is larger than a second threshold, and the central region is a region whose distance from the center point is larger than the first threshold and smaller than the second threshold, the first threshold being smaller than the distance from the center point to the object contour and the second threshold being larger than the distance from the center point to the object contour.
Optionally, the defect detecting unit 540 may determine whether the gray value difference between the first detecting point and the second detecting point exceeds a difference threshold; and if the difference threshold is exceeded, determining that the tested object has defects.
Alternatively, the defect detecting unit 540 may determine whether the gray value change rate of the first detecting point and the second detecting point exceeds the change rate threshold; and if the change rate threshold is exceeded, determining that the tested object has defects.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described image processing method.
In practical applications, the computer-readable storage medium may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that, in this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the device embodiments are described relatively briefly because they are substantially similar to the method embodiments, and reference may be made to the description of the method embodiments for relevant points. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the invention without undue effort.
The foregoing is only one specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, comprising:
determining an object area of a measured object in an image to be measured;
dividing the object region by using the gray value of each pixel point in the object region to obtain a first region with the gray value in a first interval and a second region with the gray value in a second interval;
extracting a first detection point and a second detection point from the corresponding positions of the first area and the second area;
and determining whether the tested object has defects or not by utilizing the difference of the first detection point and the second detection point in the aspect of image characteristics.
2. The method of claim 1, wherein determining the object region of the object under test in the image under test comprises:
determining a region of interest in the image to be measured by using a template image for the object to be measured and the position of the region of interest in the template image;
and extracting an object region in the image to be measured by using the gray value of each pixel point of the region of interest in the image to be measured.
3. The method of claim 1, wherein dividing the object region by using the gray value of each pixel point in the object region to obtain a first region with the gray value in a first interval and a second region with the gray value in a second interval comprises:
fitting the object region by using a contour fitting mode, and determining an object contour of the object to be detected and a center point of the object contour;
determining a first distance and a second distance from the center point by using gray values of all pixel points in the object area;
generating a first region contour of the same shape as the object contour at the first distance from the center point and a second region contour of the same shape as the object contour at the second distance from the center point, so as to determine a first region defined by the first region contour and a second region defined by the second region contour.
4. A method according to claim 3, wherein the first detection point and the second detection point are points which lie on the same extension line passing through the center point and intersect the first region contour and the second region contour, respectively.
5. The method of claim 4, wherein the first detection point is a detection point located in the first region at the first distance from the center point, and the second detection point is a detection point located in the second region at the second distance from the center point, on an extension line of the center point and the first detection point.
6. A method according to any one of claims 1 to 5, wherein the object under test comprises a solder ball on a BGA chip, in which case the object contour is circular and the center point refers to the center of the solder ball.
7. The method of claim 6, wherein the first region and the second region refer to any two different regions among an inner region, a central region and an outer region of the solder ball, wherein the inner region is a region whose distance from the center point is less than a first threshold, the outer region is a region whose distance from the center point is greater than a second threshold, and the central region is a region whose distance from the center point is greater than the first threshold and less than the second threshold, wherein the first threshold is less than the distance from the center point to the object contour, and the second threshold is greater than the distance from the center point to the object contour.
8. The method of claim 1, wherein determining whether the object under test is defective using the difference in image characteristics between the first detection point and the second detection point comprises:
determining whether the gray value difference value of the first detection point and the second detection point exceeds a difference value threshold;
and if the difference threshold is exceeded, determining that the tested object has defects.
9. The method of claim 1, wherein determining whether the object under test is defective using the difference in image characteristics between the first detection point and the second detection point comprises:
determining whether the gray value change rate of the first detection point and the second detection point exceeds a change rate threshold;
and if the change rate threshold is exceeded, determining that the tested object has defects.
10. An image processing apparatus comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-9.
11. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-9.
CN202311783877.8A, filed 2023-12-22 (priority date 2023-12-22): Image processing method and device, electronic equipment and storage medium. Status: Pending. Published as CN117635597A.

Priority Applications (1)

Application Number: CN202311783877.8A; Priority Date: 2023-12-22; Filing Date: 2023-12-22; Title: Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN117635597A; Publication Date: 2024-03-01

Family ID: 90026994


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination