CN117690142A - Wafer character preprocessing method, device and storage medium - Google Patents


Info

Publication number
CN117690142A
CN117690142A
Authority
CN
China
Prior art keywords
image
character
segmentation
area
segmentation image
Prior art date
Legal status
Granted
Application number
CN202410144365.5A
Other languages
Chinese (zh)
Other versions
CN117690142B (en)
Inventor
王蒙蒙
易佳朋
Current Assignee
Shenzhen Ait Precision Technology Co ltd
Original Assignee
Shenzhen Ait Precision Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ait Precision Technology Co ltd filed Critical Shenzhen Ait Precision Technology Co ltd
Priority to CN202410144365.5A priority Critical patent/CN117690142B/en
Publication of CN117690142A publication Critical patent/CN117690142A/en
Application granted granted Critical
Publication of CN117690142B publication Critical patent/CN117690142B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Character Input (AREA)

Abstract

The application discloses a wafer character preprocessing method, device and storage medium. The method comprises: acquiring a character dark region segmentation image and a character bright region segmentation image of an original image, wherein the original image contains the wafer characters to be processed; filtering the character dark region segmentation image and the character bright region segmentation image separately to obtain the corresponding character segmentation image on the dark region and character segmentation image under the bright region; fusing the character segmentation image on the dark region with the character segmentation image under the bright region to obtain a fused image; and recognizing the wafer characters to be processed based on the fused image to obtain a recognition result. The method and device can improve the accuracy of wafer character recognition.

Description

Wafer character preprocessing method, device and storage medium
Technical Field
The present disclosure relates to the field of character detection technologies, and in particular, to a method and an apparatus for preprocessing a wafer character, and a storage medium.
Background
In general, character detection and recognition are easier when the illumination of the character area of an image is stable. When the illumination of the character area is uneven, however, detection and recognition accuracy drops sharply.
At present, to counter the loss of detection and recognition accuracy caused by uneven illumination of the character area, the image is usually enhanced first and detection and recognition are then performed on the enhanced image. However, the results recognized in this way still differ considerably from the actual characters.
Disclosure of Invention
The application provides a preprocessing method, preprocessing equipment and a storage medium for wafer characters, which can improve the accuracy of wafer character recognition.
In a first aspect, the present application provides a method for preprocessing a wafer character, the method comprising:
acquiring a character dark region segmentation image and a character bright region segmentation image of an original image, wherein the original image contains wafer characters to be processed;
filtering the character dark region segmentation image and the character bright region segmentation image separately to obtain the corresponding character segmentation image on the dark region and character segmentation image under the bright region;
fusing the character segmentation image in the dark area with the character segmentation image in the bright area to obtain a fused image;
and identifying the wafer character to be processed based on the fusion image to obtain an identification result of the wafer character to be processed.
In a further technical scheme, acquiring the character dark region segmentation image of the original image comprises the following steps:
acquiring a wafer edge segmentation image and a character region segmentation image of an original image;
and obtaining a character dark region segmentation image based on the wafer edge segmentation image and the character region segmentation image.
In a further technical scheme, acquiring the wafer edge segmentation image and the character region segmentation image of the original image comprises the following steps:
performing global constant value segmentation on the original image according to a first preset threshold value to obtain a wafer edge segmentation image;
based on the wafer edge segmentation image, obtaining the outline of the wafer edge;
obtaining an initial boundary of a character area based on the outline of the wafer edge;
obtaining a first mask image of the original image based on the initial boundary of the character area;
based on the first mask image, a character region segmentation image of the original image is obtained.
In a further technical scheme, obtaining the initial boundary of the character area based on the outline of the wafer edge comprises the following steps:
obtaining the lower boundary of the wafer edge based on the leftmost point position and the rightmost point position of the contour;
obtaining the upper and lower boundaries of the character area based on the lower boundary;
obtaining initial left and right boundaries of a region corresponding to the character of the wafer to be processed based on the leftmost point position and the rightmost point position of the outline of the edge of the wafer;
the upper and lower boundaries and the initial left and right boundaries of the character area are used as initial boundaries of the character area.
In a further technical scheme, obtaining the character dark region segmentation image based on the wafer edge segmentation image and the character region segmentation image comprises the following steps:
obtaining an initial boundary of a dark area based on the wafer edge segmentation image and the character area segmentation image;
obtaining a second mask image of the original image based on the initial boundary of the dark area;
acquiring a second gray level mean value and a second standard deviation of the original image in a preset area of a second mask image;
obtaining a third threshold value based on the second gray level mean value and the second standard deviation, and performing global threshold segmentation on the original image according to the third threshold value to obtain a third initial segmented image;
and obtaining a character dark region segmentation image based on the third initial segmentation image and the second mask image.
In a further technical scheme, obtaining the initial boundary of the dark region based on the wafer edge segmentation image and the character region segmentation image comprises the following steps:
obtaining the upper and lower boundaries of the dark region based on the wafer edge segmentation image;
obtaining the left and right boundaries of the dark region based on the character region segmentation image;
taking the upper and lower boundaries and the left and right boundaries as the initial boundaries of the dark region.
In a further technical scheme, acquiring the character bright region segmentation image comprises the following steps:
obtaining a fine boundary of the character dark region based on the character dark region segmentation image;
generating a third mask image according to the fine boundary of the dark character area and the preset character height;
obtaining a mask image of a character bright area based on the third mask image and the second mask image;
carrying out convolution processing on the original image to obtain a convolution image;
based on the convolution image and the original image, obtaining an enhanced image of the original image;
acquiring a third gray average value and a third standard deviation of the enhanced image in a preset area of the mask image of the character bright area;
and obtaining a fourth threshold value based on the third gray average value and the third standard deviation, and performing global threshold segmentation on the enhanced image according to the fourth threshold value to obtain a character bright region segmented image.
In a further technical scheme, performing global threshold segmentation on the enhanced image according to the fourth threshold to obtain the character bright region segmentation image comprises the following steps:
global threshold segmentation is carried out on the enhanced image according to a fourth threshold value, and a fourth initial segmentation image is obtained;
and obtaining a character bright region segmentation image based on the fourth initial segmentation image and the third mask image.
In a second aspect, the present application provides a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the steps of any of the methods described above.
In a third aspect, the present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, is configured to implement the method for preprocessing a wafer character described above.
The beneficial effects of this application are as follows. Compared with the prior art, the present application acquires the character dark region segmentation image and the character bright region segmentation image of the original image and filters each of them separately, which removes noise and interference and enhances the edge information of the characters. The character segmentation image on the dark region and the character segmentation image under the bright region obtained after the respective filtering are then fused to obtain a fused image. Because the fused image combines the features of both the dark region and the bright region, it is a complete character segmentation image; detecting and recognizing the wafer characters to be processed on this fused image therefore improves the accuracy of wafer character recognition and avoids the inaccuracy caused by detecting and recognizing the characters directly on the original image.
In addition, the height difference of the character area leaves half of the area bright and half dark. The related art enhances the whole image, or a global feature of it, and then detects and recognizes the characters directly on the enhanced image; since this cannot finely process the local features of the character area, it cannot overcome the height difference. By filtering the character dark region segmentation image and the character bright region segmentation image separately, the present application analyzes and processes the dark and bright parts of the characters more finely, i.e., enhances each region locally, which avoids the recognition errors caused by the brightness difference between the bright and dark regions and improves the accuracy of wafer character recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flowchart of a first embodiment of a method for preprocessing wafer characters provided in the present application;
FIG. 2 is a schematic illustration of an original image;
FIG. 3 is a schematic view of a first segmented image;
FIG. 4 is a schematic illustration of a first mask image;
FIG. 5 is a schematic view of a character region segmentation image after coarse positioning;
FIG. 6 is a second mask image schematic;
FIG. 7 is a schematic diagram of a character segmentation image on dark areas;
FIG. 8 is a third mask image schematic;
FIG. 9 is a schematic illustration of a convolved image after sharpening;
FIG. 10 is a schematic diagram of a character segmentation image under a bright region;
FIG. 11 is a schematic diagram of a fused image;
FIG. 12 is a schematic view of a rotated fused image;
fig. 13 is a schematic structural diagram of an embodiment of a computer readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Optical character recognition is a technology that searches for, extracts and recognizes the characters in an image and then translates their shapes into computer-encoded characters. In general, character detection and recognition are easier when the illumination of the character area of the image is stable. When the illumination of the character area is uneven, however, detection and recognition accuracy drops sharply.
At present, in order to solve the problem that the detection and recognition accuracy is reduced due to uneven illumination of a character area, an image is generally subjected to enhancement processing to obtain an enhanced image, and then detection and recognition are performed based on the enhanced image. Common image enhancement modes include gamma correction, brightness correction, histogram enhancement, and the like.
Because the character area has a height difference, half of it is a bright region and half a dark region. Under uneven illumination, the bright and dark parts of the characters therefore need to be processed with different strategies; existing image enhancement methods are global, however, and cannot handle the uneven illumination of the character area.
Therefore, in order to solve the technical problem of low character recognition accuracy caused by directly detecting and recognizing characters on the enhanced image in the prior art, the application provides a preprocessing method for wafer characters, and the following embodiment is specifically referred to.
The following describes the method for preprocessing the wafer characters provided by the application in detail. Referring to fig. 1, fig. 1 is a flowchart illustrating a first embodiment of a method for preprocessing a wafer character according to the present application. The method is applied to the processor, and comprises the following steps:
step 110: and acquiring a character dark region segmentation image and a character bright region segmentation image of the original image. Wherein the original image contains the wafer character to be processed.
Wherein dark areas generally refer to darker colored portions of the character, and bright areas generally refer to lighter colored portions of the character.
The method comprises the steps of obtaining a wafer edge segmentation image and a character area segmentation image of an original image, and obtaining a character dark area segmentation image and a character bright area segmentation image based on the wafer edge segmentation image and the character area segmentation image.
Step 120: and respectively filtering the character dark region segmentation image and the character bright region segmentation image to obtain a character segmentation image on the dark region and a character segmentation image under the bright region, which are respectively corresponding to each other.
The filtering process may be morphological filtering process, smoothing filtering, sharpening filtering, etc.
For example, morphological filtering is performed on the character dark region segmentation image to segment the character in the upper half of the dark region, so as to obtain a corresponding character segmentation image on the dark region.
And similarly, carrying out morphological filtering processing on the character bright region segmentation image so as to segment the character at the lower half of the bright region, thereby obtaining a corresponding character segmentation image under the bright region.
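As a minimal sketch of this kind of morphological filtering (pure NumPy; a real implementation would more likely use OpenCV's morphology routines — the mask contents and the 3x3 structuring element here are illustrative assumptions):

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out &= padded[pad + dy : pad + dy + mask.shape[0],
                          pad + dx : pad + dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out |= padded[pad + dy : pad + dy + mask.shape[0],
                          pad + dx : pad + dx + mask.shape[1]]
    return out

def morph_open(mask, k=3):
    """Opening (erosion then dilation) removes speckle smaller than the element."""
    return dilate(erode(mask, k), k)

mask = np.zeros((7, 7), dtype=bool)
mask[1:4, 1:4] = True      # a 3x3 character stroke
mask[5, 5] = True          # isolated speckle noise
cleaned = morph_open(mask)  # stroke survives, the isolated pixel is removed
```

Opening is only one choice of morphological filter; which operation and element size suit the dark and bright segmentations depends on the actual noise.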
Step 130: and fusing the character segmentation image in the dark area with the character segmentation image in the bright area to obtain a fused image.
Step 140: and identifying the wafer character to be processed based on the fusion image to obtain an identification result of the wafer character to be processed.
It should be noted that, the specific recognition process of the character of the wafer to be processed may be set according to actual needs, which is not limited herein.
According to this embodiment, the character dark region segmentation image and the character bright region segmentation image of the original image are acquired and each is filtered separately, which removes noise and interference and enhances the edge information of the characters. The character segmentation image on the dark region and the character segmentation image under the bright region obtained after the respective filtering are then fused to obtain a fused image. Because the fused image combines the features of both the dark region and the bright region, it is a complete character segmentation image; detecting and recognizing the wafer characters to be processed on this fused image therefore improves the accuracy of wafer character recognition and avoids the inaccuracy caused by detecting and recognizing the characters directly on the original image.
In addition, the height difference of the character area leaves half of the area bright and half dark. The related art enhances the whole image, or a global feature of it, and then detects and recognizes the characters directly on the enhanced image; since this cannot finely process the local features of the character area, it cannot overcome the height difference. By filtering the character dark region segmentation image and the character bright region segmentation image separately, the present application analyzes and processes the dark and bright parts of the characters more finely, i.e., enhances each region locally, which avoids the recognition errors caused by the brightness difference between the bright and dark regions and improves the accuracy of wafer character recognition.
Referring to a second embodiment of the method for preprocessing wafer characters provided in the present application, the second embodiment specifically includes the following steps.
Step 210: and acquiring a wafer edge segmentation image and a character region segmentation image of the original image.
Specifically, step 210 may include the steps of:
step 11: and carrying out global constant value segmentation on the original image according to a first preset threshold value to obtain a wafer edge segmentation image.
Due to structural space constraints, a strip light source is arranged above the wafer, so the wafer edge images with high, stable brightness. When the wafer is illuminated, uneven illumination and reflection make the edge appear with a brightness or color different from the background and other areas. This imaging characteristic clearly distinguishes the wafer edge from the rest of the image, so the edge can be located by threshold segmentation: a threshold is set that separates wafer-edge pixels from background and other pixels.
The first preset threshold value can be set according to actual conditions.
Step 12: and obtaining the outline of the wafer edge based on the wafer edge segmentation image.
Illustratively, global constant-value segmentation is performed on the original image (as shown in fig. 2) according to a first preset threshold t1 to obtain the wafer edge segmentation image, i.e., the first segmented image (as shown in fig. 3). The first segmented image comprises target regions and non-target regions: a pixel whose gray value is greater than or equal to t1 is marked as a highlighted (target) region, and otherwise as a non-target region. Each target region in the first segmented image is traversed and the one with the largest area is selected as the wafer-edge highlight region, from which the outline of the wafer edge can be determined.
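The segmentation and largest-area selection described above can be sketched as follows (pure NumPy/Python with a small flood fill; a real implementation would likely use OpenCV thresholding and contour functions — the image values and t1 are illustrative):

```python
import numpy as np

def global_threshold(img, t1):
    """Mark pixels with gray value >= t1 as target (highlight) pixels."""
    return img >= t1

def largest_component(seg):
    """Return a mask of the largest 4-connected target region."""
    seg = seg.astype(bool)
    h, w = seg.shape
    visited = np.zeros((h, w), dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if seg[sy, sx] and not visited[sy, sx]:
                visited[sy, sx] = True
                stack, comp = [(sy, sx)], []
                while stack:  # iterative flood fill over 4-neighbors
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and seg[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros((h, w), dtype=bool)
    for y, x in best:
        out[y, x] = True
    return out

img = np.array([[250, 250, 250, 0,   0],
                [250, 250, 250, 0, 230],
                [  0,   0,   0, 0,   0]], dtype=np.uint8)
seg1 = global_threshold(img, 200)  # first segmented image
edge = largest_component(seg1)     # wafer-edge highlight region (6 pixels)
```

The small bright spot at (1, 4) passes the threshold but is discarded because its area is smaller than the edge region.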
Step 13: based on the outline of the wafer edge, the initial boundary of the character area is obtained.
Step 13 may include the following procedures:
1) And obtaining the lower boundary of the wafer edge based on the leftmost point position and the rightmost point position of the contour.
In opencv, the outer contours are sorted counterclockwise, and after the contours of the wafer edge are positioned, the lower boundary of the wafer edge can be extracted by finding the leftmost point and the rightmost point of the contours of the wafer edge.
2) The upper and lower boundaries of the character area are obtained based on the lower side edges.
For example, if the lower boundary is A, translating A downward by a preset distance d1 gives the upper boundary of the character area; setting a height h1 and translating A downward by d1 + h1 gives the lower boundary of the character area.
The boundaries of the character area may be calculated with the following translation formula, i.e., equation one:

x' = x + t_x, y' = y + t_y (Equation 1)

where x', y' are the pixel coordinates after translation, x, y are the pixel coordinates before translation, and t_x, t_y are the translation amounts.
For example, if y is the pixel coordinate of the lower boundary A before translation and t_y = d1, then y' = y + t_y translates the lower boundary A of the wafer edge downward by the distance d1, and the resulting y' gives the upper boundary of the coarsely positioned character area.
3) And obtaining the initial left and right boundaries of the region corresponding to the character of the wafer to be processed based on the leftmost point position and the rightmost point position of the outline of the edge of the wafer.
4) The upper and lower boundaries and the initial left and right boundaries of the character area are used as four boundaries of the character area, namely the initial boundaries.
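The procedures 1)-4) above can be sketched with NumPy as follows (the contour points and the offsets d1 and h1 are illustrative assumptions; in practice the contour would come from a contour-extraction routine such as OpenCV's):

```python
import numpy as np

# hypothetical wafer-edge contour as (x, y) pixel points (y grows downward)
contour = np.array([(10, 40), (35, 42), (60, 40), (60, 5), (10, 5)])

# 1) leftmost / rightmost points of the contour
leftmost = contour[contour[:, 0].argmin()]
rightmost = contour[contour[:, 0].argmax()]

# the lower boundary A of the wafer edge runs between these two points;
# here we keep just its endpoints
A = np.array([leftmost, rightmost])

# 2) equation one, x' = x + t_x / y' = y + t_y, applied as downward translations
def translate(points, tx, ty):
    return points + np.array([tx, ty])

d1, h1 = 20, 60                   # assumed gap below the edge and character height
upper = translate(A, 0, d1)       # upper boundary of the character area
lower = translate(A, 0, d1 + h1)  # lower boundary of the character area

# 3) initial left/right boundaries from the leftmost/rightmost x positions
left_x, right_x = leftmost[0], rightmost[0]
```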
Step 14: and obtaining a first mask image of the original image based on the initial boundary of the character area.
Wherein the size of the first mask image is the same as the original image.
Step 15: based on the first mask image, a character region segmentation image of the original image is obtained.
Specifically, the first gray mean m1 of the original image is calculated over the valid area of the first mask image (as shown in fig. 4) and used as the second threshold t2. Global threshold segmentation is performed on the original image according to t2, and the intersection of the resulting segmented image with the first mask image yields the second segmented image, i.e., the character region segmentation image (as shown in fig. 5).
Traversing a plurality of target areas in the second divided image, wherein the largest area is the area where the characters are located. The left and right boundaries of the character region may be derived based on the region in which the character is located.
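Steps 14-15 can be sketched with NumPy as follows (the image values and mask geometry are illustrative, and the segmentation direction — characters brighter than the threshold — is an assumption):

```python
import numpy as np

def segment_char_region(img, mask1):
    """Compute the first gray mean m1 over the valid area of the first mask,
    use it as the second threshold t2, segment globally, and intersect with
    the mask."""
    m1 = img[mask1].mean()  # first gray mean over the valid (masked) area
    t2 = m1                 # second threshold
    seg = img >= t2         # global threshold segmentation (assumes bright characters)
    return seg & mask1      # intersection with the first mask image

# toy original image: dark background on the left, bright characters on the right
img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200]], dtype=np.uint8)
mask1 = np.zeros(img.shape, dtype=bool)
mask1[:, 1:3] = True  # valid area of the first mask covers columns 1-2
char_seg = segment_char_region(img, mask1)
```

Only the bright pixels inside the mask survive; bright pixels outside the valid area (column 3) are excluded by the intersection.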
Step 220: and obtaining a character dark region segmentation image based on the wafer edge segmentation image and the character region segmentation image.
In some embodiments, step 220 may include the following procedure:
step 21: and obtaining an initial boundary of the dark area based on the wafer edge segmentation image and the character area segmentation image.
Step 21 may include the following procedures:
1) And obtaining the upper and lower boundaries of the dark region based on the wafer edge segmentation image.
Specifically, the lower boundary A of the wafer edge is obtained from the wafer edge segmentation image and translated downward by preset distances to obtain the upper and lower boundaries of the character dark region: translating A downward by a distance d2 gives the upper boundary of the character dark region, and translating A downward by a distance d3 gives its lower boundary.
2) Obtaining the left and right boundaries of the dark region based on the character region segmentation image.
Specifically, the region with the largest area in the character region segmentation image, i.e., the region where the characters are located, is selected; the left and right boundaries of the character region are obtained from this region and used as the left and right boundaries of the character dark region.
3) The upper and lower boundaries and the left and right boundaries of the dark region are taken as the initial boundaries of the dark region.
Step 22: a second mask image of the original image is obtained based on the initial boundaries of the dark areas.
Wherein the second mask image (as shown in fig. 6) has the same size as the original image.
Step 23: and acquiring a second gray level mean value and a second standard deviation of the original image in a preset area of the second mask image.
The preset area is an effective area.
Step 24: and obtaining a third threshold value based on the second gray level mean value and the second standard deviation, and performing global threshold segmentation on the original image according to the third threshold value to obtain a third initial segmentation image.
The segmentation threshold, such as the third threshold, may be calculated with the following equation two:

thres = a * m_xy + b * sigma_xy (Equation 2)

where thres is the segmentation threshold, m_xy is the gray mean, sigma_xy is the gray standard deviation, and a and b are coefficients.
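A small sketch of equation two (the coefficients a and b are not specified in the text; a = b = 1 is assumed here, matching the later use of m2 + sigma2 as t3):

```python
import numpy as np

def mean_std_threshold(img, mask, a=1.0, b=1.0):
    """Equation two: thres = a * m_xy + b * sigma_xy over the valid mask area."""
    vals = img[mask].astype(np.float64)
    return a * vals.mean() + b * vals.std()

img = np.array([[90, 110],
                [90, 110]], dtype=np.uint8)
mask = np.ones(img.shape, dtype=bool)
t3 = mean_std_threshold(img, mask)  # mean 100, standard deviation 10
```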
Step 25: and obtaining a character dark region segmentation image based on the third initial segmentation image and the second mask image.
The segmented image can be obtained with the following equation three:

R = S ∩ M (Equation 3)

where ∩ denotes the logical intersection, i.e., the bitwise AND operation; R is the segmented image obtained after the intersection, such as the third segmented image, i.e., the character dark region segmentation image; S is the segmented image participating in the intersection, such as the third initial segmented image; and M is the mask image, such as the second mask image.
In steps 22-25, the second gray mean m2 and second standard deviation sigma2 of the original image are calculated over the valid area of the second mask image, and their sum m2 + sigma2 is taken as the third threshold t3. Global threshold segmentation is performed on the original image according to t3, and the intersection of the resulting segmented image with the second mask image yields the third segmented image, i.e., the character dark region segmentation image.
Step 230: and respectively filtering the character dark region segmentation image and the character bright region segmentation image to obtain a character segmentation image on the dark region and a character segmentation image under the bright region, which are respectively corresponding to each other.
The character dark area segmentation image is subjected to morphological filtering, so that the character in the upper half of the dark area can be segmented, and the character segmentation image in the dark area is obtained.
Step 240: fuse the character segmentation image on the dark region with the character segmentation image under the bright region to obtain a fused image.
Step 250: obtain a recognition result of the wafer character to be processed based on the fused image.
Steps 230, 240 and 250 use the same or similar technical solutions as the corresponding steps of the first embodiment and are not repeated here.
In some embodiments, acquiring a character bright region segmentation image may include the steps of:
step 31: and dividing the image based on the character dark area to obtain a fine boundary of the character dark area.
Wherein the fine upper boundary of the dark area of the character can be obtained by dividing the image by the character on the dark area.
And the fine lower boundary, the fine left boundary and the fine right boundary of the character dark region can be obtained by carrying out area judgment and screening on the character dark region segmentation image.
Step 32: generate a third mask image according to the fine boundary of the character dark region and the preset character height.
The preset character height can be determined according to the actual character size.
Step 33: obtain a mask image of the character bright region based on the third mask image and the second mask image.
An exclusive-OR logical operation is performed on the third mask image and the second mask image to obtain the mask image of the character bright region. Specifically, the following equation four may be employed.
M_bright = M₂ ⊕ M₃ (equation four)

where ⊕ is the logical exclusive-OR operation, M_bright is the mask image of the character bright region, M₂ is the second mask image, and M₃ is the third mask image.
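Equation four is a plain per-pixel XOR. A minimal NumPy illustration (the mask contents below are made-up values):

```python
import numpy as np

mask2 = np.array([[1, 1, 1, 1]], dtype=np.uint8)  # second mask (dark band)
mask3 = np.array([[1, 1, 0, 0]], dtype=np.uint8)  # third mask (fine dark region)
mask_bright = mask2 ^ mask3   # 1 where exactly one of the two masks is set
```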
Step 34: perform convolution processing on the original image to obtain a convolution image.
The convolution process may be a laplace convolution process.
Illustratively, the laplace operator may be as shown in table one below.
Table one:
Step 35: obtain an enhanced image of the original image based on the convolution image and the original image.
Wherein the enhanced image of the original image can be obtained using the following formula:

g(x, y) = f(x, y) + c · ∇²f(x, y) (equation five)

where g(x, y) is the enhanced image, f(x, y) is the original image, ∇²f(x, y) is the image after Laplace convolution, i.e., the convolution image, and c is a coefficient.
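A sketch of equation five with an explicit convolution. The 3×3 kernel below is a common Laplacian; the patent's actual kernel is given in its table one and may differ, and c is a free coefficient:

```python
import numpy as np

LAPLACE = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=np.float64)  # assumed kernel

def enhance(f, c=-1.0):
    """g(x, y) = f(x, y) + c * laplacian(f); with this kernel,
    c = -1 sharpens. Uses edge-replicated padding."""
    p = np.pad(f.astype(np.float64), 1, mode="edge")
    h, w = f.shape
    conv = sum(LAPLACE[i, j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))
    return f + c * conv

flat = np.full((3, 3), 5.0)              # constant image: unchanged
spike = np.zeros((3, 3)); spike[1, 1] = 1.0  # single spike: amplified
```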
Step 36: acquire a third gray-level mean and a third standard deviation of the enhanced image within a preset area of the mask image of the character bright region.
The third gray average value and the third standard deviation may refer to the calculation of the second gray average value and the second standard deviation, which are not described herein.
Step 37: obtain a fourth threshold based on the third gray-level mean and the third standard deviation, and perform global threshold segmentation on the enhanced image according to the fourth threshold to obtain the character bright-region segmentation image.
The enhanced image can be subjected to global threshold segmentation according to the fourth threshold to obtain a fourth initial segmentation image, and the character bright-region segmentation image is then obtained based on the fourth initial segmentation image and the third mask image.
The intersection of the fourth initial segmentation image and the third mask image may be computed to obtain the character bright-region segmentation image, i.e., the fourth segmentation image.
Based on the above embodiments, the method for preprocessing the wafer characters provided in the present application mainly includes the following procedures:
(I) Locating the wafer edge
1) Perform global constant-value segmentation on the original image according to a first preset threshold t1 to obtain a first segmented image, namely the wafer edge segmented image.
Wherein the first segmented image comprises target and non-target regions: a pixel whose gray value is greater than or equal to t1 is marked as a highlight (target) region, and otherwise as a non-target region.
2) Traverse the target regions in the first segmented image and select the region with the largest area as the wafer edge highlight region.
Wherein the original image is shown in fig. 2, and the first divided image is shown in fig. 3.
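Steps 1)-2) above (fixed threshold, then keep the largest target region) can be sketched as follows. The labeling here is a simple 4-connected BFS written out for self-containment, and the threshold value and image contents are made up; a real implementation would likely use a library routine for connected components:

```python
import numpy as np
from collections import deque

def largest_region(binary):
    """Mask of the largest 4-connected foreground component."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best, best_size = np.zeros((h, w), dtype=np.uint8), 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                seen[sy, sx] = True
                comp, q = [(sy, sx)], deque([(sy, sx)])
                while q:                      # flood-fill one component
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) > best_size:     # keep only the largest one
                    best_size = len(comp)
                    best = np.zeros((h, w), dtype=np.uint8)
                    for y, x in comp:
                        best[y, x] = 1
    return best

t1 = 128                                  # hypothetical first preset threshold
img = np.array([[200, 200,   0,   0,   0,   0],
                [  0,   0,   0,   0,   0,   0],
                [  0,   0, 200, 200, 200, 200]], dtype=np.uint8)
edge_mask = largest_region(img >= t1)     # keeps the 4-pixel region only
```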
(II) Coarse positioning of the character area (obtaining the boundaries of the character area from the outline of the wafer edge)
1) Obtain the wafer edge contour from the wafer edge highlight region, and obtain the lower side boundary A of the wafer edge from the leftmost and rightmost point positions of the contour;
2) Translate the lower side edge A of the wafer edge downward by a distance d1 to obtain the upper boundary of the coarse character-area positioning; set a height h1 and translate the lower boundary A downward by d1 + h1 to obtain the lower boundary of the coarse character-area positioning.
3) Determine the left boundary of the character area from the leftmost point of the wafer edge contour, and the right boundary from the rightmost point of the wafer edge.
4) Generate, from the upper, lower, left and right boundaries of the coarse character-area positioning obtained in steps 2) and 3), a first mask image of the same size as the original image. (That is, the leftmost and rightmost point positions of the wafer edge contour are first taken as the initial left and right boundaries of the character region.)
Wherein the first mask image is shown in fig. 4.
The following translation formula, namely equation one, can be adopted to calculate the boundaries of the coarse character-area positioning:

x' = x + t_x, y' = y + t_y (equation one)

where (x', y') are the pixel coordinates after translation, (x, y) are the pixel coordinates before translation, and t_x and t_y are the translation amounts.

For example, if y is the pixel coordinate of the lower boundary A before translation, then y + t_y with t_y = d1 represents translating the lower side edge A of the wafer edge downward by the distance d1, and the resulting y' is the upper boundary of the coarse character-area positioning.
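Applied to the lower edge A, equation one is just a per-pixel vertical offset. A minimal sketch (the distances and the edge coordinates below are made-up values):

```python
d1, h1 = 15, 40                      # hypothetical translation distance and height
edge_a_y = [120, 121, 121, 122]      # hypothetical y-coordinates of lower edge A

# y' = y + t_y with t_y = d1 gives the coarse upper boundary,
# t_y = d1 + h1 gives the coarse lower boundary.
upper_boundary_y = [y + d1 for y in edge_a_y]
lower_boundary_y = [y + d1 + h1 for y in edge_a_y]
```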
5) Calculate the first gray-level mean m1 of the original image within the effective area of the first mask image, take m1 as a second threshold t2, perform global threshold segmentation on the original image according to t2 to obtain a segmented image, and intersect this segmented image with the first mask image to obtain a second segmented image;
Traverse the target regions in the second segmented image; the region with the largest area is the region where the characters are located. The left and right boundaries of the character region can be derived from this region.
The second segmented image, i.e., the character region segmented image after rough positioning, is shown in fig. 5.
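The left and right boundaries of the character region can be read off as the extreme foreground columns of the region mask. A minimal sketch (the mask contents and function name are made up):

```python
import numpy as np

def left_right_boundaries(region_mask):
    """Leftmost and rightmost columns containing foreground pixels."""
    cols = np.where(region_mask.any(axis=0))[0]
    return int(cols[0]), int(cols[-1])

char_mask = np.zeros((3, 8), dtype=np.uint8)
char_mask[1, 2:6] = 1                 # characters span columns 2..5
left, right = left_right_boundaries(char_mask)
```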
Wherein the threshold can be calculated using the following equation two:

thres = a · m_xy + b · σ_xy (equation two)

where thres is the segmentation threshold, m_xy is the gray-level mean, σ_xy is the gray-level standard deviation, and a and b are coefficients.
Wherein the segmented image can be obtained using the following equation three:

R = R₀ ∩ M (equation three)

where ∩ is the intersection, i.e., a bitwise AND operation; R is the segmented image obtained after the intersection, for example the second segmented image; R₀ is the segmented image participating in the intersection, for example the segmented image obtained by global threshold segmentation of the original image with the second threshold t2; and M is the mask image, for example the first mask image.
(III) Positioning and processing of the character dark region (obtaining the boundaries of the character dark region from the lower boundary A of the wafer edge and the left and right boundaries of the character area)
1) The upper and lower boundaries of the character dark region can be obtained from the lower side boundary A of the wafer edge, and the left and right boundaries obtained by coarse character-area positioning serve as the left and right boundaries of the character dark region.
Specifically, translating the lower boundary A of the wafer edge downward by distances d2 and d3 respectively yields the upper and lower boundaries of the character dark region.
2) Generate a second mask image of the same size as the original image from the four initial boundaries of the character dark region, i.e., the upper, lower, left and right boundaries obtained in 1).
Wherein the second mask image is shown in fig. 6.
3) Calculate the second gray-level mean m2 and the second standard deviation σ2 of the original image within the effective area of the second mask image. Take their sum, m2 + σ2, as a third preset threshold t3 and perform global threshold segmentation on the original image to obtain a segmented image, namely the third initial segmented image; intersect the third initial segmented image with the second mask image to obtain the third segmented image, namely the character dark-region segmented image.
4) Filter the third segmented image to obtain the character segmentation image on the dark region shown in fig. 7, from which the upper-half characters of the dark region are extracted. Meanwhile, judge and screen by area within the third segmented image to obtain finer left, right and upper character boundaries, yielding the fine boundary of the character dark region.
(IV) Positioning and processing of the character bright region
1) Generate a third mask image from the fine boundary of the character dark region and the character height, and perform an exclusive-OR logical operation on the third mask image and the second mask image to obtain the final mask, namely the mask image of the character bright region.
Specifically, the following formula four may be adopted.
M_bright = M₂ ⊕ M₃ (equation four)

where ⊕ is the logical exclusive-OR operation, M_bright is the mask image of the character bright region, M₂ is the second mask image, and M₃ is the third mask image.
Wherein the third mask image is shown in fig. 8.
2) Perform Laplace convolution on the original image to obtain a convolution image, and add the convolution image to the original image to obtain an enhanced image.
The sharpened convolution image is shown in fig. 9.
Wherein the laplace operator may be as shown in table one above.
3) Calculate the third gray-level mean m3 and the third standard deviation σ3 of the enhanced image within the effective area of the third mask image, take their sum, m3 + σ3, as a fourth preset threshold t4, perform global threshold segmentation on the enhanced image to obtain a segmented image, and intersect this segmented image with the third mask image to obtain a fourth mask image.
The third gray average value and the third standard deviation may refer to the calculation of the second gray average value and the second standard deviation, which are not described herein.
4) Filter the fourth mask image to obtain the lower-half character image of the bright region, namely the character segmentation image under the bright region.
Wherein the character segmentation image under the bright area is shown in fig. 10.
5) Fuse the third mask image and the fourth mask image to obtain a complete character-region segmentation image, namely the fused image.
Wherein the fused image is shown in fig. 11.
Wherein the enhanced image of the original image can be obtained using the following formula:

g(x, y) = f(x, y) + c · ∇²f(x, y) (equation five)

where g(x, y) is the enhanced image, f(x, y) is the original image, ∇²f(x, y) is the image after Laplace convolution, i.e., the convolution image, and c is a coefficient.
The fused image is rotated 180 degrees clockwise for angle correction; the rotated fused image is shown in fig. 12. Subsequent character recognition can then read the complete wafer character to be processed from fig. 12.
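A 180-degree correction is direction-agnostic (clockwise and counter-clockwise coincide at 180 degrees); in NumPy it is two successive 90-degree rotations:

```python
import numpy as np

fused = np.array([[1, 0],
                  [0, 2]])            # stand-in for the fused character image
corrected = np.rot90(fused, 2)        # rotate by 180 degrees
```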
It should be noted that the parameters of the translation distance, the character height, the coefficient, and the like are determined by the specific situation of the character of the wafer to be processed.
The application also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to implement the wafer character preprocessing method provided by any one of the foregoing method embodiments when executing the computer program.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of a computer readable storage medium provided in the present application, where the computer readable storage medium 90 is used to store a computer program 91, and the computer program 91 when executed by a processor is used to implement the following method steps:
acquiring a character dark region segmentation image and a character bright region segmentation image of an original image, wherein the original image contains wafer characters to be processed;
respectively filtering the character dark region segmentation image and the character bright region segmentation image to obtain the corresponding character segmentation image on the dark region and character segmentation image under the bright region;
fusing the character segmentation image in the dark area with the character segmentation image in the bright area to obtain a fused image;
and identifying the wafer character to be processed based on the fusion image to obtain an identification result of the wafer character to be processed.
It will be appreciated that the computer program 91, when executed by a processor, is also operative to implement aspects of any of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units of the above embodiments may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is only illustrative of embodiments of the present application and does not limit the patent scope of the present application; any equivalent structure or equivalent process made using the contents of the specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, falls within the patent protection scope of the present application.

Claims (10)

1. A method for preprocessing wafer characters, the method comprising:
acquiring a character dark region segmentation image and a character bright region segmentation image of an original image, wherein the original image contains wafer characters to be processed;
respectively filtering the character dark region segmentation image and the character bright region segmentation image to obtain a character segmentation image on a dark region and a character segmentation image under a bright region which are respectively corresponding to each other;
fusing the character segmentation image on the dark area with the character segmentation image on the bright area to obtain a fused image;
and identifying the wafer character to be processed based on the fusion image to obtain an identification result of the wafer character to be processed.
2. The method of claim 1, wherein the acquiring the character dark area segmentation image of the original image comprises:
acquiring a wafer edge segmentation image and a character region segmentation image of the original image;
and obtaining the character dark region segmentation image based on the wafer edge segmentation image and the character region segmentation image.
3. The method of claim 2, wherein the acquiring the wafer edge segmentation image and the character region segmentation image of the original image comprises:
global constant value segmentation is carried out on the original image according to a first preset threshold value, and a wafer edge segmentation image is obtained;
based on the wafer edge segmentation image, obtaining the outline of the wafer edge;
based on the outline of the wafer edge, obtaining an initial boundary of a character area;
obtaining a first mask image of the original image based on the initial boundary of the character area;
and obtaining a character region segmentation image of the original image based on the first mask image.
4. The method of claim 3, wherein the deriving the initial boundary of the character area based on the outline of the wafer edge comprises:
obtaining the lower boundary of the wafer edge based on the leftmost point position and the rightmost point position of the contour;
obtaining upper and lower boundaries of the character area based on the lower side edges;
obtaining initial left and right boundaries of the region corresponding to the wafer character to be processed based on the leftmost point position and the rightmost point position of the wafer edge outline;
and taking the upper and lower boundaries and the initial left and right boundaries of the character area as initial boundaries of the character area.
5. The method of claim 2, wherein the obtaining the character dark region segmentation image based on the wafer edge segmentation image and the character region segmentation image comprises:
obtaining an initial boundary of a dark area based on the wafer edge segmentation image and the character area segmentation image;
obtaining a second mask image of the original image based on the initial boundary of the dark area;
acquiring a second gray level mean value and a second standard deviation of the original image in a preset area of the second mask image;
obtaining a third threshold value based on the second gray level mean value and the second standard deviation, and performing global threshold segmentation on the original image according to the third threshold value to obtain a third initial segmentation image;
and obtaining the character dark region segmentation image based on the third initial segmentation image and the second mask image.
6. The method of claim 5, wherein the obtaining an initial boundary of a dark region based on the wafer edge segmentation image and the character region segmentation image comprises:
obtaining upper and lower boundaries of a dark region based on the wafer edge segmentation image;
dividing an image based on the character region to obtain left and right boundaries of the dark region;
and taking the upper and lower boundaries and the left and right boundaries of the dark region as initial boundaries of the dark region.
7. The method of claim 5, wherein acquiring the character bright region segmentation image comprises:
dividing an image based on the character dark region to obtain a fine boundary of the character dark region;
generating a third mask image according to the fine boundary of the dark character area and the preset character height;
obtaining a mask image of a character bright area based on the third mask image and the second mask image;
carrying out convolution processing on the original image to obtain a convolution image;
obtaining an enhanced image of the original image based on the convolution image and the original image;
acquiring a third gray average value and a third standard deviation of the enhanced image in a preset area of the mask image of the character bright area;
and obtaining a fourth threshold value based on the third gray average value and the third standard deviation, and performing global threshold segmentation on the enhanced image according to the fourth threshold value to obtain the character bright region segmentation image.
8. The method of claim 7, wherein the global thresholding of the enhanced image to the fourth threshold to obtain the character bright region segmented image comprises:
global threshold segmentation is carried out on the enhanced image according to the fourth threshold to obtain a fourth initial segmentation image;
and obtaining the character bright region segmentation image based on the fourth initial segmentation image and the third mask image.
9. A computer device, characterized in that it comprises a memory on which a computer program is stored and a processor which, when executing the computer program, implements the method according to any of claims 1-8.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1-8.
CN202410144365.5A 2024-02-01 2024-02-01 Wafer character preprocessing method, device and storage medium Active CN117690142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410144365.5A CN117690142B (en) 2024-02-01 2024-02-01 Wafer character preprocessing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410144365.5A CN117690142B (en) 2024-02-01 2024-02-01 Wafer character preprocessing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN117690142A true CN117690142A (en) 2024-03-12
CN117690142B CN117690142B (en) 2024-05-28

Family

ID=90126874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410144365.5A Active CN117690142B (en) 2024-02-01 2024-02-01 Wafer character preprocessing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117690142B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004094967A (en) * 2003-10-20 2004-03-25 Nippon Telegr & Teleph Corp <Ntt> Method and device for deciding character area and recording medium
JP2005276188A (en) * 2004-02-26 2005-10-06 Yokohama Tlo Co Ltd Handwritten character removing image processor and handwritten character removing image processing method
JP2006053523A (en) * 2004-03-16 2006-02-23 Pioneer Electronic Corp Image processing apparatus, display device, image processing method, and program
CN102509095A (en) * 2011-11-02 2012-06-20 青岛海信网络科技股份有限公司 Number plate image preprocessing method
CN104361336A (en) * 2014-11-26 2015-02-18 河海大学 Character recognition method for underwater video images
CN106650728A (en) * 2016-12-09 2017-05-10 浙江浩腾电子科技股份有限公司 Shadow license plate image binarization method
CN108205675A (en) * 2016-12-20 2018-06-26 浙江宇视科技有限公司 The processing method and equipment of a kind of license plate image
CN112053367A (en) * 2019-06-06 2020-12-08 阿里巴巴集团控股有限公司 Image processing method, apparatus and storage medium
CN112967215A (en) * 2021-03-03 2021-06-15 辽宁工程技术大学 Retinex image enhancement algorithm based on Laplacian pyramid reconstruction
CN114235758A (en) * 2021-12-10 2022-03-25 苏州凌云视界智能设备有限责任公司 Defect detection method, device, equipment and storage medium
CN114581901A (en) * 2022-03-14 2022-06-03 浙江广厦建设职业技术大学 Method for extracting edges of ancient building wall contaminated inscription character images
CN115578284A (en) * 2022-07-18 2023-01-06 芯动微电子科技(珠海)有限公司 Multi-scene image enhancement method and system
CN116503871A (en) * 2023-03-29 2023-07-28 安徽省配天机器人集团有限公司 Character segmentation preprocessing method, terminal device and computer readable storage medium


Also Published As

Publication number Publication date
CN117690142B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN110717489B (en) Method, device and storage medium for identifying text region of OSD (on Screen display)
CN104751142B (en) A kind of natural scene Method for text detection based on stroke feature
US9235762B2 (en) Iris data extraction
EP1229493B1 (en) Multi-mode digital image processing method for detecting eyes
CN113781402A (en) Method and device for detecting chip surface scratch defects and computer equipment
CN111915704A (en) Apple hierarchical identification method based on deep learning
JP2002259994A (en) Automatic image pattern detecting method and image processor
Alkoffash et al. A survey of digital image processing techniques in character recognition
JP2002342756A (en) Method for detecting position of eye and mouth in digital image
CN104239909A (en) Method and device for recognizing images
US20160180198A1 (en) System and method for determining clutter in an acquired image
CN109241973A (en) A kind of full-automatic soft dividing method of character under grain background
CN112686265A (en) Hierarchic contour extraction-based pictograph segmentation method
CN112634288A (en) Equipment area image segmentation method and device
CN115439523A (en) Method and equipment for detecting pin size of semiconductor device and storage medium
CN112258532B (en) Positioning and segmentation method for callus in ultrasonic image
CN111898408B (en) Quick face recognition method and device
CN117853510A (en) Canny edge detection method based on bilateral filtering and self-adaptive threshold
CN113076952A (en) Method and device for automatically identifying and enhancing text
Choukikar et al. Segmenting the optic disc in retinal images using thresholding
Feng et al. A weighted-ROC graph based metric for image segmentation evaluation
CN117690142B (en) Wafer character preprocessing method, device and storage medium
CN112614138A (en) Image processing apparatus, image processing system, storage medium, and image processing method
CN116363097A (en) Defect detection method and system for photovoltaic panel
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant