CN113256700A - Layer thickness detection method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN113256700A (application number CN202110581681.5A)
- Authority
- CN
- China
- Prior art keywords
- detection
- layer
- target
- determining
- average image
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/60 — Image analysis; analysis of geometric attributes
- G01N23/04 — Investigating or analysing materials by transmitting wave or particle radiation (e.g. X-rays or neutrons) through the material and forming images of the material
- G01N23/2251 — Investigating or analysing materials by measuring secondary emission using incident electron beams, e.g. scanning electron microscopy [SEM]
- G06T7/13 — Image analysis; segmentation; edge detection
- G01N2223/633 — Specific applications or type of materials: thickness, density, surface weight (unit area)
- G06T2207/10061 — Image acquisition modality: microscopic image from scanning electron microscope
Abstract
Embodiments of the present application provide a layer thickness detection method and device, an electronic device, and a readable storage medium. The layer thickness detection method includes: acquiring a target image; preprocessing the target image to obtain a plurality of detection areas corresponding to the target image; acquiring the first average image intensities of each detection area in the plurality of detection areas, wherein each detection area contains a plurality of first average image intensities; determining the layer boundary of the partial image corresponding to each detection area according to the plurality of first average image intensities of that detection area; and determining the layer thickness of the target image according to the plurality of layer boundaries in one-to-one correspondence with the plurality of detection areas. In this way the accuracy of the obtained layer thickness result can be improved, and because the detection process does not depend on manual measurement by a user, the repeatability and reproducibility of the detection result are ensured.
Description
Technical Field
The application relates to the technical field of image processing, in particular to a layer thickness detection method and device, electronic equipment and a readable storage medium.
Background
Currently, thickness detection of the oxide-block layer in ONOP (O for oxide, N for nitride, P for polysilicon) channel hole structures relies on computer-vision edge detection to roughly determine the layer boundaries, after which the layer thickness is computed manually from these rough boundaries: the user draws distance-measurement lines between the boundaries and samples the thickness at various parts of the image. However, edge detection only works reliably when the image is sufficiently clear, and because the ONOP channel hole structure is small, the microscope image is very noisy. When measuring thickness, users tend to pick the clearest parts of the image, so the clear parts of the microscope image are over-sampled while noisy areas are under-sampled; as a result, the measurement error is large and the measured values cannot be reproduced.
Disclosure of Invention
The embodiment of the application provides a layer thickness detection method and device, electronic equipment and a readable storage medium, so as to reduce measurement errors and improve the accuracy of thickness detection.
In a first aspect, an embodiment of the present application provides a method for detecting a layer thickness, including:
acquiring a target image, wherein the target image comprises a target image layer;
preprocessing the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to a part of the target layer, and each detection area contains both boundaries of the target layer;
acquiring a first average image intensity of each detection area in the plurality of detection areas, wherein each detection area comprises a plurality of first average image intensities;
determining the layer boundary of the part of the target layer corresponding to each detection area according to the plurality of first average image intensities in that detection area;
and determining the layer thickness of the target layer according to a plurality of layer boundaries which are in one-to-one correspondence with the plurality of detection areas.
In a second aspect, an embodiment of the present application provides an apparatus for detecting a layer thickness, including:
a first acquisition unit, configured to acquire a target image, the target image including a target layer;
a processing unit, configured to preprocess the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to a part of the target layer, and each detection area contains both boundaries of the target layer;
a second acquisition unit configured to acquire a first average image intensity of each of the plurality of detection areas, wherein each of the plurality of detection areas includes a plurality of first average image intensities;
a first determining unit, configured to determine, according to the multiple first average image intensities in each detection area, a layer boundary of a portion of a target layer corresponding to each detection area;
and the second determining unit is used for determining the layer thickness of the target layer according to a plurality of layer boundaries which correspond to the plurality of detection areas one by one.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and one or more programs, stored in the memory and configured to be executed by the processor, where the program includes instructions for executing the steps in any of the methods described in the first or second aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in any one of the methods of the first aspect or the second aspect of the embodiments of the present application.
It can be seen that, in the embodiments of the present application, a target image is first obtained and preprocessed to obtain a plurality of detection areas. The first average image intensities of each detection area are then obtained (each detection area contains a plurality of first average image intensities), the layer boundary of the part of the target layer corresponding to each detection area is determined from those intensities, and finally the layer thickness of the target layer is determined from the plurality of layer boundaries in one-to-one correspondence with the plurality of detection areas. In this way the accuracy of the obtained layer thickness result can be improved, and because the detection process does not depend on manual measurement by a user, the repeatability and reproducibility of the detection result are ensured.
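Expressed as code, the claimed pipeline reads roughly as follows. This is a minimal sketch, not the patent's exact algorithm: the boolean-mask representation of detection areas, the brighter-than-mean rule for locating the two boundaries in the intensity profile, and all names are illustrative assumptions.

```python
import numpy as np

def detect_layer_thickness(image, regions):
    """Sketch: per detection region, average the image intensity row by
    row to get a 1-D profile (the "first average image intensities"),
    locate the two layer boundaries in that profile, and combine the
    per-region thicknesses. Boundary rule and averaging are assumptions."""
    thicknesses = []
    for mask in regions:  # one boolean mask per detection region
        rows = [image[r][mask[r]] for r in range(image.shape[0]) if mask[r].any()]
        profile = np.array([row.mean() for row in rows])   # first average intensities
        bright = np.where(profile > profile.mean())[0]     # layer assumed brighter
        if bright.size:                                    # boundaries = first/last crossing
            thicknesses.append(bright[-1] - bright[0] + 1) # thickness in pixels
    return float(np.mean(thicknesses)) if thicknesses else 0.0
```

A usage example: on a synthetic image whose bright band spans four rows, the sketch reports a thickness of four pixels.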
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of another electronic device provided in the embodiment of the present application;
fig. 1c is a schematic view of an ONOP channel hole structure according to an embodiment of the present disclosure;
FIG. 1d is a schematic representation of a microscopic image of an oxidized block provided in an embodiment of the present application;
fig. 2a is a schematic flowchart of a layer thickness detection method according to an embodiment of the present application;
fig. 2b is a schematic diagram of normal line acquisition in a layer thickness detection method according to an embodiment of the present application;
fig. 2c is a schematic diagram of an initial mark point of a method for detecting a layer thickness according to an embodiment of the present application;
fig. 2d is a schematic diagram illustrating a mark line acquisition in a layer thickness detection method according to an embodiment of the present application;
fig. 2e is a schematic diagram illustrating determining a layer boundary according to a layer thickness detection method provided in an embodiment of the present application;
fig. 2f is a schematic diagram illustrating determining a layer thickness according to a layer thickness detection method provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of another layer thickness detection method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for detecting a layer thickness according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of another layer thickness detection apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the technical solution of the embodiment of the present application, an electronic device and a layer thickness detection system that may be related to the embodiment of the present application are first described below.
Referring to fig. 1a, fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device 101 includes a layer thickness detection device 102, and the layer thickness detection device 102 is configured to detect a layer thickness in an image. The layer thickness detection device 102 preprocesses a target image acquired by the electronic device 101, and detects the layer thickness of the target image based on the preprocessed target image. After obtaining the layer thickness determined by the layer thickness detection apparatus 102, the electronic device 101 may send the detection result to another corresponding electronic device, or directly display the detection result on a screen of the electronic device 101.
Specifically, the electronic device shown in fig. 1a may further include a structure as follows, please refer to fig. 1b, where fig. 1b is a schematic structural diagram of another electronic device provided in the embodiment of the present application. As shown in the figure, the electronic device may implement the steps in the layer thickness detection method, where the electronic device 100 includes an application processor 120, a memory 130, a communication interface 140, and one or more programs 131, where the one or more programs 131 are stored in the memory 130 and configured to be executed by the application processor 120, and the one or more programs 131 include instructions for executing any step in the following method embodiments.
The communication unit is used to support communication between the electronic device and other devices. The terminal may further include a storage unit for storing program code and data of the terminal.
The Processing Unit may be an Application Processor 120 or a controller, such as a Central Processing Unit (CPU), a general purpose Processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, units, and circuits described in connection with the disclosure. The processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, DSPs, and microprocessors, among others. The communication unit may be the communication interface 140, the transceiver, the transceiving circuit, etc., and the storage unit may be the memory 130.
The memory 130 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of Random Access Memory (RAM) are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct bus RAM (DR RAM).
In a specific implementation, the application processor 120 is configured to perform any one of the steps performed by the electronic device in the method embodiments described below, and when performing data transmission such as sending, optionally invokes the communication interface 140 to complete the corresponding operation.
Referring to fig. 1c, fig. 1c is a schematic diagram of an ONOP channel hole structure provided in an embodiment of the present application; (a) in fig. 1c is a perspective view of the structure and (b) is a cross-sectional view. As can be seen from the figure, the ONOP channel hole structure includes multiple layers, each of which can be regarded as an approximate circular ring. Referring to fig. 1d, fig. 1d is a schematic diagram of an oxide-block microscopic image according to an embodiment of the present application; the approximate circle indicated by the arrow in fig. 1d is the oxide-block layer. It can be seen that the radius of the channel hole in the ONOP structure is small and the microscopic image is very noisy. At present, thickness detection of the oxide-block layer in the ONOP channel hole structure relies on roughly determining the layer boundary with computer-vision edge detection and then computing the thickness manually from the rough boundary; because the microscopic image of a small-radius channel hole is unclear and the detection depends mainly on manual operation, the detected result has a large error and the measured result cannot be reproduced.
With reference to the above description, the following describes steps performed by a layer thickness detection method from the perspective of a method example, please refer to fig. 2a, and fig. 2a is a schematic flow chart of a layer thickness detection method according to an embodiment of the present application. As shown in the figure, the layer thickness detection method includes:
s201, acquiring a target image.
The target image includes a target layer; that is, the target image is an image containing a layer whose thickness needs to be measured. The target image may be a microscopic image acquired by a microscope, for example an image acquired by a Transmission Electron Microscope (TEM) or a Scanning Electron Microscope (SEM).
S202, preprocessing the target image to obtain a plurality of detection areas.
Each detection area in the plurality of detection areas corresponds to a part of the target layer, and each detection area contains both boundaries of the target layer. The target image contains an oxide-block layer whose thickness is to be detected, and this layer has an approximately ring-like shape in the target image, so the detection areas corresponding to it are also arranged in a ring. That is, the detection areas together cover the entire oxide-block layer.
In one possible example, the preprocessing the target image to obtain a plurality of detection regions corresponding to the target image includes: acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer; acquiring a normal corresponding to each marking point in the plurality of marking points; and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
The mark points are located in the oxide-block layer and divide it into a plurality of sections; that is, connecting the plurality of mark points also yields an approximately ring-like shape. Each mark point has a corresponding normal line, and a detection area may be an area bounded by two of these normal lines. As shown in fig. 2b, fig. 2b is a schematic diagram of normal line acquisition in a layer thickness detection method according to an embodiment of the present application: (a) in fig. 2b contains 3 mark points, A(x1, y1), B(x2, y2) and C(x3, y3). To obtain the normal of mark point B, the connecting line between mark point A and mark point C is obtained first; the line through B parallel to this connecting line is taken as the tangent at B, and the perpendicular to this tangent through B is the normal of mark point B. By analogy, the normal of each mark point can be obtained, as shown in (b) in fig. 2b.
Therefore, the normal is determined according to the mark points, the detection areas are determined according to the normal, and the layer thickness is determined according to each detection area, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the artificial error in the detection process can be reduced, and the reproducibility of the layer thickness detection result is ensured.
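The neighbour-chord construction of Fig. 2b can be sketched as follows. Taking the tangent at B as the line parallel to chord A-C is equivalent to using the chord direction itself; the endpoint handling (reusing the one available neighbour) and all names are assumptions of this sketch.

```python
import numpy as np

def normals(points):
    """For each marker point B, approximate the tangent by the chord
    between its neighbours A and C, then rotate it 90 degrees to get
    the normal direction. Endpoints reuse their single neighbour chord."""
    pts = np.asarray(points, dtype=float)
    nrm = []
    for i in range(len(pts)):
        a = pts[max(i - 1, 0)]          # previous marker (or self at the start)
        c = pts[min(i + 1, len(pts) - 1)]  # next marker (or self at the end)
        t = c - a                        # chord A->C approximates the tangent at B
        t /= np.linalg.norm(t)
        nrm.append((-t[1], t[0]))        # 90-degree rotation -> unit normal
    return nrm
```

For three collinear horizontal points the middle normal comes out vertical, as expected.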
In one possible example, the acquiring a plurality of detection areas according to the normals includes: determining the two mark points adjacent to the current mark point as the preceding mark point and the following mark point; determining the area bounded by the normal of the preceding mark point and the normal of the following mark point as the detection area corresponding to the current mark point, wherein this detection area contains the current mark point; and repeating these steps until the last mark point has served as the current mark point.
Since the oxide-block layer is approximately ring-shaped, every mark point in the target image serves in turn as a preceding mark point and as a following mark point while the detection areas are determined. In a specific implementation, the area that contains the current mark point and is bounded by the normals of its two adjacent mark points is taken as a detection area. For example, the two mark points adjacent to mark point 2 are mark point 1 and mark point 3; if the normal of mark point 1 is normal a and the normal of mark point 3 is normal b, the image area containing mark point 2 and bounded by normal a and normal b is the detection area of mark point 2. This continues until every mark point has a corresponding detection area, i.e. the image area of the oxide-block layer is covered by all detection areas, each of which of course has to pass through the layer. Alternatively, an image area bounded by two non-adjacent normals may serve as a detection area shared by the mark points it contains; for example, if mark point 1 corresponds to normal a, mark point 2 to normal b, mark point 3 to normal c and mark point 4 to normal d, the image area between normal a and normal d can be determined as the detection area shared by mark point 2 and mark point 3, and so on.
Therefore, in the example, the detection area is determined according to the normal line, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the artificial error in the detection process can be reduced, and the reproducibility of the layer thickness detection result can be ensured.
In one possible example, the acquiring a plurality of marker points according to the target image includes: acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer; acquiring a marking line according to the plurality of initial marking points; and acquiring a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
The initial mark points may be marked manually in the target image, as shown in fig. 2c (fig. 2c is a schematic diagram of the initial mark points of the layer thickness detection method provided in an embodiment of the present application): a plurality of initial mark points are marked in sequence, along the layer direction, on the layer whose thickness is to be detected. The mark line may be the polyline connecting the initial mark points, or the initial mark points may be processed first and then connected; alternatively, the polyline connecting the initial mark points is taken as an initial mark line, which is then smoothed to obtain the mark line. After the mark line is obtained it can be divided into equal parts, i.e. interpolated by arc length, yielding a plurality of equally spaced mark points. The number of mark points can be chosen according to the state of the image: for a clearer image a smaller interpolation width can be used (short spacing, relatively many mark points), whereas for a noisy image a larger width is needed (large spacing, relatively few mark points). In a specific implementation, the number of mark points, or the spacing between them, can be set according to the image size in pixels.
Therefore, in the example, the initial marking points of the artificial marking are firstly obtained, then the marking lines are obtained according to the initial marking points, and the marking points are obtained according to the marking lines, so that the distribution of the marking points is more uniform, the errors generated during the artificial marking can be reduced, the accuracy of the detection result is improved, and the data precision of the detection result can be improved.
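The equal-division step above can be sketched as interpolation over cumulative arc length, assuming the mark line is given as an ordered polyline; the number of output points n is a caller choice here, whereas the patent ties it to image clarity and pixel size.

```python
import numpy as np

def resample_by_arc_length(line, n):
    """Return n points equally spaced along the arc length of the
    polyline `line` (an (m, 2) array-like of x, y coordinates)."""
    line = np.asarray(line, dtype=float)
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)  # per-segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    target = np.linspace(0.0, s[-1], n)                  # equal arc-length stations
    x = np.interp(target, s, line[:, 0])                 # interpolate each coordinate
    y = np.interp(target, s, line[:, 1])
    return np.column_stack([x, y])
```

For instance, resampling the segment from (0, 0) to (2, 0) with n = 3 yields the midpoint (1, 0) between the endpoints.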
In one possible example, the obtaining a marking line according to the plurality of initial marking points includes: acquiring the centroid of the target layer; acquiring the distances of the plurality of initial mark points relative to the centroid respectively; and smoothing the initial mark point according to the distance to obtain a mark line.
The centroid can be set by a user according to user requirements, or obtaining the centroid of the target layer may include: determining a first initial mark point and a second initial mark point, where the first initial mark point is any one of the plurality of initial mark points and the second initial mark point is, among the other initial mark points, the one farthest from the first; acquiring the connecting line of the first and second initial mark points; repeating these steps until, for each initial mark point, the connecting line to its farthest initial mark point has been obtained; acquiring the plurality of intersection points of these connecting lines; and determining the intersection point through which the largest number of connecting lines pass as the centroid. For example, if the connecting line of initial mark points a and b and the connecting line of initial mark points e and f both pass through O, while the connecting line of initial mark points c and d passes through P, the intersection point O is determined as the centroid.
In a specific implementation, when the initial mark points are smoothed, the distance between each initial mark point and the centroid may be obtained, an average distance is determined according to the distance, and a mark line is determined according to the average distance, that is, the distance from each point on the mark line to the centroid is equal. Or the distance of each initial marking point relative to the centroid can be obtained, a distance interval is obtained according to the distance, the distance interval comprises a preset number of distances, the average value of the distance interval is determined, and a marking line is determined according to the average value. That is, if the distance between the centroid and the initial mark point 1 is 4nm, the distance between the initial mark point 2 is 5nm, the distance between the initial mark point 3 is 3nm, and the distance between the initial mark point 4 is 7nm, the number of the initial mark points in the distance interval between 3nm and 5nm is three, and the distance interval between 3nm and 5nm can be determined to be 3nm to 5nm if the preset number is satisfied, so that the distance from each point on the mark line to the centroid is 4 nm. And when a plurality of distance intervals meeting the preset number are available, determining a marking line according to the distance interval containing the maximum number of the initial marking points, wherein the marking line is the connecting line of the initial marking points after the smoothing treatment. When the initial mark point is smoothed, the smoothing can be realized by a sagvol filter. As shown in fig. 2d, fig. 2d is a schematic diagram of obtaining a mark line of a layer thickness detection method provided in an embodiment of the present application, where (a) in fig. 2d shows a position of an initial mark point, and (a) in fig. 2d shows a position of a five-pointed star as a position of a centroid. 
After the centroid is found, the distance between each initial mark point and the centroid is obtained. As shown in (b) of fig. 2d, the points on the dotted line are the initial mark points: each initial mark point is labeled, and its distance is then plotted in the coordinate system. The points on the solid line in (b) of fig. 2d represent the distance of each initial mark point after the smoothing processing, from which the mark line can be determined; as shown in (c) of fig. 2d, the white dotted line is the mark line.
Therefore, in this embodiment, the initial mark points are smoothed according to the distance from each initial mark point to the centroid, and the mark line is finally obtained, so that the positions of the mark points obtained subsequently are more reasonable. Errors introduced during manual marking can thus be reduced, and both the accuracy and the data precision of the detection result are improved.
In a possible example, before determining the layer boundary of the portion of the target layer corresponding to each detection region according to the plurality of first average image intensities in each detection region, the method further includes: determining image noise of a partial image corresponding to each detection area; deleting the detection areas with the image noise higher than the preset noise in the plurality of detection areas.
The target image comprises a plurality of detection areas, and the state of the image corresponding to each detection area can be determined. When the image noise of the image corresponding to the current detection area is too high, the detection area can be deleted, that is, the image in that detection area is not processed subsequently and no layer boundary is acquired for it.
Therefore, in this embodiment, when the image noise is higher than the preset value, the corresponding sample is rejected, which ensures the accuracy and precision of the data processing and improves the processing efficiency.
S203, acquiring a first average image intensity of each detection area in the plurality of detection areas.
Wherein each of the detection regions comprises a plurality of first average image intensities. Taking any one of the detection areas as an example, as shown in fig. 2e, fig. 2e is a schematic diagram for determining the layer boundary in the layer thickness detection method provided in the embodiment of the present application. As shown in (a) of fig. 2e, the image area formed by the two black lines may be regarded as one detection area containing a mark point. The position of the mark point is set as the origin 0, and distances of 80 pixels are then measured on each side of the mark point along the normal. A plurality of first average image intensities can thus be obtained within the -80 to 80 pixel range.
In one possible example, the obtaining the first average image intensity of each of the plurality of detection areas includes: determining a plurality of detection sub-regions contained in each detection region, wherein the plurality of detection sub-regions are distributed along the normal direction corresponding to each detection region; and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
As shown in (a) of fig. 2e, a detection sub-region can be regarded as the image region corresponding to one white line in the figure. When acquiring a detection sub-region, the set of points in the detection region that have the same distance from the origin may be treated as one detection sub-region. For example, the set of points that are all 30 pixels from the origin in the current detection region may be determined as one detection sub-region, and the sets of points at 20 pixels, 15 pixels, or -20 pixels from the origin may then be determined in turn as further detection sub-regions, so as to obtain a plurality of detection sub-regions in one detection region. The average of the image intensities of all points contained in each detection sub-region can be regarded as the first average image intensity of that detection sub-region.
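Under the simplifying assumption that the detection region has been resampled into a rectangular array whose rows run perpendicular to the normal (so each row is one detection sub-region), the first average image intensities can be sketched with NumPy as follows; the function and parameter names are illustrative, not from the embodiment.

```python
import numpy as np

def first_average_intensities(region, origin_row, half_range=80):
    # region: 2-D intensity array for one detection area, with the normal
    # direction along axis 0 and the mark point at origin_row.
    # Each detection sub-region is the set of pixels at the same signed
    # distance from the mark point; the mean intensity of each row is one
    # "first average image intensity".
    lo = max(0, origin_row - half_range)
    hi = min(region.shape[0], origin_row + half_range + 1)
    offsets = np.arange(lo, hi) - origin_row   # signed pixel distances, -80..80
    intensities = region[lo:hi].mean(axis=1)   # one mean per detection sub-region
    return offsets, intensities
```

In a real image the sub-regions follow curved normals rather than straight rows, so an interpolated resampling step would precede this.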
It can be seen that in this example, a plurality of detection sub-regions are divided and the first average image intensity of each detection sub-region is determined, so as to obtain the layer boundary of the part of the target layer corresponding to each detection region, which enhances the accuracy of the determined layer boundary.
And S204, determining the layer boundary of a part of target layers corresponding to each detection area according to the plurality of first average image intensities in each detection area.
Since the image corresponding to each detection area contains part of the layer whose thickness needs to be detected, the boundary of that part of the layer can be obtained taking one detection area as the unit.
In a possible example, the determining, according to the plurality of first average image intensities in each detection area, the layer boundary of the portion of the target layer corresponding to each detection area includes: dividing the plurality of detection sub-regions in each detection region according to the positions of the plurality of detection sub-regions to obtain a first set and a second set; acquiring a first layer boundary according to the first average image intensity and the position of a plurality of detection subregions contained in the first set; acquiring a second layer boundary according to the first average image intensity and the position of the plurality of detection subregions included in the second set; and determining that a first layer boundary of the first set and a second layer boundary of the second set corresponding to each detection area are two layer boundaries of a part of target layers corresponding to each detection area respectively.
As shown in (a) of fig. 2e, since the distance from the origin includes forward and reverse distances, two sets can be obtained according to the position of each detection sub-region, where the position of a detection sub-region may be its distance from the origin. One set includes all detection sub-regions whose distance from the origin is a forward distance, and the other set includes all detection sub-regions whose distance from the origin is a reverse distance. For example, detection sub-regions at 0 to 80 pixels from the origin are divided into the first set, and detection sub-regions at -80 to 0 pixels from the origin are divided into the second set. The first layer boundary corresponding to the first set and the second layer boundary corresponding to the second set are then determined according to the first average image intensity and the position of each detection sub-region in the respective set. Because the first set and the second set both correspond to the same detection region, the first layer boundary and the second layer boundary are the two layer boundaries of that detection region.
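The split into the two sets can be sketched with a small helper (the helper is hypothetical, not part of the embodiment): sub-regions at non-negative offsets go to the first set and those at non-positive offsets to the second, each keeping its (position, first average intensity) pair. Following the 0-to-80 and -80-to-0 ranges above, the origin sub-region belongs to both sets.

```python
def split_detection_sets(offsets, intensities):
    # Pair each detection sub-region's signed distance from the origin with
    # its first average image intensity, then split by the sign of the distance.
    first = [(o, v) for o, v in zip(offsets, intensities) if o >= 0]   # forward
    second = [(o, v) for o, v in zip(offsets, intensities) if o <= 0]  # reverse
    return first, second
```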
Therefore, in the example, the two layer boundaries are respectively determined, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the artificial error in the detection process can be reduced, and the reproducibility of the layer thickness detection result is ensured.
In a possible example, the obtaining a first layer boundary according to the first average image intensity and the position of a plurality of detection sub-regions included in the first set includes: determining a plurality of coordinate points in a coordinate system according to the first average image intensity and the position corresponding to the plurality of detection sub-regions included in the first set, wherein the coordinate system relates position to image intensity; performing smoothing treatment on the plurality of coordinate points to obtain a smooth line; acquiring a second average image intensity according to the first average image intensities corresponding to the plurality of coordinate points; obtaining a third average image intensity according to those first average image intensities whose intensity value is greater than the second average image intensity; acquiring a fourth average image intensity according to those first average image intensities whose intensity value is not greater than the second average image intensity; determining the average value of the third average image intensity and the fourth average image intensity as a target average intensity; and determining the first layer boundary according to the target average intensity and the smooth line.
As shown in (a) of fig. 2e, the two black lines are normals, and the area they enclose, containing the current mark point, may be regarded as a detection area; the position of the mark point in the detection area is set as the origin, and distances of 80 pixels are determined on each side of the mark point along the normals. Then, as shown in (b) and (c) of fig. 2e, the points in the coordinate system of (b) are the coordinate points corresponding to the detection sub-regions included in the first set: the abscissa is the pixel distance from the detection sub-region to the origin, indicating its position, and the ordinate is the image intensity value, indicating the first average image intensity. Similarly, the points in the coordinate system of (c) are the coordinate points corresponding to the detection sub-regions included in the second set. By smoothing the points in (b) and (c) of fig. 2e, a smooth curve can be obtained in each case.
When determining the first layer boundary, the first average image intensities corresponding to all coordinate points in the coordinate system may first be averaged to obtain the second average image intensity; as shown in (b) of fig. 2e, the dashed line corresponds to the second average image intensity, whose value here is 4030. After the second average image intensity is obtained, the coordinate points can be divided into two parts: a first part whose first average image intensity is higher than the second average image intensity, and a second part whose first average image intensity is not higher than it. The average of the first average image intensities of the coordinate points in the first part is then taken as the third average image intensity, and the average over the second part as the fourth average image intensity. As shown in (b) of fig. 2e, the chain line and the chain double-dashed line correspond to the third and fourth average image intensities, namely 4300 and 3800 respectively. The average of the third and fourth average image intensities, 4050, is then taken as the target average intensity, shown as the horizontal line in the figure. Finally, the distance corresponding to the intersection of this horizontal line with the smooth line is determined as the position of the boundary. As shown in (b) of fig. 2e, the vertical line marks the corresponding abscissa, that is, the boundary lies at -30 pixels from the origin, i.e. from the mark point. The specific position of the boundary can then be located in (a) of fig. 2e, and the boundary so determined can be regarded as the upper boundary of the layer. Similarly, as shown in (c) of fig. 2e, the second layer boundary corresponding to the second set may be obtained in the same manner.
In a specific implementation, obtaining the third average image intensity from those first average image intensities greater than the second average image intensity may include: obtaining a confidence interval with a confidence degree greater than a preset value from the first average image intensities of the coordinate points higher than the second average image intensity, and determining the average of the first average image intensities corresponding to the confidence interval as the third average image intensity. For example, if 5 coordinate points have a first average image intensity higher than the second average image intensity, and 90% of their first average intensities lie within the confidence interval [4250, 4350], the third average image intensity can be taken as the average of 4250 and 4350, that is, 4300. The fourth average image intensity may be calculated according to the same steps, which are not described again here.
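The boundary computation of S204, with the second, third, and fourth average intensities and the crossing of the target level, might be sketched as follows. The linear interpolation between samples stands in for the smooth line, and all names are illustrative assumptions.

```python
import numpy as np

def layer_boundary(offsets, intensities):
    x = np.asarray(offsets, dtype=float)       # sub-region positions (pixels)
    y = np.asarray(intensities, dtype=float)   # first average image intensities
    second = y.mean()                          # mean of all first averages
    third = y[y > second].mean()               # mean of the values above it
    fourth = y[y <= second].mean()             # mean of the values at/below it
    target = (third + fourth) / 2.0            # target average intensity
    # Boundary: where the intensity profile crosses the target level.
    for i in range(len(x) - 1):
        y0, y1 = y[i] - target, y[i + 1] - target
        if y0 == 0:
            return x[i]
        if y0 * y1 < 0:                        # sign change between samples
            return x[i] + (x[i + 1] - x[i]) * y0 / (y0 - y1)
    return None                                # no crossing found
```

With the figure's values (third 4300, fourth 3800), the target level is 4050, and the returned offset plays the role of the -30-pixel boundary position in (b) of fig. 2e.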
Therefore, in the example, the layer boundary is determined according to the first average image intensity, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the artificial errors in the detection process can be reduced, and the reproducibility of the layer thickness detection result is ensured.
S205, determining the layer thickness of the target layer according to a plurality of layer boundaries which are in one-to-one correspondence with the plurality of detection areas.
Because one target image comprises a plurality of detection areas, each detection area corresponds to only part of the target image, that is, one detection area contains only part of the layer whose thickness is to be detected. After the layer boundaries of the layer parts in each detection region are obtained, all the layer boundaries can be combined to determine the layer boundary of the target layer in the target image. When the layer thickness is obtained from the layer boundaries, the layer thickness in each detection area can be calculated, or the layer boundary of the target layer can be sampled and the layer thickness obtained from the sampled boundary. The determining the layer thickness of the target layer according to the layer boundaries corresponding one to one to the detection areas includes: acquiring a first boundary line and a second boundary line of the layer according to the layer boundaries; determining an upper intersection point and a lower intersection point of the first boundary line and the second boundary line with the normal corresponding to each mark point; and determining the distance between the upper intersection point and the lower intersection point corresponding to the same mark point as the layer thickness.
In a specific implementation, after the layer thickness of the partial image corresponding to each detection area is obtained, the thickness density distribution proportion is obtained, and the thickness with the largest density distribution proportion is determined as the final thickness of the layer to be detected. As shown in fig. 2f, fig. 2f is a schematic diagram for determining a layer thickness in the layer thickness detection method provided in an embodiment of the present application. As shown in (a) of fig. 2f, the layer boundaries of the corresponding part of the target layer are obtained for each detection area; as shown in (b) of fig. 2f, layer thicknesses are obtained for all the layer boundaries, the thickness density distribution ratios of all the layer thicknesses are determined to find a thickness interval, and the final layer thickness is then determined from that interval. As shown in the figure, when the thickness distribution ratio is highest in the interval from 6.0 to 6.5, the final layer thickness can be determined to lie between 6.0 and 6.5; the average value of the determined interval, or its median, may then be taken as the final layer thickness.
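The density-distribution step can be sketched as follows (the bin width and names are illustrative assumptions): histogram the per-region thicknesses, take the interval containing the most samples, and report its midpoint as the final layer thickness.

```python
def final_thickness(thicknesses, bin_width=0.5):
    # Histogram the per-detection-area thickness values and keep the
    # interval with the highest density of samples.
    bins = {}
    for t in thicknesses:
        bins.setdefault(int(t // bin_width), []).append(t)
    idx = max(bins, key=lambda k: len(bins[k]))
    lo = idx * bin_width
    # Report the midpoint (median) of the winning interval; the interval
    # average of the member thicknesses would be an equally valid choice.
    return lo + bin_width / 2.0
```

For thicknesses clustered in the 6.0-6.5 interval, as in (b) of fig. 2f, this returns 6.25.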
In a possible example, the determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection regions one to one includes: determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection area one to one; determining a plurality of target layer boundaries from the plurality of layer boundaries according to the strength separation rate of each layer boundary; and determining the layer thickness of the target image according to the plurality of target layer boundaries.
After the layer boundary of the part of the target layer corresponding to each detection region is determined, the determined layer boundary may be verified to decide whether it is usable. After the two layer boundaries corresponding to one detection area are obtained, taking one of them as an example: the image intensity values of pixel points outside the layer boundary are lower than those of pixel points inside it. The average image intensity of the image region at each layer boundary may be determined first; then, from the first average image intensities, the proportion of the detection sub-regions outside the layer boundary whose first average image intensity is lower (or higher) than the average image intensity at the boundary is determined. This proportion is the intensity separation rate. If the intensity separation rate is higher than a preset separation rate, the layer boundary is determined to be a usable layer boundary, and if both layer boundaries corresponding to the same detection area are usable, the two layer boundaries can be used to determine the layer thickness. For example, if the first average image intensity of 90% of the detection sub-regions located outside the layer boundary is lower than the average image intensity at the layer boundary, the layer boundary is a usable layer boundary.
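The intensity separation rate can be sketched as follows, under the assumption that the boundary lies on the negative-offset side of the mark point, so "outside" means offsets more negative than the boundary position; the function name and the 90% preset are illustrative.

```python
def intensity_separation_rate(offsets, intensities, boundary, boundary_intensity):
    # Detection sub-regions outside the layer boundary (here: at offsets
    # more negative than the boundary position, i.e. farther from the layer).
    outside = [v for o, v in zip(offsets, intensities) if o < boundary]
    if not outside:
        return 0.0
    # Fraction of outside sub-regions whose first average image intensity is
    # below the average image intensity at the boundary itself.
    return sum(v < boundary_intensity for v in outside) / len(outside)
```

A boundary would then be accepted as usable when the returned rate exceeds the preset separation rate (e.g. 0.9 per the example above).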
Therefore, in the example, the target layer is determined according to the intensity separation rate, and the layer boundary is obtained only according to the target layer, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the artificial error in the detection process can be reduced, and the reproducibility of the layer thickness detection result is ensured.
It can be seen that, in the embodiment of the present application, a target image is first obtained, then the target image is preprocessed to obtain a plurality of detection regions, then a first average image intensity of each detection region in the plurality of detection regions is obtained, where each detection region includes a plurality of first average image intensities, then a layer boundary of a part of a target layer corresponding to each detection region is determined according to the plurality of first average image intensities in each detection region, and finally, a layer thickness of the target layer is determined according to a plurality of layer boundaries corresponding to the plurality of detection regions one to one. Therefore, the accuracy of the obtained layer thickness result can be improved, and the repeatability and reproducibility of the detection result can be ensured because the detection process does not depend on manual measurement of a user.
Referring to fig. 3, fig. 3 is a schematic flow chart of another layer thickness detection method according to an embodiment of the present application, where as shown in the figure, the layer thickness detection method includes the following steps:
S301, acquiring a target image;
S302, acquiring a plurality of mark points according to the target image;
S303, acquiring a normal corresponding to each mark point in the plurality of mark points;
S304, acquiring a plurality of detection areas according to the normal;
S305, acquiring a first average image intensity of each detection area in the plurality of detection areas;
S306, determining layer boundaries of part of target layers corresponding to each detection area according to the plurality of first average image intensities in each detection area;
S307, determining the layer thickness of the target layer according to a plurality of layer boundaries which are in one-to-one correspondence with the plurality of detection areas.
It can be seen that in this example, the target image is obtained first, the mark points in the target image are then determined, and the corresponding normals are determined according to the mark points so as to obtain a plurality of detection areas of the target image; the plurality of first average image intensities corresponding to the detection areas are then obtained, the layer boundaries are determined, and finally the layer thickness is obtained.
Consistent with the above embodiments shown in fig. 2a and fig. 3, please refer to fig. 4; fig. 4 is a schematic structural diagram of an apparatus for detecting a layer thickness according to an embodiment of the present application, where the apparatus 400 for detecting a layer thickness includes: a first obtaining unit 401, configured to obtain a target image, where the target image includes a target layer; a preprocessing unit 402, configured to preprocess the target image to obtain a plurality of detection regions, where each detection region in the plurality of detection regions corresponds to a portion of the target layer, and each detection region includes two boundaries of the target layer; a second obtaining unit 403, configured to obtain a first average image intensity of each detection area in the plurality of detection areas, where each detection area includes a plurality of first average image intensities; a first determining unit 404, configured to determine, according to the multiple first average image intensities in each detection area, a layer boundary of the portion of the target layer corresponding to each detection area; and a second determining unit 405, configured to determine the layer thickness of the target layer according to a plurality of layer boundaries that are in one-to-one correspondence with the plurality of detection areas.
In a possible example, in the aspect of preprocessing the target image to obtain a plurality of detection regions, the preprocessing unit 402 is specifically configured to: acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer; acquiring a normal corresponding to each marking point in the plurality of marking points; and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
In a possible example, in terms of the acquiring the plurality of detection areas according to the normal, the preprocessing unit 402 is specifically configured to: determine that the two marking points adjacent to the current marking point are a previous marking point and a next marking point respectively; determine that the area formed by the normal corresponding to the previous marking point and the normal corresponding to the next marking point is the detection area corresponding to the current marking point, wherein the detection area corresponding to the current marking point comprises the current marking point; determine the next marking point as the current marking point; and repeat the above steps until the last marking point is the current marking point.
In one possible example, in terms of the acquiring the plurality of marker points according to the target image, the preprocessing unit 402 is specifically configured to: acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer; acquiring a marking line according to the plurality of initial marking points; and acquiring a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
In one possible example, in the aspect of obtaining the mark line according to the plurality of initial mark points, the preprocessing unit 402 is specifically configured to: acquiring the centroid of the target layer; acquiring the distances of the plurality of initial mark points relative to the centroid respectively; and smoothing the initial mark point according to the distance to obtain a mark line.
In a possible example, before the determining, according to the plurality of first average image intensities in each detection region, the layer boundary of the portion of the target layer corresponding to each detection region, the apparatus 400 is further configured to: determining image noise of each detection area; deleting the detection areas with the image noise higher than the preset noise in the plurality of detection areas.
In a possible example, in the acquiring the first average image intensity of each of the plurality of detection areas, the second acquiring unit 403 is specifically configured to: determining a plurality of detection sub-regions contained in each detection region, wherein the plurality of detection sub-regions are distributed along the normal direction corresponding to each detection region; and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
In a possible example, in terms of determining, according to the plurality of first average image intensities in each detection area, the layer boundary of the portion of the target layer corresponding to each detection area, the first determining unit 404 is specifically configured to: dividing the plurality of detection sub-regions in each detection region according to the positions of the plurality of detection sub-regions to obtain a first set and a second set; acquiring a first layer boundary according to the first average image intensity and the position of a plurality of detection subregions contained in the first set; acquiring a second layer boundary according to the first average image intensity and the position of the plurality of detection subregions included in the second set; and determining that a first layer boundary of the first set and a second layer boundary of the second set corresponding to each detection area are two layer boundaries of a part of target layers corresponding to each detection area respectively.
In a possible example, in the aspect of obtaining a first layer boundary according to the first average image intensity and the position of the multiple detection sub-regions included in the first set, the first determining unit 404 is specifically configured to: determining a plurality of coordinate points in a coordinate system according to the first average image intensity and the position corresponding to the plurality of detection subregions included in the first set, wherein the coordinate system is a coordinate system related to the position and the image intensity; performing smoothing treatment on the plurality of coordinate points to obtain a smooth line; acquiring second average image intensity according to the first average image intensity corresponding to the plurality of coordinate points; obtaining a third average image intensity according to the first average image intensity of which the intensity value is greater than the second average image intensity in the first average image intensities corresponding to the plurality of coordinate points; acquiring fourth average image intensity according to the first average image intensity of which the intensity value is not greater than the second average image intensity in the first average image intensities corresponding to the plurality of coordinate points; determining an average value of the third average image intensity and the fourth average image intensity as a target average intensity; and determining a first layer boundary according to the target average intensity and the smooth line.
In a possible example, in the aspect of determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection regions one to one, the second determining unit 405 is specifically configured to: determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection area one to one; determining a plurality of target layer boundaries from the plurality of layer boundaries according to the strength separation rate of each layer boundary; and determining the layer thickness of the target image according to the plurality of target layer boundaries.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
In the case of using an integrated unit, as shown in fig. 5, fig. 5 is a schematic structural diagram of another layer thickness detection apparatus provided in an embodiment of the present application. In fig. 5, the layer thickness detection apparatus 500 includes: a processing module 502 and a communication module 501. The processing module 502 is used for control and management of the actions of the apparatus, for example, control and management of the first acquisition unit 401, the preprocessing unit 402, the second acquisition unit 403, the first determination unit 404, and the second determination unit 405 when relevant commands are executed, and/or other processes for executing the techniques described herein. The communication module 501 is configured to support interaction between the layer thickness detection apparatus and other devices. As shown in fig. 5, the layer thickness detection apparatus may further include a storage module 503, where the storage module 503 is configured to store program codes and data of the layer thickness detection apparatus.
The Processing module 502 may be a Processor or a controller, for example a Central Processing Unit (CPU), a general-purpose Processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example a combination of one or more microprocessors, or of a DSP and a microprocessor. The communication module 501 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 503 may be a memory.
For all relevant details of the scenarios involved in the method embodiment, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here. Both the layer thickness detection apparatus 400 and the layer thickness detection apparatus 500 may perform the layer thickness detection method shown in fig. 2a and fig. 3.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or the computer program are loaded or executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers do not limit the implementation of the embodiments of the present application in any way.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in an actual implementation; various elements or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Although the present invention is disclosed above, it is not limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention, and implementations with different functions, different combinations of steps, and different software and hardware realizations all fall within the scope of the present invention.
Claims (13)
1. A layer thickness detection method, characterized by comprising the following steps:
acquiring a target image, wherein the target image comprises a target image layer;
preprocessing the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to one part of the target image layer, and each detection area comprises two boundaries of the target image layer;
acquiring a first average image intensity of each detection area in the plurality of detection areas, wherein each detection area comprises a plurality of first average image intensities;
determining layer boundaries of part of target layers corresponding to each detection area according to the plurality of first average image intensities in each detection area;
and determining the layer thickness of the target layer according to a plurality of layer boundaries which are in one-to-one correspondence with the plurality of detection areas.
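The steps of claim 1 are abstract, so a minimal sketch may help. The Python below is not part of the claim text: it assumes each detection area has already been reduced to a 1-D profile of first average image intensities sampled along the area's normal, takes the boundary where the profile crosses the midpoint between its bright and dark plateaus, and averages the resulting widths into a layer thickness. All function names and the plateau-midpoint rule are illustrative assumptions.

```python
import numpy as np

def boundaries_from_profile(profile):
    """Locate the two layer boundaries in a 1-D intensity profile.

    The profile holds the first average image intensity of each
    detection sub-region sampled along the region's normal.  The
    boundary threshold is the midpoint between the above-mean and
    below-mean plateau averages (a stand-in for the claimed
    'target average intensity').
    """
    profile = np.asarray(profile, dtype=float)
    mean = profile.mean()
    high = profile[profile > mean].mean()   # bright-plateau average
    low = profile[profile <= mean].mean()   # dark-plateau average
    target = (high + low) / 2.0             # target average intensity
    idx = np.flatnonzero(profile >= target)
    # the first and last samples at or above the target bracket the layer
    return idx[0], idx[-1]

def layer_thickness(profiles, spacing=1.0):
    """Average thickness over all detection regions, in sub-region units."""
    widths = []
    for p in profiles:
        lo, hi = boundaries_from_profile(p)
        widths.append((hi - lo) * spacing)
    return float(np.mean(widths))
```

With `spacing` set to the physical distance between detection sub-regions, the returned value is a thickness in physical units.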
2. The method of claim 1, wherein the preprocessing the target image to obtain a plurality of detection regions comprises:
acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer;
acquiring a normal corresponding to each marking point in the plurality of marking points;
and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
3. The method of claim 2, wherein said acquiring a plurality of detection regions from said normal comprises:
determining the two marking points adjacent to the current marking point as a previous marking point and a next marking point, respectively;
determining the area enclosed by the normal corresponding to the previous marking point and the normal corresponding to the next marking point as the detection area corresponding to the current marking point, wherein the detection area corresponding to the current marking point comprises the current marking point;
determining the next marking point as the current marking point;
and repeating the steps until the last mark point is the current mark point.
4. The method of claim 2, wherein the acquiring a plurality of marker points from the target image comprises:
acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer;
acquiring a marking line according to the plurality of initial marking points;
and acquiring a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
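Claim 4's uniformly distributed marking points can be produced by arc-length resampling of the marking line. The following sketch is illustrative only (the claim does not fix a resampling scheme); it treats the marking line as a 2-D polyline and interpolates `n` points at equal arc-length intervals.

```python
import numpy as np

def resample_uniform(line, n):
    """Resample a polyline so that n points are evenly spaced by arc length."""
    line = np.asarray(line, dtype=float)
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)                  # equal spacing
    x = np.interp(targets, s, line[:, 0])
    y = np.interp(targets, s, line[:, 1])
    return np.column_stack([x, y])
```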
5. The method of claim 4, wherein said obtaining marker lines from said plurality of initial marker points comprises:
acquiring the centroid of the target layer;
acquiring the distances of the plurality of initial mark points relative to the centroid respectively;
and smoothing the initial mark point according to the distance to obtain a mark line.
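Claim 5 smooths the initial marking points through their distances to the layer centroid. One plausible reading (the claim does not specify the filter) is to express each point as an angle and radius about the centroid and run a moving average over the radii, so the smoothed marking line keeps the original angular samples. A sketch under that assumption:

```python
import numpy as np

def smooth_by_centroid_distance(points, window=3):
    """Smooth marker points by filtering their distance to the centroid.

    Each point is expressed as (angle, radius) about the centroid of the
    points; a wrap-around moving average over the radii removes jitter
    while the angles are kept unchanged.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    radii = np.linalg.norm(rel, axis=1)
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    kernel = np.ones(window) / window
    # wrap-around padding because the marking line is assumed closed
    padded = np.concatenate([radii[-(window // 2):], radii, radii[:window // 2]])
    smooth_r = np.convolve(padded, kernel, mode="valid")
    return centroid + np.column_stack([smooth_r * np.cos(angles),
                                       smooth_r * np.sin(angles)])
```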
6. The method according to any one of claims 2 to 5, wherein before determining the layer boundary of the portion of the target layer corresponding to each detection region according to the plurality of first average image intensities in each detection region, the method further comprises:
determining image noise of each detection area;
deleting the detection areas with the image noise higher than the preset noise in the plurality of detection areas.
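Claim 6's noise gate can be sketched in a few lines. The patent does not fix a noise estimator; the standard deviation of the pixel intensities in a region is used here purely as a stand-in.

```python
import numpy as np

def drop_noisy_regions(regions, noise_limit):
    """Discard detection regions whose estimated image noise exceeds
    the preset limit.  Noise is approximated by the standard deviation
    of the region's pixel intensities (an assumption, not the claim)."""
    return [r for r in regions if np.std(r) <= noise_limit]
```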
7. The method of claim 2, wherein said obtaining a first average image intensity for each of said plurality of inspection areas comprises:
determining a plurality of detection sub-regions contained in each detection region, wherein the plurality of detection sub-regions are distributed along the normal direction corresponding to each detection region;
and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
8. The method according to claim 7, wherein determining the layer boundary of the portion of the target layer corresponding to each detection region according to the plurality of first average image intensities in each detection region comprises:
dividing the plurality of detection sub-regions in each detection region according to the positions of the plurality of detection sub-regions to obtain a first set and a second set;
acquiring a first layer boundary according to the first average image intensity and the position of a plurality of detection subregions contained in the first set;
acquiring a second layer boundary according to the first average image intensity and the position of the plurality of detection subregions included in the second set;
and determining that a first layer boundary of the first set and a second layer boundary of the second set corresponding to each detection area are two layer boundaries of a part of target layers corresponding to each detection area respectively.
9. The method according to claim 8, wherein the obtaining the first layer boundary according to the first average image intensity and the position of the plurality of detector sub-regions included in the first set comprises:
determining a plurality of coordinate points in a coordinate system according to the first average image intensity and the position corresponding to the plurality of detection subregions included in the first set, wherein the coordinate system is a coordinate system related to the position and the image intensity;
performing smoothing treatment on the plurality of coordinate points to obtain a smooth line;
acquiring second average image intensity according to the first average image intensity corresponding to the plurality of coordinate points;
obtaining a third average image intensity according to the first average image intensity of which the intensity value is greater than the second average image intensity in the first average image intensities corresponding to the plurality of coordinate points;
acquiring fourth average image intensity according to the first average image intensity of which the intensity value is not greater than the second average image intensity in the first average image intensities corresponding to the plurality of coordinate points;
determining an average value of the third average image intensity and the fourth average image intensity as a target average intensity;
and determining a first layer boundary according to the target average intensity and the smooth line.
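The second/third/fourth average intensities of claim 9 pin down the target average intensity; combining it with the smooth line then yields a sub-sample boundary position. The sketch below follows the claim's averaging steps literally, but the moving-average smoothing and the linear interpolation at the crossing are illustrative choices not fixed by the claim.

```python
import numpy as np

def boundary_position(positions, intensities, window=3):
    """Locate a first layer boundary at sub-sample precision.

    The (position, intensity) coordinate points are smoothed into a
    smooth line; the target average intensity is the mean of the
    above-mean and below-mean intensity averages; the boundary is the
    interpolated position where the smooth line first reaches it.
    """
    pos = np.asarray(positions, dtype=float)
    val = np.asarray(intensities, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(val, kernel, mode="same")  # smooth line
    second = val.mean()                             # second average intensity
    third = val[val > second].mean()                # third average intensity
    fourth = val[val <= second].mean()              # fourth average intensity
    target = (third + fourth) / 2.0                 # target average intensity
    i = int(np.argmax(smooth >= target))            # first crossing index
    if i == 0:
        return float(pos[0])
    # linear interpolation between the two bracketing samples
    t = (target - smooth[i - 1]) / (smooth[i] - smooth[i - 1])
    return float(pos[i - 1] + t * (pos[i] - pos[i - 1]))
```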
10. The method according to claim 1, wherein determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection regions one to one comprises:
determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection area one to one;
determining a plurality of target layer boundaries from the plurality of layer boundaries according to the strength separation rate of each layer boundary;
and determining the layer thickness of the target layer according to the plurality of target layer boundaries.
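Claim 10 names an "intensity separation rate" without giving a formula. One plausible reading is a contrast measure between the bright and dark plateaus of each region's profile, used to keep only well-separated boundaries before averaging their widths. The definition below is therefore an assumption, not the patent's quantity:

```python
import numpy as np

def separation_rate(profile):
    """A plausible intensity separation rate: the contrast between the
    above-mean and below-mean plateau averages of a region's profile."""
    p = np.asarray(profile, dtype=float)
    m = p.mean()
    hi, lo = p[p > m].mean(), p[p <= m].mean()
    return (hi - lo) / (hi + lo) if hi + lo else 0.0

def thickness_from_boundaries(boundaries, rates, min_rate=0.5):
    """Keep only boundary pairs whose separation rate reaches the
    threshold, then average their widths into the layer thickness."""
    widths = [hi - lo for (lo, hi), r in zip(boundaries, rates) if r >= min_rate]
    return float(np.mean(widths))
```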
11. A layer thickness detection apparatus, characterized by comprising:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a target image, and the target image comprises a target layer;
a preprocessing unit, configured to preprocess the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to a part of the target layer, and each detection area comprises two boundaries of the target layer;
a second acquisition unit configured to acquire a first average image intensity of each of the plurality of detection areas, wherein each of the plurality of detection areas includes a plurality of first average image intensities;
a first determining unit, configured to determine, according to the multiple first average image intensities in each detection area, a layer boundary of a portion of a target layer corresponding to each detection area;
and the second determining unit is used for determining the layer thickness of the target layer according to a plurality of layer boundaries which correspond to the plurality of detection areas one by one.
12. An electronic device comprising a processor, a memory, and a communication interface, the processor and the communication interface each communicatively connected to the memory, the memory storing one or more programs, and the one or more programs executed by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program is operable to cause a computer to perform the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110581681.5A CN113256700B (en) | 2021-05-26 | 2021-05-26 | Method and device for detecting thickness of layer, electronic equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110581681.5A CN113256700B (en) | 2021-05-26 | 2021-05-26 | Method and device for detecting thickness of layer, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113256700A true CN113256700A (en) | 2021-08-13 |
CN113256700B CN113256700B (en) | 2023-05-23 |
Family
ID=77184829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110581681.5A Active CN113256700B (en) | 2021-05-26 | 2021-05-26 | Method and device for detecting thickness of layer, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113256700B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107577979A (en) * | 2017-07-26 | 2018-01-12 | 中科创达软件股份有限公司 | DataMatrix type Quick Response Codes method for quickly identifying, device and electronic equipment |
EP3338619A1 (en) * | 2015-08-19 | 2018-06-27 | Kowa Company, Ltd. | Image processing device, image processing method, and image processing program |
CN111681256A (en) * | 2020-05-07 | 2020-09-18 | 浙江大华技术股份有限公司 | Image edge detection method and device, computer equipment and readable storage medium |
CN112150491A (en) * | 2020-09-30 | 2020-12-29 | 小狗电器互联网科技(北京)股份有限公司 | Image detection method, image detection device, electronic equipment and computer readable medium |
CN112150490A (en) * | 2020-09-30 | 2020-12-29 | 小狗电器互联网科技(北京)股份有限公司 | Image detection method, image detection device, electronic equipment and computer readable medium |
CN112818991A (en) * | 2021-02-18 | 2021-05-18 | 长江存储科技有限责任公司 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
US20210150257A1 (en) * | 2019-06-21 | 2021-05-20 | Chengdu Boe Optoelectronics Technology Co., Ltd. | Method and apparatus for evaluating image acquisition accuracy, electronic device and storage medium |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3338619A1 (en) * | 2015-08-19 | 2018-06-27 | Kowa Company, Ltd. | Image processing device, image processing method, and image processing program |
US20180232886A1 (en) * | 2015-08-19 | 2018-08-16 | Kowa Company, Ltd. | Image processing apparatus, image processing method, and image processing program |
CN107577979A (en) * | 2017-07-26 | 2018-01-12 | 中科创达软件股份有限公司 | DataMatrix type Quick Response Codes method for quickly identifying, device and electronic equipment |
US20210150257A1 (en) * | 2019-06-21 | 2021-05-20 | Chengdu Boe Optoelectronics Technology Co., Ltd. | Method and apparatus for evaluating image acquisition accuracy, electronic device and storage medium |
CN111681256A (en) * | 2020-05-07 | 2020-09-18 | 浙江大华技术股份有限公司 | Image edge detection method and device, computer equipment and readable storage medium |
CN112150491A (en) * | 2020-09-30 | 2020-12-29 | 小狗电器互联网科技(北京)股份有限公司 | Image detection method, image detection device, electronic equipment and computer readable medium |
CN112150490A (en) * | 2020-09-30 | 2020-12-29 | 小狗电器互联网科技(北京)股份有限公司 | Image detection method, image detection device, electronic equipment and computer readable medium |
CN112818991A (en) * | 2021-02-18 | 2021-05-18 | 长江存储科技有限责任公司 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113256700B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11367276B2 (en) | Target detection method and apparatus | |
TWI619080B (en) | Method for calculating fingerprint overlapping region and electronic device | |
US20110069892A1 (en) | Method of comparing similarity of 3d visual objects | |
CN111899237B (en) | Scale precision measuring method, apparatus, computer device and storage medium | |
WO2021082922A1 (en) | Method and device for detecting screen display disconnection | |
CN110414649B (en) | DM code positioning method, device, terminal and storage medium | |
CN112347292A (en) | Defect labeling method and device | |
CN113240724B (en) | Thickness detection method and related product | |
CN112419207A (en) | Image correction method, device and system | |
CN106530273B (en) | High-precision FPC (Flexible printed Circuit) linear line detection and defect positioning method | |
CN114332012A (en) | Defect detection method, device, equipment and computer readable storage medium | |
CN112818991B (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN113902697A (en) | Defect detection method and related device | |
CN113034527B (en) | Boundary detection method and related product | |
CN113256700A (en) | Layer thickness detection method and device, electronic equipment and readable storage medium | |
CN111340788B (en) | Hardware Trojan horse layout detection method and device, electronic equipment and readable storage medium | |
JP2004536300A (en) | Selection of reference indices that allow quick determination of the position of the imaging device | |
JP2004536300A5 (en) | ||
CN106910196B (en) | Image detection method and device | |
CN116228861A (en) | Probe station marker positioning method, probe station marker positioning device, electronic equipment and storage medium | |
CN116128867A (en) | Method, device, equipment and storage medium for detecting edge sealing defect of plate | |
CN116385415A (en) | Edge defect detection method, device, equipment and storage medium | |
JP7340434B2 (en) | Reinforcement inspection system, reinforcement inspection method, and reinforcement inspection program | |
CN112241697B (en) | Corner color determination method and device, terminal device and readable storage medium | |
CN112102456A (en) | Ceramic wafer height detection method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||