CN113256700B - Method and device for detecting thickness of layer, electronic equipment and readable storage medium - Google Patents

Method and device for detecting thickness of layer, electronic equipment and readable storage medium

Info

Publication number
CN113256700B
Authority
CN
China
Prior art keywords
detection
layer
target
image
acquiring
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202110581681.5A
Other languages
Chinese (zh)
Other versions
CN113256700A (en)
Inventor
法提·奥尔梅兹 (Fatih Olmez)
周静兰 (Zhou Jinglan)
Current Assignee (listed assignee may be inaccurate)
Yangtze Memory Technologies Co Ltd
Original Assignee
Yangtze Memory Technologies Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Yangtze Memory Technologies Co Ltd
Priority to CN202110581681.5A
Publication of CN113256700A
Application granted
Publication of CN113256700B
Legal status: Active


Classifications

    • G06T 7/60 — Image analysis; analysis of geometric attributes
    • G01N 23/04 — Investigating or analysing materials by transmitting radiation (e.g. X-rays or neutrons) through the material and forming images of the material
    • G01N 23/2251 — Measuring secondary emission from the material using incident electron beams, e.g. scanning electron microscopy [SEM]
    • G06T 7/13 — Segmentation; edge detection
    • G01N 2223/633 — Specific applications: thickness, density, surface weight (unit area)
    • G06T 2207/10061 — Microscopic image from scanning electron microscope


Abstract

An embodiment of the present application provides a layer thickness detection method and apparatus, an electronic device, and a readable storage medium. The layer thickness detection method comprises the following steps: acquiring a target image; preprocessing the target image to obtain a plurality of detection areas corresponding to the target image; acquiring first average image intensities of each detection area of the plurality of detection areas, wherein each detection area comprises a plurality of first average image intensities; determining the layer boundary of the partial image corresponding to each detection area according to the plurality of first average image intensities of that detection area; and determining the layer thickness of the target image according to the plurality of layer boundaries corresponding one-to-one to the detection areas. In this way, the accuracy of the obtained layer thickness result can be improved, and because the detection process does not depend on manual measurement by a user, the repeatability and reproducibility of the detection result can be ensured.

Description

Method and device for detecting thickness of layer, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for detecting thickness of a layer, electronic equipment and a readable storage medium.
Background
Currently, thickness detection of the oxide block layer in ONOP (oxide-nitride-oxide-polysilicon) channel hole structures relies on computer vision edge detection techniques to approximately determine the layer boundaries, after which the layer thickness is calculated manually from those rough boundaries: the user draws distance measurement lines between the rough boundaries and samples the layer thickness at various positions in the image. However, computer vision edge detection works reliably only when the image is sufficiently clear. Because the ONOP channel hole structure is tiny, the microscopic image is very noisy, and when measuring thickness users tend to select only the clearest parts of the image. The clear parts of the microscopic image are therefore over-sampled while the noisy regions are under-sampled, which leads to large measurement errors and irreproducible measured values.
Disclosure of Invention
The embodiment of the application provides a method and a device for detecting thickness of a layer, electronic equipment and a readable storage medium, so as to reduce measurement errors and improve accuracy of thickness detection.
In a first aspect, an embodiment of the present application provides a method for detecting a layer thickness, including:
acquiring a target image, wherein the target image comprises a target image layer;
preprocessing the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to a part of the target image layer, and each detection area comprises two boundaries of the target image layer;
acquiring first average image intensities of each detection region of the plurality of detection regions, wherein each detection region comprises a plurality of first average image intensities;
determining the layer boundary of a part of target layers corresponding to each detection area according to the plurality of first average image intensities in each detection area;
and determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the detection areas one by one.
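For illustration only, the steps of the first aspect can be sketched as a small pipeline. The boundary rule used here (taking the strongest intensity rise and fall along the normal direction as the two layer boundaries), the function names, and the pixel-size parameter are assumptions of this sketch, not the claimed implementation:

```python
import numpy as np

def layer_thickness(profiles, pixel_size_nm=1.0):
    """Estimate layer thickness from per-detection-area intensity profiles.

    profiles: list of 1-D arrays; each array holds the average image
    intensity sampled step by step along the normal direction of one
    detection area (so it crosses both layer boundaries).
    Returns the mean thickness over all detection areas.
    """
    thicknesses = []
    for profile in profiles:
        grad = np.gradient(np.asarray(profile, dtype=float))
        # Assumed boundary rule: the two layer boundaries sit where the
        # intensity changes fastest (strongest rise and strongest fall).
        first = int(np.argmax(grad))
        second = int(np.argmin(grad))
        thicknesses.append(abs(second - first) * pixel_size_nm)
    return float(np.mean(thicknesses))

# A bright band (the layer) between two darker surroundings:
profile = np.array([10, 12, 11, 80, 82, 81, 79, 12, 10], dtype=float)
print(layer_thickness([profile], pixel_size_nm=2.0))
```

With a bright band between darker surroundings, the two strongest gradient extremes bracket the layer, and the thickness is their separation scaled by the pixel size.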
In a second aspect, an embodiment of the present application provides a layer thickness detection apparatus, including:
the first acquisition unit is used for acquiring a target image, wherein the target image comprises a target image layer;
the processing unit is used for preprocessing the target image to obtain a plurality of detection areas, each detection area in the plurality of detection areas corresponds to a part of the target image layer, and each detection area comprises two boundaries of the target image layer;
a second acquisition unit, used for acquiring first average image intensities of each detection area of the plurality of detection areas, wherein each detection area comprises a plurality of first average image intensities;
a first determining unit, configured to determine a layer boundary of a part of the target layer corresponding to each detection area according to a plurality of first average image intensities in each detection area;
and the second determining unit is used for determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the detection areas one by one.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the first aspect described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described in the method of the first aspect of the embodiments of the present application.
It can be seen that in the embodiments of the present application, a target image is first acquired; the target image is then preprocessed to obtain a plurality of detection areas; a first average image intensity of each detection area of the plurality of detection areas is then acquired, where each detection area comprises a plurality of first average image intensities; a layer boundary of the portion of the target layer corresponding to each detection area is then determined according to the plurality of first average image intensities in that detection area; and finally the layer thickness of the target layer is determined according to the plurality of layer boundaries corresponding one-to-one to the plurality of detection areas. In this way, the accuracy of the obtained layer thickness result can be improved, and because the detection process does not depend on manual measurement by a user, the repeatability and reproducibility of the detection result can be ensured.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIG. 1c is a schematic view of an ONOP channel hole according to an embodiment of the disclosure;
FIG. 1d is a schematic view of an oxide block micrograph provided in an embodiment of the present application;
FIG. 2a is a schematic flow chart of a method for detecting thickness of a layer according to an embodiment of the present application;
FIG. 2b is a schematic diagram of normal line acquisition of a method for detecting thickness of a layer according to an embodiment of the present application;
FIG. 2c is a schematic diagram of an initial mark point of a method for detecting a thickness of a layer according to an embodiment of the present application;
FIG. 2d is a schematic diagram of obtaining a mark line of a method for detecting a thickness of a layer according to an embodiment of the present application;
FIG. 2e is a schematic diagram illustrating layer boundary determination according to an embodiment of the present disclosure;
FIG. 2f is a schematic diagram of layer thickness determination according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another method for detecting thickness of a layer according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a layer thickness detection device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of another layer thickness detection device according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the embodiments of the present application, the following describes electronic devices and a layer thickness detection system that may be related to the embodiments of the present application.
Referring to fig. 1a, fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 101 includes a layer thickness detection device 102, and the layer thickness detection device 102 is configured to detect a layer thickness in an image. The layer thickness detection device 102 performs preprocessing on a target image acquired by the electronic device 101, and detects the layer thickness of the target image based on the preprocessed target image. After the electronic device 101 acquires the layer thickness determined by the layer thickness detection device 102, the detection result may be sent to other corresponding electronic devices, or the detection result may be directly displayed on the screen of the electronic device 101.
Specifically, the electronic device described in fig. 1a may further include the following structure. Referring to fig. 1b, fig. 1b is a schematic structural diagram of another electronic device according to an embodiment of the present application. As shown, the electronic device may implement the steps of the present layer thickness detection method. The electronic device 100 includes an application processor 120, a memory 130, a communication interface 140, and one or more programs 131, where the one or more programs 131 are stored in the memory 130 and configured to be executed by the application processor 120, and the one or more programs 131 include instructions for performing any of the following method embodiments.
The communication unit is used for supporting communication between the electronic device and other devices. The electronic device may further comprise a storage unit for storing program code and data of the electronic device.
The processing unit may be the application processor 120 or a controller, such as a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, units, and circuits described in connection with this disclosure. The processor may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit may be the communication interface 140, a transceiver, a transceiving circuit, etc., and the storage unit may be the memory 130.
The memory 130 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In a specific implementation, the application processor 120 is configured to perform any step performed by the electronic device in the method embodiment described below, and when performing data transmission such as sending, the communication interface 140 is optionally invoked to complete the corresponding operation.
Referring to fig. 1c, fig. 1c is a schematic view of an ONOP channel hole structure according to an embodiment of the present application, where (a) in fig. 1c is a perspective view and (b) in fig. 1c is a cross-sectional view of the ONOP channel hole structure. It can be seen that the ONOP channel hole structure includes a multi-layer structure, and each layer can be regarded as an approximate ring. Referring to fig. 1d, fig. 1d is a schematic diagram of an oxide block microscopic image provided in an embodiment of the present application; the approximate circle indicated by the arrow in fig. 1d is the oxide block layer. It can be seen that the radius of a channel hole in the ONOP channel hole structure is small and the microscopic image is very noisy. At present, thickness detection of the oxide block layer in the ONOP channel hole structure relies on computer vision edge detection to roughly determine the layer boundaries, followed by manual calculation of the layer thickness from those rough boundaries; but because the small-radius microscopic image of the channel hole is not clear, and the thickness detection is mainly performed manually, the detected result has a large error and the measurement result is not reproducible.
With reference to the foregoing description, the steps of the layer thickness detection method are described below by way of an example method. Referring to fig. 2a, fig. 2a is a schematic flow chart of a layer thickness detection method according to an embodiment of the present application. As shown in the figure, the layer thickness detection method includes:
s201, acquiring a target image.
The target image includes a target layer; that is, the target image may be an image containing a layer whose thickness needs to be measured, or an image corresponding to the layer whose thickness needs to be detected. The target image may be a microscopic image obtained with a microscope, for example a target image obtained with a transmission electron microscope (Transmission Electron Microscope, TEM) or a scanning electron microscope (Scanning Electron Microscope, SEM).
S202, preprocessing the target image to obtain a plurality of detection areas.
Each detection region of the plurality of detection regions corresponds to a portion of the target layer, and each detection region includes the two boundaries of the target layer. The target image contains an oxide block layer whose thickness is to be detected; since the oxide block layer appears as an approximate circular ring in the target image, the detection areas corresponding to it are likewise arranged in a ring. That is, the entire oxide block layer is covered by the detection areas.
In one possible example, the preprocessing the target image to obtain a plurality of detection areas corresponding to the target image includes: acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer; acquiring a normal corresponding to each marking point in the plurality of marking points; and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
The marking points are located in the oxide block layer and divide the oxide block layer into a plurality of sections; that is, connecting the plurality of marking points forms a ring-like shape. Each marking point has a corresponding normal, and one detection area may be an area bounded by two of the plurality of normals. As shown in fig. 2b, fig. 2b is a schematic diagram of normal line acquisition in a layer thickness detection method according to an embodiment of the present application, where (a) in fig. 2b includes three marking points A(x1, y1), B(x2, y2), and C(x3, y3). To obtain the normal of marking point B, the connecting line between marking point A and marking point C may first be obtained; among the lines parallel to this connecting line, the one passing through marking point B is determined as the tangent at marking point B, and the normal of marking point B is then obtained by drawing the perpendicular to this tangent through marking point B. By analogy, the normal of each marking point can be obtained, as shown in (b) of fig. 2b.
Therefore, the normal line is determined according to the mark points, the detection areas are determined according to the normal line, and the thickness of the layer is determined according to each detection area, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the human error in the detection process can be reduced, and the reproducibility of the thickness detection result of the layer is ensured.
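A minimal sketch of the neighbour-chord construction described above (the function name is ours; the tangent at a marking point is taken parallel to the chord through its two neighbours, and the normal is its perpendicular):

```python
import numpy as np

def normal_at(prev_pt, pt, next_pt):
    """Unit normal direction at `pt`: the chord through the two
    neighbouring marking points serves as the tangent, and the normal
    is obtained by rotating that tangent by 90 degrees."""
    a = np.asarray(prev_pt, dtype=float)
    c = np.asarray(next_pt, dtype=float)
    tangent = c - a
    tangent /= np.linalg.norm(tangent)
    # Perpendicular of the tangent; the normal line is pt + t * normal.
    return np.array([-tangent[1], tangent[0]])

# Three marking points: the tangent at B is parallel to the chord A-C,
# so the normal at B is perpendicular to A-C.
A, B, C = (0.0, 0.0), (1.0, 1.0), (2.0, 0.0)
n = normal_at(A, B, C)  # perpendicular to the horizontal chord A-C
```

The returned unit vector, together with the marking point, defines the normal line used to bound the detection areas.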
In one possible example, the acquiring a plurality of detection areas according to the normal line includes: determining that two adjacent marking points of the current marking point are a last marking point and a next marking point respectively; determining an area formed by the normal corresponding to the last mark point and the normal corresponding to the next mark point as a detection area corresponding to the current mark point, wherein the detection area corresponding to the current mark point comprises the current mark point; repeating the steps until the last marked point is the current marked point.
Since the oxide block layer is shaped approximately like a circular ring, every marking point in the target image serves in turn as the previous and the next marking point of its neighbours during the determination of the detection areas. In a specific implementation, the area that contains the current marking point and is bounded by the normals of its two adjacent marking points may be determined as a detection area. For example, the two marking points adjacent to marking point 2 are marking point 1 and marking point 3; the normal corresponding to marking point 1 is normal a, and the normal corresponding to marking point 3 is normal b. The image area bounded by normal a and normal b that contains marking point 2 may then be the detection area corresponding to marking point 2, and so on, until every marking point has a corresponding detection area, that is, until the image area of the oxide block layer is covered by all the detection areas; of course, each detection area needs to extend through the oxide block layer. Alternatively, the image area between two non-adjacent normals that contains the current marking points may be used as one detection area; for example, if marking points 1 to 4 correspond to normals a to d respectively, the image area bounded by normal a and normal d may be determined as the detection area shared by marking point 2 and marking point 3, and so on.
Therefore, in this example, the detection area is determined according to the normal line, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the human error in the detection process can be reduced, and the reproducibility of the layer thickness detection result can be ensured.
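A hedged sketch of the membership test implied above: a point belongs to the detection area of a marking point if it lies on the same side of each bounding normal as that marking point. The signed-side formulation and all names are our assumptions, not the patent's wording:

```python
import numpy as np

def side(point, line_pt, line_dir):
    """Signed side of `point` relative to the line through `line_pt`
    with direction `line_dir` (2-D cross product)."""
    p = np.asarray(point, dtype=float) - np.asarray(line_pt, dtype=float)
    d = np.asarray(line_dir, dtype=float)
    return d[0] * p[1] - d[1] * p[0]

def in_region(point, mark, n_prev, prev_pt, n_next, next_pt):
    """True if `point` lies between the normals of the two neighbouring
    marking points, i.e. on the same side of each normal as `mark`."""
    s1 = side(point, prev_pt, n_prev) * side(mark, prev_pt, n_prev)
    s2 = side(point, next_pt, n_next) * side(mark, next_pt, n_next)
    return bool(s1 >= 0 and s2 >= 0)

# Marking point 2 at (1, 0) with neighbours at (0, 0) and (2, 0); both
# normals point straight up, so its region is the vertical strip 0<=x<=2.
n_up = (0.0, 1.0)
inside = in_region((1.5, 3.0), (1.0, 0.0), n_up, (0.0, 0.0), n_up, (2.0, 0.0))
```

Applying this test to every pixel assigns each pixel of the oxide block layer to exactly one detection area when the normals are taken from adjacent marking points.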
In one possible example, the acquiring a plurality of marker points according to the target image includes: acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer; acquiring mark lines according to the plurality of initial mark points; and obtaining a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
The initial marking points may be marked manually in the target image, as shown in fig. 2c; fig. 2c is a schematic diagram of the initial marking points of the layer thickness detection method according to an embodiment of the present application. A plurality of initial marking points may be marked manually, in sequence along the direction of the layer whose thickness is to be detected. The connecting line of the plurality of initial marking points may itself be the marking line; alternatively, the initial marking points may be processed and then connected to obtain the marking line, or the connecting line of the plurality of initial marking points may be determined as an initial marking line which is then smoothed to obtain the marking line. After the marking line is obtained, it may be divided equally, that is, interpolated based on its arc length, so that a plurality of marking points with the same spacing are obtained. The number of marking points may be determined according to the state of the image: for a clearer image, interpolation may use a smaller width, that is, the spacing between marking points is short and the number of marking points is relatively large; for an image with heavy noise, a larger width should be selected, that is, the spacing between marking points is large and the number of marking points is relatively small. In a specific implementation, the number of marking points or the spacing between them may be set according to the image size in pixels.
Therefore, in this example, the initial mark point of the manual mark is obtained first, then the mark line is obtained according to the initial mark point, and the mark point is obtained according to the mark line, so that the distribution of the mark points is more uniform, errors generated during manual marking can be reduced, the accuracy of the detection result can be improved, and the data precision of the detection result can be improved.
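The equal arc-length interpolation described above can be sketched as follows (a minimal illustration for a closed marking line; the function name and the array layout are our assumptions):

```python
import numpy as np

def resample_by_arc_length(marker_line, n_points):
    """Place `n_points` marking points at equal arc-length intervals
    along a closed marking line given as an (N, 2) array of vertices."""
    pts = np.asarray(marker_line, dtype=float)
    closed = np.vstack([pts, pts[:1]])           # close the ring
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_points, endpoint=False)
    x = np.interp(targets, s, closed[:, 0])
    y = np.interp(targets, s, closed[:, 1])
    return np.stack([x, y], axis=1)

# A unit square resampled to 8 marking points, each 0.5 apart along the ring.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
marks = resample_by_arc_length(square, 8)
```

Choosing `n_points` larger gives the short spacing suited to clear images; choosing it smaller gives the wide spacing suited to noisy images.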
In one possible example, the obtaining a marker line according to the plurality of initial marker points includes: acquiring the mass center of the target layer; obtaining distances of the plurality of initial mark points relative to the mass center respectively; and carrying out smoothing treatment on the initial mark points according to the distance to obtain mark lines.
The centroid may be set by the user according to requirements, and acquiring the centroid of the target layer may further include: determining a first initial marking point and a second initial marking point, where the first initial marking point is any one of the plurality of initial marking points and the second initial marking point is, among the remaining initial marking points, the one farthest from the first initial marking point; acquiring the connecting line of the first and second initial marking points; repeating these steps until, for every initial marking point, the connecting line to its farthest initial marking point has been obtained; acquiring a plurality of intersection points from the plurality of connecting lines; and determining the intersection point through which the most connecting lines pass as the centroid. For example, if the connecting line of initial marking points a and b and the connecting line of initial marking points e and f both pass through intersection point O, while the connecting line of initial marking points c and d passes through intersection point P, then intersection point O may be determined as the centroid.
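The farthest-point chord construction for the centroid can be sketched as follows (a minimal, unoptimized illustration; the names and the tolerance parameter are our assumptions):

```python
import numpy as np

def _farthest_chords(points):
    """One chord per point: the line to its farthest marking point."""
    pts = np.asarray(points, dtype=float)
    chords = set()
    for i, p in enumerate(pts):
        j = int(np.argmax(np.linalg.norm(pts - p, axis=1)))
        chords.add(tuple(sorted((i, j))))
    return [(pts[i], pts[j]) for i, j in chords]

def _intersect(p1, p2, p3, p4):
    """Intersection of lines p1-p2 and p3-p4, or None if parallel."""
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def centroid_of_markers(points, tol=1e-6):
    """Centroid estimate: the chord intersection through which the most
    farthest-point chords pass (the construction described above)."""
    chords = _farthest_chords(points)
    best, best_count = None, -1
    for a in range(len(chords)):
        for b in range(a + 1, len(chords)):
            x = _intersect(*chords[a], *chords[b])
            if x is None:
                continue
            count = 0
            for p1, p2 in chords:
                d = p2 - p1
                # perpendicular distance from x to the chord line
                dist = abs(d[0] * (x[1] - p1[1]) - d[1] * (x[0] - p1[0])) / np.linalg.norm(d)
                count += dist < tol
            if count > best_count:
                best, best_count = x, count
    return best
```

For marking points placed symmetrically around a ring, every farthest-point chord is a diameter, so the returned intersection coincides with the ring centre.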
In a specific implementation, when the initial mark points are smoothed, the distance between each initial mark point and the centroid can be obtained, an average distance can be determined from these distances, and the mark line can be determined according to the average distance, that is, every point on the mark line is equidistant from the centroid. Alternatively, the distance between each initial mark point and the centroid can be obtained, a distance interval containing at least a preset number of the distances can be determined, and the mark line can be determined according to the average value of that interval. For example, if the distance from the centroid to initial mark point 1 is 4 nm, to initial mark point 2 is 5 nm, to initial mark point 3 is 3 nm, and to initial mark point 4 is 7 nm, then three initial mark points fall within the distance interval of 3 nm to 5 nm, which satisfies the preset number, so the distance interval can be determined to be 3 nm to 5 nm, and each point on the mark line is then 4 nm from the centroid. When a plurality of distance intervals satisfy the preset number, the mark line is determined according to the distance interval containing the largest number of initial mark points; the mark line is the connecting line of the initial mark points after smoothing. The smoothing of the initial mark points can also be achieved by a Savitzky-Golay filter. As shown in fig. 2d, fig. 2d is a schematic diagram of obtaining a mark line of a method for detecting a layer thickness according to an embodiment of the present application, where the positions of the initial mark points are shown in (a) in fig. 2d, and the position of the five-pointed star in (a) in fig. 2d is the centroid. After the centroid is found, as shown in (b) of fig. 2d, the points on the dotted line in (b) of fig. 2d are the distances corresponding to the individual initial mark points, which may be plotted in the coordinate system, and the points on the solid line in (b) of fig. 2d are the smoothed distances corresponding to each initial mark point. The mark line may then be determined according to the smoothed distances, as shown in (c) of fig. 2d, where the white dotted line in (c) of fig. 2d is the mark line.
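The Savitzky-Golay smoothing of the centroid distances can be sketched as below, assuming SciPy is available. The window length, polynomial order, and the angular ordering of the points (which presumes the contour is roughly star-shaped about the centroid) are assumptions of this sketch:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_marker_line(points, centroid, window=5, polyorder=2):
    """Smooth the radial distances of the initial mark points about the centroid
    and rebuild the (closed) mark line from the smoothed distances."""
    pts = np.asarray(points, float) - np.asarray(centroid, float)
    ang = np.arctan2(pts[:, 1], pts[:, 0])
    order = np.argsort(ang)                       # walk around the contour once
    r = np.linalg.norm(pts, axis=1)[order]
    r_s = savgol_filter(r, window, polyorder, mode="wrap")  # periodic boundary
    a = ang[order]
    return np.stack([r_s * np.cos(a), r_s * np.sin(a)], axis=1) + np.asarray(centroid, float)
```

Because the filter preserves polynomials up to `polyorder`, points that already lie at a constant distance from the centroid are left unchanged.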
Therefore, in this example, the initial mark points are smoothed according to the distance from each initial mark point to the centroid, and finally the mark line is obtained, so that the positions of the mark points which can be obtained later are more reasonable, errors generated during manual marking can be reduced, the accuracy of the detection result can be improved, and the data precision of the detection result can be improved.
In one possible example, before the determining, according to the plurality of first average image intensities in each detection area, a layer boundary of a portion of the target layer corresponding to each detection area, the method further includes: determining the image noise of the partial image corresponding to each detection area; and deleting the detection areas, of which the image noise is higher than preset noise, in the detection areas.
The target image comprises a plurality of detection areas, and the state of the image corresponding to each detection area can be determined. When the image noise of the image corresponding to the current detection area is too high, the detection area can be deleted, that is, the image in that detection area is not processed further and no layer boundary is acquired for it.
In this example, a detection area is discarded when its image noise is higher than the preset noise, so that the accuracy and precision of the data processing can be ensured, and the processing efficiency can be improved.
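A hedged sketch of the noise-based rejection follows. The patent does not specify how image noise is estimated, so the standard deviation of pixel-to-pixel differences inside each region's image patch is used here purely as an illustrative proxy:

```python
import numpy as np

def filter_noisy_regions(regions, max_noise):
    """Discard detection regions whose estimated image noise exceeds max_noise.
    Noise proxy (an assumption): std of horizontal pixel-to-pixel differences."""
    kept = []
    for patch in regions:
        patch = np.asarray(patch, float)
        noise = np.std(np.diff(patch, axis=-1))
        if noise <= max_noise:
            kept.append(patch)
    return kept
```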
S203, acquiring a first average image intensity of each detection area in the plurality of detection areas.
Wherein each detection area comprises a plurality of first average image intensities. Taking any one of the detection areas as an example, as shown in fig. 2e, fig. 2e is a schematic diagram of layer boundary determination of a layer thickness detection method according to an embodiment of the present application. As shown in fig. 2e (a), the image area formed by the two black lines in fig. 2e (a) may be regarded as a detection area, and a mark point may be set in the detection area, where the position of the mark point is the origin; distances of 80 pixels on both sides of the mark point are then determined along the normals. Thus, a plurality of first average image intensities can be obtained over the distance of -80 to 80 pixels.
In one possible example, the acquiring a first average image intensity for each of the plurality of detection regions includes: determining detection subareas contained in each detection area, wherein the detection subareas comprise a plurality of detection subareas, and the detection subareas are distributed along the normal direction corresponding to each detection area; and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
As shown in fig. 2e (a), a detection sub-area can be regarded as an image area corresponding to a white line in fig. 2e (a). When the detection sub-areas are acquired, the set of points in the detection area that are at the same distance from the origin may be divided into one detection sub-area. For example, the set of points in the current detection area that are each 30 pixels from the origin may be determined as one detection sub-area, and the sets of points at distances of 20 pixels, 15 pixels, and -20 pixels from the origin may each in turn be determined as further detection sub-areas, so as to obtain a plurality of detection sub-areas in one detection area. The average value of the image intensities of all points contained in each detection sub-area can be regarded as the first average image intensity of that detection sub-area.
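The sub-region averaging can be illustrated as follows, under the simplifying assumption that the detection area has already been resampled into a grid aligned with the mark point's normal (rows are signed offsets from the origin, columns run along the boundary):

```python
import numpy as np

def first_average_intensities(patch, half_width=80):
    """Each row of `patch` is one detection sub-region (a fixed signed offset
    along the normal, from -half_width to +half_width), so the first average
    image intensity of each sub-region is simply the row mean."""
    patch = np.asarray(patch, float)
    offsets = np.arange(-half_width, half_width + 1)
    assert patch.shape[0] == offsets.size, "patch must cover the full offset range"
    return offsets, patch.mean(axis=1)
```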
In this example, a plurality of detection sub-areas are divided, and then the first average image intensity of each detection sub-area is determined, so as to obtain the layer boundary of the target layer corresponding to one detection area, so that the accuracy of the determined layer boundary can be enhanced.
S204, determining the layer boundary of the partial target layer corresponding to each detection area according to the plurality of first average image intensities in each detection area.
The image corresponding to each detection area comprises a part of image of the layer needing to detect the thickness of the layer, so that the boundary of the part of layer area needing to detect the thickness of the layer in each detection area can be obtained by taking one detection area as a unit.
In one possible example, the determining, according to the plurality of first average image intensities in each detection area, a layer boundary of a portion of the target layer corresponding to each detection area includes: dividing the detection subareas in each detection area according to the positions of the detection subareas to obtain a first set and a second set; acquiring a first layer boundary according to first average image intensities and positions of a plurality of detection subareas contained in the first set; acquiring a second image layer boundary according to the first average image intensity and the positions of the detection subareas contained in the second set; and determining the first layer boundary of the first set and the second layer boundary of the second set corresponding to each detection region as two layer boundaries of the partial target layer corresponding to each detection region.
Wherein, as shown in (a) of fig. 2e, since the distance from the origin point includes a forward distance and a reverse distance, two sets can be obtained according to the position of each detection sub-region, and the position of the detection sub-region can be the distance from the origin point to the current detection sub-region. One set includes all detection sub-areas at forward distances from the origin, and the other set includes all detection sub-areas at reverse distances from the origin. For example, the detection sub-regions at a distance of 0 to 80 pixels from the origin are all divided into a first set, and the detection sub-regions at a distance of-80 to 0 pixels from the origin are all divided into a second set. And determining a first layer boundary corresponding to the first set and a second layer boundary corresponding to the second set according to the first average image intensity and the position of each detection subarea in the first set and the second set respectively. Because the first set and the second set both correspond to the same detection area, the first layer boundary and the second layer boundary are two layer boundaries of the detection area.
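The split into the first and second sets amounts to partitioning the sub-regions by the sign of their offset; a small sketch follows (assigning offset 0 to the first set is an assumption, since the text places 0 at the edge of both ranges):

```python
import numpy as np

def split_sets(offsets, intensities):
    """Split detection sub-regions into the first set (forward distances,
    offset >= 0) and the second set (reverse distances, offset < 0)."""
    offsets = np.asarray(offsets)
    intensities = np.asarray(intensities, float)
    fwd = offsets >= 0
    first = (offsets[fwd], intensities[fwd])
    second = (offsets[~fwd], intensities[~fwd])
    return first, second
```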
Therefore, in this example, two layer boundaries are respectively determined, so that accuracy of the detection result can be improved, data precision of the detection result can be improved, human errors in the detection process can be reduced, and reproducibility of the layer thickness detection result is ensured.
In one possible example, the acquiring a first layer boundary according to the first average image intensity and the positions of the plurality of detection sub-areas included in the first set includes: determining a plurality of coordinate points in a coordinate system according to first average image intensities and positions corresponding to a plurality of detection subareas included in the first set, wherein the coordinate system is a coordinate system related to the positions and the image intensities; carrying out smoothing treatment on the coordinate points to obtain a smooth line; acquiring second average image intensity according to the first average image intensity corresponding to the coordinate points; acquiring third average image intensity according to the first average image intensity, of which the intensity value is larger than the second average image intensity, in the first average image intensity corresponding to the coordinate points; acquiring fourth average image intensity according to the first average image intensity, of which the intensity value is not larger than the second average image intensity, in the first average image intensity corresponding to the coordinate points; determining an average value of the third average image intensity and the fourth average image intensity as a target average intensity; and determining a first layer boundary according to the target average intensity and the smooth line.
As shown in fig. 2e, as shown in (a) in fig. 2e, two black lines in (a) in fig. 2e are normal lines, an area including a current mark point formed by the two normal lines can be regarded as a detection area, a position of the mark point in the detection area can be set as an origin, and distances of 80 pixels on both sides of the mark point are determined based on the normal lines. Then, according to the above embodiment, the first set and the second set are obtained, as shown in (b) in fig. 2e and (c) in fig. 2e, where the point in the coordinate system shown in (b) in fig. 2e is the coordinate point corresponding to the detected sub-area included in the first set, the abscissa in the coordinate system is the pixel distance from the intensity line to the origin, which may indicate the position of the detected sub-area, and the ordinate is the value of the image intensity, which may indicate the first average image intensity. Similarly, the points in the coordinate system shown in fig. 2e (c) are coordinate points corresponding to the detection sub-areas included in the second set. Smoothing the points in fig. 2e (b) and fig. 2e (c), respectively, may result in a smoothed curve.
When determining the first layer boundary, the first average image intensities corresponding to all coordinate points in the coordinate system may be averaged to obtain the second average image intensity; as shown in fig. 2e (b), the intensity value corresponding to the dashed line is the second average image intensity, whose value is 4030. After the second average image intensity is obtained, all the coordinate points can be divided into two parts according to their first average image intensities: the first part contains the coordinate points whose first average image intensity is higher than the second average image intensity, and the second part contains the coordinate points whose first average image intensity is not higher than the second average image intensity. The average value of the first average image intensities corresponding to the coordinate points in the first part may then be acquired as the third average image intensity, and the average value of the first average image intensities corresponding to the coordinate points in the second part may be acquired as the fourth average image intensity. As shown in fig. 2e (b), the intensity value corresponding to the dash-dot line in the figure is the third average image intensity and the intensity value corresponding to the two-dot line is the fourth average image intensity, that is, 4300 and 3800, respectively. The average value of the third average image intensity and the fourth average image intensity may then be obtained as the target average intensity, that is, the horizontal line in the figure, whose value is 4050. Finally, the distance corresponding to the intersection point of the horizontal line at the target intensity value and the smooth line is determined as the distance of the boundary. As shown in fig. 2e (b), the vertical line in the figure marks the corresponding abscissa, which is the position of the boundary relative to the origin, that is, the boundary is -30 pixels from the origin (the mark point). The specific location of the boundary can then be determined in fig. 2e (a), and the boundary thus determined can be regarded as the upper boundary of the layer. Similarly, as shown in fig. 2e (c), the second layer boundary corresponding to the second set may be obtained in the same manner.
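The second/third/fourth average scheme defined in this example can be sketched as below. Smoothing is omitted, and the intensity profile is assumed to cross the target level once near the boundary; the linear interpolation between the bracketing samples is an assumption of this sketch:

```python
import numpy as np

def layer_boundary(offsets, intensities):
    """Return the offset at which the intensity profile crosses the target
    average intensity (the mean of the bright-side and dark-side averages)."""
    x = np.asarray(offsets, float)
    y = np.asarray(intensities, float)
    second = y.mean()                  # second average image intensity
    third = y[y > second].mean()       # bright side (above the overall mean)
    fourth = y[y <= second].mean()     # dark side (at or below the overall mean)
    target = 0.5 * (third + fourth)    # target average intensity
    s = np.sign(y - target)
    idx = np.nonzero(np.diff(s))[0][0] # first crossing of the target level
    x0, x1, y0, y1 = x[idx], x[idx + 1], y[idx], y[idx + 1]
    return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
```

For a profile stepping from a dark plateau to a bright plateau, the returned offset lies within the transition, mirroring the -30 pixel example in the figure.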
In a specific implementation, obtaining the third average image intensity according to the first average image intensities whose intensity values are greater than the second average image intensity includes: acquiring a confidence interval with a confidence coefficient greater than a preset value according to the first average image intensities of the coordinate points whose first average image intensity is higher than the second average image intensity, and determining the average value corresponding to the confidence interval as the third average image intensity. For example, if there are 5 coordinate points whose first average image intensity is higher than the second average image intensity, and the first average image intensities of 90% of these 5 coordinate points lie within the confidence interval [4250, 4350], then the third average image intensity is the average of 4250 and 4350, that is, 4300. The fourth average image intensity may also be calculated according to the above steps, which will not be described in detail herein.
Therefore, in this example, the layer boundary is determined according to the first average image intensity, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the human error in the detection process can be reduced, and the reproducibility of the layer thickness detection result can be ensured.
S205, determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the detection areas one by one.
Because one target image includes a plurality of detection areas, each detection area corresponds to only a part of the target image, that is, one detection area includes only a part of the layer whose thickness needs to be detected. After the layer boundaries of the layer portions contained in each detection area are obtained, all the layer boundaries can be combined to determine the layer boundary of the target layer in the target image. When the layer thickness is obtained according to the layer boundary, the layer thickness in each detection area can be calculated, or the layer boundary of the target layer can be sampled after it is obtained and the layer thickness determined from the sampled layer boundary. The determining the layer thickness of the target layer according to the layer boundaries corresponding to the detection areas one to one includes: acquiring a first boundary line and a second boundary line of the layer according to the layer boundaries; determining an upper intersection point and a lower intersection point of the first boundary line and the second boundary line with the normal corresponding to each mark point; and determining the distance between the upper intersection point and the lower intersection point corresponding to the same mark point as the thickness of the layer.
In a specific implementation, after the layer thickness of the partial image corresponding to each detection area is obtained, the thickness density distribution proportion is obtained, and the thickness with the largest thickness density distribution proportion is determined as the final thickness of the layer to be detected. As shown in fig. 2f, fig. 2f is a schematic diagram of determining the layer thickness in the layer thickness detection method provided in the embodiment of the present application. As shown in fig. 2f (a), the layer boundary of the corresponding partial target layer is obtained according to each detection area; as shown in fig. 2f (b), the thicknesses of all the obtained layer boundaries are calculated respectively, then the thickness density distribution proportion of all the layer thicknesses is determined to select a thickness interval, and the final layer thickness is determined according to that interval. As shown in the figure, the thickness distribution proportion in the thickness interval of 6.0-6.5 is the highest, so the final layer thickness can be determined to lie between 6.0 and 6.5; at this time, the average value or the median value of the determined thickness interval can be calculated to obtain the final layer thickness.
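Selecting the final thickness from the density distribution can be sketched with a histogram. The bin edges are supplied by the caller, and the midpoint of the densest interval (one of the two options the text allows; the median of the interval would be the other) is returned:

```python
import numpy as np

def final_thickness(thicknesses, edges):
    """Pick the thickness interval with the most per-region thickness estimates
    and return its midpoint as the final layer thickness."""
    counts, edges = np.histogram(np.asarray(thicknesses, float), bins=edges)
    k = counts.argmax()
    return 0.5 * (edges[k] + edges[k + 1])
```

With edges [5.0, 5.5, 6.0, 6.5, 7.0] and most estimates falling in 6.0-6.5, the returned value is 6.25, matching the interval-midpoint option in the example above.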
In one possible example, the determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection areas one-to-one includes: determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection region one by one; determining a plurality of target layer boundaries from the plurality of layer boundaries according to the intensity separation rate of each layer boundary; and determining the layer thickness of the target image according to the target layer boundaries.
After the layer boundary of the partial target layer corresponding to each detection area is determined, the determined layer boundary may be verified to determine whether it is usable. After the two layer boundaries corresponding to one detection area are obtained, take one of them as an example: the image intensity values of pixel points outside the layer boundary are lower than those of pixel points inside the layer boundary. The average image intensity of the image area at each layer boundary can first be determined. Then, the proportion of the detection sub-areas outside the layer boundary whose first average image intensities are lower (or higher) than the average image intensity corresponding to the layer boundary is the intensity separation rate. If the intensity separation rate is higher than a preset separation rate, the layer boundary is determined to be a usable layer boundary; if the two layer boundaries corresponding to the same detection area are both usable, they can be used for determining the layer thickness. For example, if 90% of the detection sub-areas located outside the layer boundary correspond to a first average image intensity lower than the average image intensity at the layer boundary, the layer boundary is a usable layer boundary.
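The intensity separation rate check can be sketched as follows. The 90% figure from the example is used as the default threshold, and whether the region outside the boundary is expected to be darker or brighter is a parameter of this sketch:

```python
import numpy as np

def boundary_is_usable(outside_avgs, boundary_avg, min_separation=0.9, darker_outside=True):
    """Separation rate = fraction of sub-region first average intensities outside
    the boundary that lie on the expected side of the boundary's average intensity."""
    a = np.asarray(outside_avgs, float)
    if darker_outside:
        rate = np.mean(a < boundary_avg)
    else:
        rate = np.mean(a > boundary_avg)
    return rate >= min_separation
```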
Therefore, in this example, the target layer is determined according to the intensity separation rate, and the layer boundary is obtained only according to the target layer, so that the accuracy of the detection result can be improved, the data precision of the detection result can be improved, the human error in the detection process can be reduced, and the reproducibility of the layer thickness detection result can be ensured.
It can be seen that in this embodiment of the present application, a target image is first acquired, then the target image is preprocessed to obtain a plurality of detection areas, then a first average image intensity of each detection area in the plurality of detection areas is acquired, where each detection area includes a plurality of first average image intensities, then a layer boundary of a portion of the target layer corresponding to each detection area is determined according to the plurality of first average image intensities in each detection area, and finally a layer thickness of the target layer is determined according to a plurality of layer boundaries corresponding to the plurality of detection areas one to one. In this way, the accuracy of the obtained layer thickness result can be improved, and the repeatability and reproducibility of the detection result can be ensured because the detection process does not depend on manual measurement by a user.
Referring to fig. 3, fig. 3 is a flowchart of another method for detecting a layer thickness according to an embodiment of the present application, as shown in the drawing, the method for detecting a layer thickness includes the following steps:
S301, acquiring a target image;
S302, acquiring a plurality of mark points according to the target image;
S303, acquiring a normal corresponding to each mark point in the plurality of mark points;
S304, acquiring a plurality of detection areas according to the normals;
S305, acquiring a first average image intensity of each detection area in the plurality of detection areas;
S306, determining the layer boundary of a part of the target layer corresponding to each detection area according to the plurality of first average image intensities in each detection area;
S307, determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the detection areas one by one.
In this example, the target image is first acquired, then the mark points in the target image are determined, then the normal corresponding to each mark point is determined so as to obtain a plurality of detection areas corresponding to the target image, then a plurality of first average image intensities corresponding to the detection areas are acquired, and finally the layer boundary is determined. In this way, the accuracy of the obtained layer thickness result can be improved, and the repeatability and reproducibility of the detection result can be ensured because the detection process does not depend on manual measurement by a user.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a layer thickness detection device according to an embodiment of the present application, where the layer thickness detection device 400 includes a first obtaining unit 401, configured to obtain a target image, where the target image includes a target layer; a preprocessing unit 402, configured to preprocess the target image to obtain a plurality of detection areas, where each detection area in the plurality of detection areas corresponds to a portion of the target layer, and each detection area includes two boundaries of the target layer; a second acquisition unit 403 configured to acquire a first average image intensity of each of the plurality of detection areas, wherein each detection area includes a plurality of first average image intensities; a first determining unit 404, configured to determine a layer boundary of a part of the target layer corresponding to each detection area according to the plurality of first average image intensities in each detection area; and a second determining unit 405, configured to determine a layer thickness of the target image according to a plurality of layer boundaries corresponding to the plurality of detection areas one to one.
In one possible example, in the preprocessing the target image to obtain a plurality of detection areas, the preprocessing unit 402 is specifically configured to: acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer; acquiring a normal corresponding to each marking point in the plurality of marking points; and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
In one possible example, in terms of the acquiring a plurality of detection areas according to the normal, the preprocessing unit 402 is specifically configured to: determine the two mark points adjacent to the current mark point as the previous mark point and the next mark point respectively; determine the area formed by the normal corresponding to the previous mark point and the normal corresponding to the next mark point as the detection area corresponding to the current mark point, where the detection area corresponding to the current mark point contains the current mark point; determine the next mark point as the current mark point; and repeat the above steps until the last mark point becomes the current mark point.
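The neighbour-normal construction of detection areas can be sketched as simple index bookkeeping; the actual strip extraction between the two normals depends on the image geometry and is omitted in this sketch:

```python
def detection_regions(marker_points):
    """Pair each interior mark point with its two neighbours: the strip between
    the previous point's normal and the next point's normal is that point's
    detection area. Returns (previous, current, next) index triples."""
    regions = []
    for i in range(1, len(marker_points) - 1):
        regions.append((i - 1, i, i + 1))
    return regions
```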
In one possible example, in the acquiring a plurality of marker points according to the target image, the preprocessing unit 402 is specifically configured to: acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer; acquiring mark lines according to the plurality of initial mark points; and obtaining a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
In one possible example, in terms of the obtaining of the marker line from the plurality of initial marker points, the preprocessing unit 402 is specifically configured to: acquiring the mass center of the target layer; obtaining distances of the plurality of initial mark points relative to the mass center respectively; and carrying out smoothing treatment on the initial mark points according to the distance to obtain mark lines.
In one possible example, before determining the layer boundary of the partial target layer corresponding to each detection area according to the plurality of first average image intensities in each detection area, the layer thickness detection apparatus 400 is further configured to: determining the image noise of each detection area; and deleting the detection areas, of which the image noise is higher than preset noise, in the detection areas.
In one possible example, in the acquiring the first average image intensity of each detection area of the plurality of detection areas, the second acquiring unit 403 is specifically configured to: determining detection subareas contained in each detection area, wherein the detection subareas comprise a plurality of detection subareas, and the detection subareas are distributed along the normal direction corresponding to each detection area; and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
In one possible example, in determining the layer boundary of the partial target layer corresponding to each detection area according to the plurality of first average image intensities in each detection area, the first determining unit 404 is specifically configured to: dividing the detection subareas in each detection area according to the positions of the detection subareas to obtain a first set and a second set; acquiring a first layer boundary according to first average image intensities and positions of a plurality of detection subareas contained in the first set; acquiring a second image layer boundary according to the first average image intensity and the positions of the detection subareas contained in the second set; and determining the first layer boundary of the first set and the second layer boundary of the second set corresponding to each detection region as two layer boundaries of the partial target layer corresponding to each detection region.
In one possible example, in terms of the acquiring a first layer boundary according to the first average image intensity and the position of the plurality of detection sub-areas contained in the first set, the first determining unit 404 is specifically configured to: determining a plurality of coordinate points in a coordinate system according to first average image intensities and positions corresponding to a plurality of detection subareas included in the first set, wherein the coordinate system is a coordinate system related to the positions and the image intensities; carrying out smoothing treatment on the coordinate points to obtain a smooth line; acquiring second average image intensity according to the first average image intensity corresponding to the coordinate points; acquiring third average image intensity according to the first average image intensity, of which the intensity value is larger than the second average image intensity, in the first average image intensity corresponding to the coordinate points; acquiring fourth average image intensity according to the first average image intensity, of which the intensity value is not larger than the second average image intensity, in the first average image intensity corresponding to the coordinate points; determining an average value of the third average image intensity and the fourth average image intensity as a target average intensity; and determining a first layer boundary according to the target average intensity and the smooth line.
In one possible example, in the aspect of determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection areas one by one, the second determining unit 405 is specifically configured to: determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection region one by one; determining a plurality of target layer boundaries from the plurality of layer boundaries according to the intensity separation rate of each layer boundary; and determining the layer thickness of the target image according to the target layer boundaries.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentations of the same technical concept, the content of the method embodiment in the present application applies equally to the apparatus embodiment, and is not repeated here.
In the case of using an integrated unit, fig. 5 is a schematic structural diagram of another layer thickness detection device according to an embodiment of the present application. In fig. 5, the layer thickness detection device 500 includes a processing module 502 and a communication module 501. The processing module 502 is configured to control and manage actions of the layer thickness detection device, for example, to control and manage the first acquisition unit 401, the preprocessing unit 402, the second acquisition unit 403, the first determination unit 404, and the second determination unit 405 when executing related commands, and/or to perform other processes for the techniques described herein. The communication module 501 is used to support interaction between the layer thickness detection device and other devices. As shown in fig. 5, the layer thickness detection device may further include a storage module 503, where the storage module 503 is configured to store program code and data of the layer thickness detection device.
The processing module 502 may be a processor or controller, such as a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements computing functionality, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 501 may be a transceiver, RF circuitry, a communication interface, or the like. The storage module 503 may be a memory.
For all relevant content of each scenario involved in the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated here. Both the layer thickness detection apparatus 400 and the layer thickness detection apparatus 500 may perform the layer thickness detection methods shown in fig. 2a and fig. 3.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that contains one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium; the semiconductor medium may be a solid-state drive.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, the computer program causing a computer to execute some or all of the steps of any one of the methods described in the above method embodiments, the computer including an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other manners. For example, the device embodiments described above are merely illustrative: the division of the units is only one logical function division, and other divisions may be adopted in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Although the present invention is disclosed above, it is not limited thereto. Those skilled in the art may readily make variations and modifications, including combinations of different functions and implementation steps as well as software and hardware embodiments, without departing from the spirit and scope of the invention.

Claims (12)

1. A method for detecting a thickness of a layer, comprising:
acquiring a target image, wherein the target image comprises a target image layer, the target image is in a circular ring shape, and the target image layer comprises an oxidized block image layer with the thickness to be detected;
preprocessing the target image to obtain a plurality of detection areas, wherein each detection area in the plurality of detection areas corresponds to a part of the target image layer, and each detection area comprises two boundaries of the target image layer;
acquiring a first average image intensity of each detection region of the plurality of detection regions, wherein each detection region comprises a plurality of first average image intensities;
determining a layer boundary of a part of target layers corresponding to each detection region according to a plurality of first average image intensities in each detection region, wherein the layer boundary comprises a first layer boundary and a second layer boundary, the first layer boundary is determined according to the first average image intensity and the position of each detection sub-region in a first set, the second layer boundary is determined according to the first average image intensity and the position of each detection sub-region in a second set, the distance between the detection sub-region included in the first set and an origin is a forward distance, the distance between the detection sub-region included in the second set and the origin is a reverse distance, and the origin is a marking point;
determining the layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection areas one by one;
the preprocessing the target image to obtain a plurality of detection areas includes:
acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer;
acquiring a normal corresponding to each marking point in the plurality of marking points;
and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
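The normals referred to in the preprocessing step could, under the assumption that the mark points lie on a closed ring-shaped contour, be estimated from neighbouring marks; this sketch (function name and central-difference tangent are illustrative assumptions) shows one way:

```python
import math

def normals_at_marks(marks):
    """Return a unit normal per mark point on a closed contour
    (e.g. a ring-shaped layer). The tangent at each mark is
    estimated from its two neighbouring marks, then rotated by
    90 degrees to give the normal; the sign convention (inward
    vs. outward) is arbitrary here."""
    n = len(marks)
    result = []
    for i in range(n):
        x0, y0 = marks[(i - 1) % n]   # previous mark
        x1, y1 = marks[(i + 1) % n]   # next mark
        tx, ty = x1 - x0, y1 - y0     # tangent direction
        length = math.hypot(tx, ty)
        result.append((-ty / length, tx / length))
    return result
```

Each mark's detection area then extends along its normal, spanning both boundaries of the target layer.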
2. The method of claim 1, wherein the acquiring a plurality of detection areas from the normal line comprises:
determining that two adjacent marking points of the current marking point are a last marking point and a next marking point respectively;
determining an area formed by the normal corresponding to the last mark point and the normal corresponding to the next mark point as a detection area corresponding to the current mark point, wherein the detection area corresponding to the current mark point comprises the current mark point;
determining the next mark point as the current mark point;
repeating the steps until the last marked point is the current marked point.
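The iteration in claim 2 pairs every mark with the normals of its two neighbours. A minimal sketch (the region representation as a dict of neighbouring marks is an assumption; a real implementation would rasterize the wedge between the two normals):

```python
def detection_regions(marks):
    """For marks on a closed contour, build one region descriptor
    per mark: the region for the current mark is bounded by the
    normal through the previous mark and the normal through the
    next mark, so it always contains the current mark itself."""
    n = len(marks)
    regions = []
    for i in range(n):
        regions.append({
            'mark': marks[i],
            'prev_mark': marks[(i - 1) % n],   # its normal bounds one side
            'next_mark': marks[(i + 1) % n],   # its normal bounds the other
        })
    return regions
```

Because the contour is closed, the loop wraps around and the last mark's region is bounded by the first mark's normal.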
3. The method of claim 1, wherein the acquiring a plurality of marker points from the target image comprises:
acquiring a plurality of initial mark points according to the target image, wherein the initial mark points are positioned in the target image layer;
acquiring mark lines according to the plurality of initial mark points;
and obtaining a plurality of marking points according to the marking line, wherein the marking points are uniformly distributed on the marking line.
4. The method according to claim 3, wherein the acquiring a mark line from the plurality of initial mark points comprises:
acquiring the centroid of the target layer;
acquiring the respective distances of the plurality of initial mark points from the centroid;
and smoothing the initial mark points according to the distances to obtain the mark line.
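Claims 3-4 describe smoothing the initial marks radially about the layer's centroid. One hedged reading (the circular moving average over radial distances is an assumption; the patent does not fix the smoothing operator):

```python
import math

def marker_line(initial_marks, window=3):
    """Smooth initial marks into a mark line: compute the centroid,
    express each mark as a radial distance from it, smooth the
    distances with a circular moving average, and map them back to
    points along each mark's original angle."""
    n = len(initial_marks)
    cx = sum(x for x, _ in initial_marks) / n
    cy = sum(y for _, y in initial_marks) / n
    angles = [math.atan2(y - cy, x - cx) for x, y in initial_marks]
    dists = [math.hypot(x - cx, y - cy) for x, y in initial_marks]
    half = window // 2
    # Circular moving average: the contour is closed, so wrap indices.
    smoothed = [
        sum(dists[(i + k) % n] for k in range(-half, half + 1)) / window
        for i in range(n)
    ]
    return [(cx + d * math.cos(a), cy + d * math.sin(a))
            for d, a in zip(smoothed, angles)]
```

Uniformly spaced mark points would then be resampled along this smoothed line.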
5. The method of any one of claims 1-4, wherein prior to determining the layer boundaries of the portion of the target layer corresponding to each detection region from the plurality of first average image intensities in each detection region, the method further comprises:
determining the image noise of each detection area;
and deleting the detection areas, of which the image noise is higher than preset noise, in the detection areas.
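The noise filter of claim 5 can be sketched as below. The patent does not define the noise measure; taking it as the standard deviation of a region's pixel intensities is one plausible assumption:

```python
def drop_noisy_regions(regions, max_noise):
    """Discard detection regions whose image noise exceeds a preset
    level. Each region is given here as a flat list of pixel
    intensities; noise is taken as their population standard
    deviation (an assumed, not claimed, definition)."""
    def noise(pixels):
        m = sum(pixels) / len(pixels)
        return (sum((p - m) ** 2 for p in pixels) / len(pixels)) ** 0.5
    return [r for r in regions if noise(r) <= max_noise]
```

Regions dominated by noise would otherwise corrupt the boundary fit, so they are excluded before the intensity profiles are analysed.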
6. The method of claim 1, wherein the acquiring a first average image intensity for each of the plurality of detection regions comprises:
determining the detection sub-areas contained in each detection area, wherein each detection area comprises a plurality of detection sub-areas distributed along the normal direction corresponding to that detection area;
and determining the average image intensity corresponding to each detection subarea as a first average image intensity.
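The sub-region averaging of claim 6 reduces each detection area to a 1-D profile along its normal. A minimal sketch (the even split and function name are assumptions):

```python
def subregion_intensities(region_pixels, n_sub):
    """Split a detection region's pixels - ordered along the
    region's normal - into n_sub equal sub-regions and return the
    mean intensity of each: the 'first average image intensity'
    per sub-region. Assumes len(region_pixels) divides evenly."""
    step = len(region_pixels) // n_sub
    return [
        sum(region_pixels[i * step:(i + 1) * step]) / step
        for i in range(n_sub)
    ]
```

The resulting per-sub-region averages, paired with the sub-regions' positions, feed the boundary determination of claims 7 and 8.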
7. The method of claim 6, wherein determining the layer boundaries of the portion of the target layer corresponding to each detection region from the plurality of first average image intensities in each detection region comprises:
dividing the detection subareas in each detection area according to the positions of the detection subareas to obtain a first set and a second set;
acquiring a first layer boundary according to first average image intensities and positions of a plurality of detection subareas contained in the first set;
acquiring a second image layer boundary according to the first average image intensity and the positions of the detection subareas contained in the second set;
and determining the first layer boundary of the first set and the second layer boundary of the second set corresponding to each detection region as the two layer boundaries of the partial target layer corresponding to that detection region.
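The split into first and second sets hinges on the sign of each sub-region's distance from the mark point (the origin). A minimal sketch, assuming each sub-region is a (signed distance, intensity) pair:

```python
def split_sets(subregions):
    """Divide detection sub-regions into the first set (forward of
    the mark point, positive signed distance) and the second set
    (reverse side, negative signed distance). Sub-regions exactly
    at the origin are ignored here - an edge case the claim does
    not address."""
    first = [s for s in subregions if s[0] > 0]    # forward distances
    second = [s for s in subregions if s[0] < 0]   # reverse distances
    return first, second
```

Each set then yields one of the two boundaries of the layer, on either side of the mark point.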
8. The method of claim 7, wherein the acquiring a first layer boundary according to the first average image intensities and positions of the plurality of detection sub-regions included in the first set comprises:
determining a plurality of coordinate points in a coordinate system according to the first average image intensities and positions corresponding to the plurality of detection sub-regions included in the first set, wherein the coordinate system relates position to image intensity;
smoothing the coordinate points to obtain a smoothed line;
acquiring a second average image intensity according to the first average image intensities corresponding to the coordinate points;
acquiring a third average image intensity according to those first average image intensities whose intensity values are greater than the second average image intensity;
acquiring a fourth average image intensity according to those first average image intensities whose intensity values are not greater than the second average image intensity;
determining the average of the third average image intensity and the fourth average image intensity as a target average intensity;
and determining the first layer boundary according to the target average intensity and the smoothed line.
9. The method of claim 1, wherein determining the layer thickness of the target layer from a plurality of layer boundaries in one-to-one correspondence with the plurality of detection regions comprises:
determining the intensity separation rate of each layer boundary in a plurality of layer boundaries corresponding to each detection region one by one;
determining a plurality of target layer boundaries from the plurality of layer boundaries according to the intensity separation rate of each layer boundary;
and determining the layer thickness of the target layer according to the plurality of target layer boundaries.
10. A layer thickness detection device, comprising:
the first acquisition unit is used for acquiring a target image, wherein the target image comprises a target image layer, the target image is in a circular ring shape, and the target image layer comprises an oxidized block image layer with the thickness to be detected;
the processing unit is used for preprocessing the target image to obtain a plurality of detection areas, each detection area in the plurality of detection areas corresponds to a part of the target image layer, and each detection area comprises two boundaries of the target image layer;
a second acquisition unit configured to acquire a first average image intensity of each of the plurality of detection areas, wherein each detection area includes a plurality of first average image intensities;
the first determining unit is configured to determine, according to a plurality of first average image intensities in each detection area, a layer boundary of a part of target layers corresponding to the detection areas, where the layer boundary includes a first layer boundary and a second layer boundary, the first layer boundary is determined according to a first average image intensity and a position of each detection sub-area in a first set, the second layer boundary is determined according to a first average image intensity and a position of each detection sub-area in a second set, a distance between a detection sub-area included in the first set and an origin is a forward distance, a distance between a detection sub-area included in the second set and the origin is a reverse distance, and the origin is a mark point;
a second determining unit, configured to determine a layer thickness of the target layer according to a plurality of layer boundaries corresponding to the plurality of detection areas one to one;
in the aspect of preprocessing the target image to obtain a plurality of detection areas, the processing unit is further configured to: acquiring a plurality of mark points according to the target image, wherein the mark points are positioned in the target image layer; acquiring a normal corresponding to each marking point in the plurality of marking points; and acquiring a plurality of detection areas according to the normal, wherein each marking point corresponds to one detection area.
11. An electronic device comprising a processor, a memory, and a communication interface, the processor and the communication interface being communicatively coupled to the memory, respectively, the memory storing one or more programs and the one or more programs being executable by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-9.
12. A computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program is operable to cause a computer to perform the method of any one of claims 1-9.
CN202110581681.5A 2021-05-26 2021-05-26 Method and device for detecting thickness of layer, electronic equipment and readable storage medium Active CN113256700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110581681.5A CN113256700B (en) 2021-05-26 2021-05-26 Method and device for detecting thickness of layer, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN113256700A CN113256700A (en) 2021-08-13
CN113256700B true CN113256700B (en) 2023-05-23

Family

ID=77184829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110581681.5A Active CN113256700B (en) 2021-05-26 2021-05-26 Method and device for detecting thickness of layer, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113256700B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818991A (en) * 2021-02-18 2021-05-18 长江存储科技有限责任公司 Image processing method, image processing apparatus, electronic device, and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6644075B2 (en) * 2015-08-19 2020-02-12 興和株式会社 Image processing apparatus, image processing method, and image processing program
CN107577979B (en) * 2017-07-26 2020-07-03 中科创达软件股份有限公司 Method and device for quickly identifying DataMatrix type two-dimensional code and electronic equipment
CN110225336B (en) * 2019-06-21 2022-08-26 京东方科技集团股份有限公司 Method and device for evaluating image acquisition precision, electronic equipment and readable medium
CN111681256B (en) * 2020-05-07 2023-08-18 浙江大华技术股份有限公司 Image edge detection method, image edge detection device, computer equipment and readable storage medium
CN112150491B (en) * 2020-09-30 2023-08-18 北京小狗吸尘器集团股份有限公司 Image detection method, device, electronic equipment and computer readable medium
CN112150490B (en) * 2020-09-30 2024-02-02 北京小狗吸尘器集团股份有限公司 Image detection method, device, electronic equipment and computer readable medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818991A (en) * 2021-02-18 2021-05-18 长江存储科技有限责任公司 Image processing method, image processing apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN113256700A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN109359602A (en) Method for detecting lane lines and device
WO2022089018A1 (en) Method and apparatus for slicing three-dimensional vector data of three-dimensional vector map, and electronic device
US20160300328A1 (en) Method and apparatus for implementing image denoising
CN110892760B (en) Positioning terminal equipment based on deep learning
US8976183B2 (en) Method and system for approximating curve, and graphic display control method and apparatus
EP3786883A1 (en) Image storage method and apparatus, and electronic device and storage medium
KR20160045779A (en) Method and device for adsorbing straight line/line segment, and method and device for constructing polygon
CN113256700B (en) Method and device for detecting thickness of layer, electronic equipment and readable storage medium
CN113240724A (en) Thickness detection method and related product
EP3321882B1 (en) Matching cost computation method and device
CN112818991B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
US9978152B2 (en) Method and system for wafer alignment
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
CN110264430B (en) Video beautifying method and device and electronic equipment
US20120176395A1 (en) System, method and computer program product for color processing of point-of-interest color
WO2019165676A1 (en) Element position correcting method, device, computer device, and storage medium
CN113824939A (en) Projection image adjusting method and device, projection equipment and storage medium
CN111428707A (en) Method and device for identifying pattern identification code, storage medium and electronic equipment
CN113932793A (en) Three-dimensional coordinate positioning method and device, electronic equipment and storage medium
JP3616992B2 (en) Graph display system and data value display processing program thereof
CN112950748B (en) Building drawing splicing method and related device
CN117974839B (en) Drawing method of wafer map and related device
CN117457550B (en) Wafer alignment method and related device
CN117592168A (en) Cornice generation method, cornice generation device, cornice generation equipment and storage medium
CN110189279B (en) Model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant