CN113240724A - Thickness detection method and related product - Google Patents


Info

Publication number
CN113240724A
Authority
CN
China
Prior art keywords
target
image
intensity
average intensity
acquiring
Prior art date
Legal status
Granted
Application number
CN202110527783.9A
Other languages
Chinese (zh)
Other versions
CN113240724B (en)
Inventor
法提·奥尔梅兹
郑先意
Current Assignee
Yangtze Memory Technologies Co Ltd
Original Assignee
Yangtze Memory Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Yangtze Memory Technologies Co Ltd filed Critical Yangtze Memory Technologies Co Ltd
Priority to CN202110527783.9A (granted as CN113240724B)
Publication of CN113240724A
Application granted
Publication of CN113240724B
Legal status: Active
Anticipated expiration: (not listed)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/06 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, e.g. of sheet material
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G06T 2207/10061 Microscopic image from scanning electron microscope

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present application disclose a thickness detection method and a related product. The method includes: acquiring a target image, where the target image includes a first reference point indicating the position of a target layer of a target object, and the target layer includes a first boundary and a second boundary; acquiring, from a first image region in the target image and according to the image intensity of the first image region, a first target image region corresponding to the first boundary and a second target image region corresponding to the second boundary, where the first image region includes the first reference point; and determining the thickness of the target layer according to the distance between the first target image region and the second target image region. Embodiments of the present application help improve thickness detection efficiency and the accuracy of the detection result.

Description

Thickness detection method and related product
Technical Field
The application relates to the technical field of image processing, in particular to a thickness detection method and a related product.
Background
A high-k insulating material, the aluminum oxide layer (HK-ALO), is a layer between the bulk oxide and the outer tungsten wall in an ONOP channel hole structure (an ONOP channel hole structure comprises, from inside to outside, a polysilicon channel (P), a tunnel oxide (O), a storage nitride (N), and a bulk oxide (O)). The HK-ALO layer is very thin, typically about 2.5 nm to 3 nm thick. In practice, however, its thickness needs to be measured for various reasons.
Disclosure of Invention
The embodiment of the application provides a thickness detection method and a related product, so that the thickness detection efficiency and the accuracy of a detection result are improved.
In a first aspect, an embodiment of the present application provides a thickness detection method, where the method includes:
acquiring a target image, wherein the target image comprises a first reference point used for indicating a target layer position of a target object, and the target layer comprises a first boundary and a second boundary;
acquiring a first target image area corresponding to the first boundary and a second target image area corresponding to the second boundary from the first image area according to the image intensity of the first image area in the target image, wherein the first image area comprises the first reference point;
determining the thickness of the target layer according to the distance between the first target image area and the second target image area.
In a second aspect, an embodiment of the present application provides a thickness detection apparatus, including: a first acquisition unit, a second acquisition unit, and a determination unit, wherein,
the first acquisition unit is used for acquiring a target image, the target image comprises a first reference point used for indicating the position of a target layer, and the target layer comprises a first boundary and a second boundary;
the second acquiring unit is configured to acquire a first target image region corresponding to the first boundary and a second target image region corresponding to the second boundary from the first image region according to an image intensity of the first image region in the target image, where the first image region includes the first reference point;
the determining unit is configured to determine the thickness of the target layer according to a distance between the first target image area and the second target image area.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, where the programs include instructions for performing the steps of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the first aspect of the present embodiment.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
In an embodiment of the present application, an electronic device first acquires a target image, where the target image includes a first reference point indicating the position of a target layer of a target object. According to the image intensity of a first image region that includes the first reference point, the electronic device then acquires, from the first image region, a first target image region corresponding to a first boundary of the target layer and a second target image region corresponding to a second boundary of the target layer. Finally, the electronic device determines the thickness of the target layer according to the distance between the first target image region and the second target image region. The electronic device can thus process the target image directly and determine the thickness of the target layer from the image intensity of the first image region in which the first reference point indicating the target layer position is located, which helps improve thickness detection efficiency and the accuracy of the detection result.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1A is a diagram illustrating an exemplary architecture of a thickness detection system according to an embodiment of the present disclosure;
fig. 1B is a diagram illustrating an exemplary composition of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic flowchart of a thickness detection method according to an embodiment of the present disclosure;
FIG. 2B is a cross-sectional view of a target object provided by an embodiment of the present application;
fig. 2C is an exemplary diagram of a first image region according to an embodiment of the present application;
FIG. 2D is an exemplary diagram of a user mark point and a reference point in a target image according to an embodiment of the disclosure;
FIG. 2E is an exemplary graph of an average intensity curve provided by an embodiment of the present application;
fig. 2F is a diagram of another example of a first image area provided in an embodiment of the present application;
fig. 3A is a block diagram illustrating functional units of a thickness detection apparatus according to an embodiment of the present disclosure;
fig. 3B is a block diagram of functional units of another thickness detection apparatus provided in the embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
At present, HK-ALO thickness is generally measured manually by engineers, which is time-consuming and labor-intensive. Moreover, the result of manual judgment depends on the experience and habits of each engineer, so measurements made by manual judgment carry inherent bias, which affects the accuracy of the detection result.
In view of the above problems, embodiments of the present application provide a thickness detection method and a related product, and the following describes embodiments of the present application in detail with reference to the accompanying drawings.
Referring to fig. 1A, fig. 1A is a diagram illustrating the architecture of a thickness detection system according to an embodiment of the present disclosure. The thickness detection system includes an electronic device 100 and another device 200, which may be communicatively connected. The electronic device 100 may specifically be a server device and may perform the thickness detection method described in the present disclosure. The other device 200 may be an electronic device other than the electronic device 100, such as a terminal or a server device, or it may be an electron microscope device.
That is, the electronic apparatus 100 may acquire the target image from another terminal or server apparatus (the target image may be acquired from the microscope by the other terminal or server apparatus), or the electronic apparatus 100 may acquire the target image directly from the electron microscope apparatus and then perform the thickness detection method based on the target image.
Referring to fig. 1B, a composition structure of an electronic device (which may be the electronic device 100 described in this embodiment) provided in this embodiment may be as shown in fig. 1B, where the electronic device includes a processor 110, a memory 120, a communication interface 130, and one or more programs 121, where the one or more programs 121 are stored in the memory 120 and configured to be executed by the processor 110, and the one or more programs 121 include instructions for executing any step in the following method embodiments.
The communication interface 130 is used to support communication between the electronic device and other devices. The processor 110 may be, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may implement or perform the various illustrative logical blocks, units, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, e.g., a combination of a DSP and a microprocessor, or multiple microprocessors.
The memory 120 may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In a specific implementation, the processor 110 is configured to perform any one of the steps performed by the electronic device in the method embodiments described below, and when performing data transmission such as sending, optionally invokes the communication interface 130 to complete the corresponding operation.
It should be noted that the structural schematic diagram of the electronic device is merely an example, and more or fewer devices may be specifically included, which is not limited herein.
Referring to fig. 2A, fig. 2A is a schematic flowchart of a thickness detection method according to an embodiment of the present disclosure, where the method may be applied to the electronic device shown in fig. 1A, and as shown in fig. 2A, the thickness detection method includes the following steps:
s201, the electronic equipment acquires a target image.
Wherein the target image comprises a first reference point for indicating a target layer position of a target object, the target layer comprising a first boundary and a second boundary.
In a specific implementation, the target object may include a plurality of adjacent layers. For example, the target object may include an ONOP channel hole structure whose exterior is a tungsten wall structure; the target layer may then be the HK-ALO layer between the ONOP and the outer tungsten wall, which is also adjacent to the bulk oxide in the channel hole structure. The first boundary may be the boundary between the HK-ALO layer and the tungsten wall structure (or the boundary between the HK-ALO layer and the bulk oxide), and correspondingly the second boundary may be the boundary between the HK-ALO layer and the bulk oxide (or the boundary between the HK-ALO layer and the tungsten wall structure).
For example, if the first boundary is the boundary of the HK-ALO layer and the tungsten wall, the second boundary is the boundary of the HK-ALO layer and the bulk oxide, and the first and second boundaries are coaxial rings.
The target image may comprise a cross-sectional view of the target object, the plane of the cross-sectional view being perpendicular to the hole-axis direction of the ONOP channel hole structure. The cross-sectional view may be acquired by a microscope and may be a grayscale image. Referring to fig. 2B, fig. 2B is a cross-sectional view of a target object according to an embodiment of the present disclosure; fig. 2B (a) shows a cross-sectional view of an ONOP, including the HK-ALO layer with the tungsten wall and the bulk oxide layer adjacent to it. In addition, the target image may include a first reference point indicating the position of the HK-ALO layer in the cross-sectional view.
Specifically, the positional relationship of the bulk oxide layer, the HK-ALO layer, and the tungsten wall may be specifically as shown in fig. 2b (b), in which the image area indicated by 1 is the tungsten wall, which is shown by the hatched portion; the image area indicated by 2 is a block oxide, shown in the shape of a circular ring; the image area indicated by 3 is the HK-ALO layer, which is shown by the circular ring shape in the figure.
S202, the electronic equipment acquires a first target image area corresponding to the first boundary and a second target image area corresponding to the second boundary from the first image area according to the image intensity of the first image area in the target image.
Wherein the first image region includes the first reference point.
Specifically, the first image area may be a parallelogram (e.g., rectangular) image area including third and fourth boundaries parallel to each other, and fifth and sixth boundaries parallel to each other; and each of the third boundary and the fourth boundary passes through the image area corresponding to the first boundary in the target image, the image area corresponding to the target layer and the image area corresponding to the second boundary.
Because different layers of the target object have different image intensities, the image intensity changes at the boundaries (the first boundary and the second boundary) between the target layer and other structures in the target object. By locating, according to the image intensity within the first image region, the regions where the image intensity differs markedly, the first target image region and the second target image region can be obtained, corresponding respectively to the positions of the two boundaries of the target layer.
S203, the electronic equipment determines the thickness of the target layer according to the distance between the first target image area and the second target image area.
In a specific implementation, the first target image region may be a first line segment in the first image region, and the second target image region may be a second line segment in the first image region. The distance between them may be determined from the first reference point: for example, obtain the first intersection point of a distance reference line passing through the first reference point with the first line segment, obtain the second intersection point of the distance reference line with the second line segment, and calculate the distance between the two intersection points, which may be taken as the thickness of the target layer. The distance reference line may be the second characteristic line.
The first intersection point and the second intersection point may be pixel points in the target image. After the first intersection point and the second intersection point are determined, the position difference between the two intersection points (in units of image pixels) may be determined, and the thickness expressed in pixels may then be converted into a thickness value in nanometers.
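The distance computation and pixel-to-nanometer conversion described above can be sketched as follows (a minimal illustration, not part of the patent; the `nm_per_pixel` scale factor is an assumed calibration value that would come from the microscope):

```python
import math

def thickness_nm(first_point, second_point, nm_per_pixel):
    """Euclidean distance between the two boundary intersection
    points (given in pixel coordinates), converted to nanometers
    using the image scale. `nm_per_pixel` is assumed to be known
    from the microscope calibration."""
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    return math.hypot(dx, dy) * nm_per_pixel

# Two intersection points 15 px apart on an image scaled at 0.2 nm/px
print(thickness_nm((10, 40), (10, 55), 0.2))  # → 3.0
```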
In a specific implementation, the target image may include a plurality of reference points, the first reference point may be any one of the plurality of reference points, and the electronic device may repeat the above thickness detection method step for the first reference point for each of the plurality of reference points, so as to obtain a plurality of thickness measurement values, and then may average the plurality of thickness measurement values, so as to obtain the target thickness value of the target layer.
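Averaging the per-reference-point measurements, as described above, might look like this (a sketch only; the function name is illustrative, not from the patent):

```python
def target_thickness(measurements):
    """Average the thickness values measured at each of the
    plurality of reference points to obtain the target thickness
    value of the target layer."""
    if not measurements:
        raise ValueError("at least one thickness measurement is required")
    return sum(measurements) / len(measurements)

# Four per-reference-point thickness measurements in nanometers
print(target_thickness([2.8, 3.0, 2.9, 3.1]))  # → 2.95
```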
In an embodiment of the present application, an electronic device first acquires a target image, where the target image includes a first reference point indicating the position of a target layer of a target object. According to the image intensity of a first image region that includes the first reference point, the electronic device then acquires, from the first image region, a first target image region corresponding to a first boundary of the target layer and a second target image region corresponding to a second boundary of the target layer. Finally, the electronic device determines the thickness of the target layer according to the distance between the first target image region and the second target image region. The electronic device can thus process the target image directly and determine the thickness of the target layer from the image intensity of the first image region in which the first reference point indicating the target layer position is located, which helps improve thickness detection efficiency and the accuracy of the detection result.
In one possible example, the first image region includes N sub-image regions distributed along a first feature line passing through the first boundary, the target layer, and the second boundary, N being an integer not less than 2; the acquiring a first target image area and a second target image area from a first image area according to the image intensity of the first image area in the target image includes:
acquiring the average image intensity of each sub-image area in the N sub-image areas to obtain N average image intensities; obtaining an average intensity curve according to the N average image intensities; acquiring a first target intensity from the average intensity curve according to a local extreme value which is closest to a first end of the average intensity curve in the average intensity curve; acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve; acquiring the first target image area from the first image area according to the first target intensity; and acquiring the second target image area from the first image area according to the second target intensity.
Wherein the first characteristic line may be parallel to the third boundary and the fourth boundary of the first image region.
In a specific implementation, the N sub-image regions are distributed along the first characteristic line, and the corresponding average intensity curve is obtained from the image intensities of the N sub-image regions. The curve thus reflects how the image intensity changes between the first boundary and the second boundary within the first image region, so the positions where the image intensity changes can be determined more conveniently and clearly from the curve, and the first target image region and the second target image region corresponding to the different boundaries of the target layer can be determined.
For example, the target object may be an ONOP channel hole structure, and the target layer may be the HK-ALO layer; the first target intensity may be the local extremum in the average intensity curve closest to the first end of the curve, and the second target intensity may be the maximum value in the curve. The first end of the average intensity curve may be the end corresponding to the average image intensity of a target sub-image region among the N sub-image regions, where the target sub-image region is the last sub-image region along a first direction of the first characteristic line, the first direction pointing from the second boundary to the first boundary. That is, when determining the first boundary of the target layer, the image region corresponding to the last local extremum toward the end of the average intensity curve that points to the first boundary is taken as the position of the first boundary, and the image region corresponding to the maximum value in the curve is taken as the position of the second boundary.
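The selection of the two boundary positions from the curve can be sketched as follows (an illustrative reading of the scheme above, assuming the curve is stored with its first end at the highest index, so the last local maximum along the scan is the one nearest that end; real data would first be smoothed):

```python
def boundary_indices(curve):
    """Return (first_boundary_idx, second_boundary_idx) for a 1-D
    average-intensity curve. The second boundary is taken at the
    global maximum; the first boundary at the local maximum nearest
    the curve's first end (assumed here to be the high-index end)."""
    second = max(range(len(curve)), key=lambda i: curve[i])
    first = None
    for i in range(1, len(curve) - 1):
        if curve[i] >= curve[i - 1] and curve[i] > curve[i + 1]:
            first = i  # keep the last local maximum encountered
    return first, second

curve = [1, 3, 2, 5, 9, 6, 4, 7, 5, 2]
print(boundary_indices(curve))  # → (7, 4)
```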
In a specific implementation, the N sub-image regions may be N line segments, and the average image intensity of each sub-image region is the average image intensity of each line segment in the N line segments. For the case where the target layer is the HK-ALO layer in the ONOP channel hole structure, N sub-image regions (e.g., N line segments) may be distributed along a normal line direction of the contour line of the target layer at a first reference point passing through the first reference point, and the N line segments may be perpendicular to the normal line. When the target image includes at least three reference points, the length of each line segment in the N line segments may be equal to the length of a connection line between a second reference point and a third reference point, and the second reference point and the third reference point are points adjacent to the first reference point in the at least three reference points.
For example, please refer to fig. 2C and fig. 2E, taking the HK-ALO layer as the target layer and the first image region surrounded by the four solid lines in portion (d) of fig. 2C as an example.
The N sub-image regions may be N line segments (only the dotted lines a, b, and c are shown as examples in fig. 2C (d)), and the lengths of the N line segments are all equal to the distance between points a and b shown in fig. 2C (b). After the average image intensity of each line segment is calculated to obtain N average intensities, an average intensity curve as shown in fig. 2E may be obtained from them, for example by smoothing the N average intensities. The vertical axis in fig. 2E represents image intensity, and the horizontal axis represents the positions in the target image of the N sub-image regions of the first image region; that is, the abscissa of each point on the curve indicates the position in the target image of the image region corresponding to that point, and the ordinate indicates the average image intensity of that image region. The maximum, the local extrema, the first target intensity, and the second target intensity in the average intensity curve are all ordinates of points on the curve.
The first boundary may be the boundary of the HK-ALO layer with the tungsten wall, and the local extremum may be the last local maximum in the positive direction of the abscissa of the average intensity curve, as indicated by point a in the average intensity curve of fig. 2E (the intersection of dashed line i and the average intensity curve in fig. 2E); the second boundary may be the boundary of HK-ALO with the bulk oxide, and the maximum may be the maximum value, as indicated by point B in the average intensity curve of fig. 2E (the intersection of dashed line ii with the average intensity curve in fig. 2E).
In this example, a first coordinate system is established with the first reference point as the origin, the solid line x as the x-axis, and the solid line y as the y-axis (the origin of the first coordinate system in fig. 2C (c)). The "0" on the horizontal axis in fig. 2E may indicate the position of the first reference point, and the unit of the y-axis in the first coordinate system may be the same as the unit of the horizontal axis in fig. 2E. After points A and B, corresponding to the first target intensity and the second target intensity in the average intensity curve, are determined, their abscissas indicate the positions of the first target image region and the second target image region respectively. For example, if the abscissa of point A in fig. 2E is 18, then the intersection of the corresponding first target image region with the y-axis of the first coordinate system has y-coordinate 18; that is, the intersection lies in the positive direction of the y-axis at a distance of 18 from the first reference point.
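The construction of the average intensity curve from the per-segment intensities, including the smoothing step, might be sketched as follows (the excerpt says only that the curve "may be obtained by smoothing the N average intensities"; a simple moving average is an assumed choice):

```python
def average_intensity_curve(segments, window=3):
    """Compute the average image intensity of each of the N line
    segments, then smooth the N averages with a moving average of
    the given window size to obtain the average intensity curve."""
    means = [sum(seg) / len(seg) for seg in segments]
    half = window // 2
    smoothed = []
    for i in range(len(means)):
        lo, hi = max(0, i - half), min(len(means), i + half + 1)
        smoothed.append(sum(means[lo:hi]) / (hi - lo))
    return smoothed

# Five segments of pixel intensities; the fourth crosses the bright boundary
segs = [[1, 1], [2, 4], [3, 3], [10, 10], [5, 5]]
print(average_intensity_curve(segs))
```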
As can be seen, in this example, the electronic device obtains an average intensity curve according to average image intensities of N sub-image regions distributed along the first characteristic line in the first image, and then obtains a first target intensity and a second target intensity from the average intensity curve according to a maximum value in the average intensity curve and a local extreme value closest to the first end of the average intensity curve, and further obtains a first target image region and a second target image region corresponding to the two target intensities from the first image region, where the average intensity curve is determined according to intensities of different sub-image regions in the first image region, and determines the first target image region and the second target image region according to the average intensity curve, which is beneficial to improving accuracy of determining the target image regions.
In one possible example, the acquiring a first target intensity from the average intensity curve according to a local extreme value of the average intensity curve closest to a first end of the average intensity curve, and acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve, includes the following steps:
obtaining a first reference intensity and a second reference intensity from the average intensity curve, wherein the first reference intensity comprises the local extreme value nearest to the first end of the average intensity curve, and the second reference intensity comprises the maximum value in the average intensity curve; acquiring the reference intensity of the target layer according to the first reference intensity, the second reference intensity and the average intensity curve; determining the first target intensity according to a preset first parameter, the reference intensity of the target layer and the first reference intensity; and determining the second target intensity according to a preset second parameter, the reference intensity of the target layer and the second reference intensity.
The values of the first parameter and the second parameter may be set by a user according to actual needs, or may be set by the electronic device, and specifically, may be determined based on at least one of the following: parameters of the equipment used to acquire the target image, such as the microscope equipment itself, the pixel size and noise level of the target image, and the deposition quality of the various layer structures in the target object.
In a specific implementation, if the target layer is the HK-ALO layer, point A in fig. 2E may correspond to the first reference intensity and point B to the second reference intensity.
Since the first reference intensity and the second reference intensity may respectively correspond to different boundaries of the target layer, the reference intensity of the target layer may be an average value of average intensities between the first reference intensity and the second reference intensity in the average intensity curve, that is, an average image intensity of an image area in which the target layer is located in the first image area.
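The curve-analysis steps above can be sketched in Python. This is a minimal illustration with a synthetic curve; the function names and the index convention (first end = right end of the array) are assumptions for illustration, not part of the patent:

```python
import numpy as np

def reference_intensities(curve):
    """Locate the two reference points on an average intensity curve:
    the global maximum (second reference intensity, point B) and the
    interior local maximum nearest the first (right) end (first
    reference intensity, point A).  Returns their indices (i_a, i_b)."""
    curve = np.asarray(curve, dtype=float)
    i_b = int(np.argmax(curve))
    # interior points that are strictly higher than both neighbours
    local_max = np.flatnonzero((curve[1:-1] > curve[:-2]) &
                               (curve[1:-1] > curve[2:])) + 1
    i_a = int(local_max.max())  # the local maximum nearest the right end
    return i_a, i_b

def layer_reference_intensity(curve, i_a, i_b):
    """Average image intensity of the curve between the two reference
    points, used as the reference intensity of the target layer."""
    lo, hi = sorted((i_a, i_b))
    return float(np.mean(np.asarray(curve, dtype=float)[lo:hi + 1]))
```

For the synthetic curve `[0, 1, 0, 5, 4, 3, 2, 3, 2, 1, 0]`, the maximum sits at index 3 and the last interior local maximum at index 7, so the layer reference intensity is the mean of the values between those two indices.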
In specific implementation, for the purpose of quantifying and eliminating algorithm deviation, the first reference intensity and the second reference intensity acquired by the electronic device, which correspond to the boundaries of the target layer, may be adjusted according to the first parameter and the second parameter to obtain the first target intensity and the second target intensity, thereby realizing adjustment of the target layer thickness determined by the electronic device. The values of the first parameter and the second parameter can be set and adjusted by a user so that the target layer thickness determined according to them meets the user's expectation, and the electronic device can subsequently process the reference intensities corresponding to other reference points according to the same parameters to obtain target layer thicknesses that are more likely to meet the user's expectation. The value of the first parameter may be equal to that of the second parameter.
Wherein the parameter values of the first parameter and the second parameter may be values close to 1, for example, values not less than 0.90 and not more than 1.00.
Specifically, the first target average intensity may be determined by the following formula:

A1 = |1 − a1| × AI + a1 × b1;

where A1 is the first target average intensity, a1 is the first parameter, AI is the reference intensity of the target layer, and b1 is the first reference intensity.
The second target average intensity may be determined by the following formula:

A2 = |1 − a2| × AI + a2 × b2;

where A2 is the second target average intensity, a2 is the second parameter, AI is the reference intensity of the target layer, and b2 is the second reference intensity. In a specific implementation, the values of a1 and a2 may be equal.
For example, if the second reference intensity is 1800, the reference intensity of the target layer is 1600, and the second parameter is 0.9, the second target intensity is |1 − 0.9| × 1600 + 0.9 × 1800 = 1780.
If the first parameter is equal to 1, the first reference intensity is the first target average intensity; if the second parameter is equal to 1, the second reference intensity is the second target average intensity.
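The formula above can be evaluated directly. In this hypothetical helper, `param` stands for a1 or a2, `layer_ref` for AI, and `boundary_ref` for b1 or b2:

```python
def target_intensity(param, layer_ref, boundary_ref):
    """A = |1 - a| * AI + a * b, as in the formulas above."""
    return abs(1.0 - param) * layer_ref + param * boundary_ref
```

With the numbers of the example above (b2 = 1800, AI = 1600, a2 = 0.9) this yields 1780, and with a parameter equal to 1 it returns the reference intensity unchanged.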
It can be seen that, in this example, the electronic device first obtains the first reference intensity and the second reference intensity from the average intensity curve, and then determines the first target intensity and the second target intensity based on the first reference intensity and the second reference intensity, respectively, in combination with the first parameter and the second parameter, and the reference intensity of the target layer, which is beneficial to improving the flexibility of determining the first target intensity and the second target intensity.
In one possible example, in a case where a plurality of the first target intensities are included in the average intensity curve, the plurality of first target intensities correspond one-to-one to a plurality of first candidate image regions in the first image region; the acquiring the first target image area from the first image area according to the first target intensity includes: acquiring the relative position relation between the first target image area and a first reference image area corresponding to the first reference strength according to the parameter value of the first parameter; acquiring the first target image area from the plurality of first candidate image areas according to the relative position relation between the first target image area and the first reference image area;
in a case where a plurality of the second target intensities are included in the average intensity curve, the plurality of second target intensities correspond one-to-one to a plurality of second candidate image regions in the first image region; the acquiring the second target image area from the first image area according to the second target intensity includes: acquiring the relative position relation between the second target image area and a second reference image area corresponding to the second reference intensity according to the parameter value of the second parameter; and acquiring the second target image area from the plurality of second candidate image areas according to the relative position relation between the second target image area and the second reference image area.
In a specific implementation, it is considered that two or more points with the same ordinate and different abscissas may exist in the average intensity curve, that is, sub-image regions with different positions but the same average intensity value exist in the first image region. Therefore, when the first target intensity corresponds to a plurality of first candidate image regions, the first target image region may be determined from the plurality of first candidate image regions according to the first parameter.
Specifically, an auxiliary image area in the first image area, which is located between the first reference image area and the second reference image area and has the first reference image area and the second reference image area as a boundary, is used as a reference standard of the relative position relationship.
When the parameter value of the first parameter is within the first parameter value range, the first target image area is within the auxiliary image area; when the parameter value of the first parameter is within the second parameter value range, the first target image area is outside the auxiliary image area. When the parameter value of the second parameter is within the third parameter value range, the second target image area is within the auxiliary image area; when the parameter value of the second parameter is within the fourth parameter value range, the second target image area is outside the auxiliary image area.
In a specific implementation, the first parameter value range and the third parameter value range may be less than 1, and the second parameter value range and the fourth parameter value range may be greater than 1.
For example, referring to fig. 2F, fig. 2F is an exemplary diagram of another first image region provided in the present embodiment, in which the four solid lines at the outermost periphery enclose the first image region, the two dotted lines in the first image region are respectively the first reference image area and the second reference image area, the white portion of the first image region is the auxiliary image area, and the shaded portion lies outside the auxiliary image area. If the second parameter is within the third parameter value range, the second target image area is within the white portion in fig. 2F; if the second parameter is within the fourth parameter value range, the second target image area is within the shaded portion in fig. 2F.
For example, referring to fig. 2E, taking the target layer as the HK-ALO layer as an example, point A in fig. 2E corresponds to the first reference intensity, and point B corresponds to the second reference intensity. If the second reference intensity is 1800, the reference intensity of the target layer is 1600, and the second parameter is 0.90, the second target intensity is 1780, and there are two points with an intensity value of 1780 in the average intensity curve, located on the two sides of point B (that is, the two second candidate image regions are respectively located on the outer side and the inner side of the auxiliary image area). Since the second parameter 0.90 is smaller than 1, that is, within the third parameter value range, the second target image area should be inside the auxiliary image area, corresponding to the point with an ordinate of 1780 on the inner side of point B in the average intensity curve (that is, the intersection point of the average intensity curve and the dotted line iii in fig. 2E). If the second parameter is 1.1, the second target intensity value is still 1780, but since the second parameter is greater than 1, the second target image area should be outside the auxiliary image area, corresponding to the point with an ordinate of 1780 on the other side of point B.
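The disambiguation rule above can be expressed as a short sketch. This is a hypothetical helper: candidate positions are indices along the first characteristic line, and `aux_lo`/`aux_hi` bound the auxiliary image area between the two reference image areas:

```python
def pick_candidate(candidates, aux_lo, aux_hi, param):
    """Among candidate positions sharing the same target intensity, keep
    the one inside the auxiliary region when the parameter is below 1,
    and the one outside it when the parameter is above 1."""
    inside = [c for c in candidates if aux_lo <= c <= aux_hi]
    outside = [c for c in candidates if not (aux_lo <= c <= aux_hi)]
    chosen = inside if param < 1 else outside
    return chosen[0]
```

For instance, with candidates at positions 12 and 25 and an auxiliary region spanning 10 to 20, a parameter of 0.9 selects position 12 and a parameter of 1.1 selects position 25.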
In this example, when the target intensity corresponds to the plurality of candidate image regions, the electronic device obtains the target image region from the plurality of candidate image regions according to the parameter corresponding to the target intensity, which is beneficial to improving the accuracy of determining the target image region.
In one possible example, after obtaining the first target intensity and the second target intensity from the average intensity curve, the method further comprises: acquiring a first detection area from the average intensity curve according to the first target intensity; if the first detection area does not match a preset first condition, discarding the first reference point, where the first condition includes: the average intensity curve within the first detection region monotonically decreases.
In a specific implementation, it is considered that image noise may affect the accuracy of the determined target layer boundary, i.e. the first target intensity or the second target intensity, and therefore the result can be verified. Specifically, since the difference in image intensity between different layers of the target object is a relatively stable feature (for example, the image intensity of the HK-ALO layer is lower than that of the bulk oxide layer, so its image appears grayer than that of the bulk oxide layer), the determined boundary, i.e. the first target intensity, can be verified based on the change of the intensity curve around it.
In a specific implementation, the first detection region may be a curve segment from a point of the average intensity curve where the first target intensity is located to the first end of the average intensity curve.
In a particular implementation, the target layer may be a HK-ALO layer.
For example, referring to fig. 2E, the target layer is the HK-ALO layer, the first target intensity is the last local maximum (point A) near the right end of the average intensity curve, and the first detection area is the region to the right of point A in the average intensity curve. If the curve in this region decreases monotonically, the first condition is matched, and the electronic device does not discard the first reference point, i.e. the electronic device retains the thickness value of the target layer determined from the first reference point. If not, the electronic device discards the first reference point, does not perform subsequent processing on it, and processes other reference points.
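The first verification can be sketched as follows. This is a minimal illustration; the function name and the non-strict comparison (tolerating flat plateaus) are assumptions:

```python
import numpy as np

def passes_first_check(curve, i_first):
    """First condition: the curve from the point of the first target
    intensity to the first (right) end must decrease monotonically."""
    seg = np.asarray(curve[i_first:], dtype=float)
    return bool(np.all(np.diff(seg) <= 0))
```

A curve segment that rises anywhere to the right of the first-target-intensity point fails the check, and the corresponding first reference point would be discarded.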
It can be seen that, in this example, the electronic device obtains the first detection area from the average intensity curve according to the first target intensity, discards the first reference point if the first detection area is not matched with the preset first condition, verifies the first target intensity obtained according to the first reference point, and discards the first reference point when the verification fails, which is beneficial to improving accuracy and efficiency of thickness detection.
In one possible example, after obtaining the first target intensity and the second target intensity from the average intensity curve, the method further comprises: acquiring a second detection area and a third detection area from the average intensity curve according to the first target intensity and the second target intensity; and if the difference value between the average intensity of the second detection area and the average intensity of the third detection area is smaller than a first preset threshold value, discarding the first reference point.
Specifically, the second detection area may be the curve segment from the point of the average intensity curve where the second target intensity is located to the second end of the average intensity curve, and the third detection area may be the curve segment whose end points are the point where the first target intensity is located and the point where the second target intensity is located.
In a particular implementation, the target layer may be a HK-ALO layer.
Referring to fig. 2E, the first target intensity may be point A in the average intensity curve, and the second target intensity may be the maximum value (i.e. point B). The second detection area may be the region to the left of point B in the average intensity curve (the region of the first structure adjacent to the target layer, determined based on the second target intensity; when the target layer is the HK-ALO layer, the first structure may be a bulk oxide layer), and the third detection area may be the region between points A and B (the region of the target layer, determined based on the first target intensity and the second target intensity). After the average intensity values of the two detection areas are respectively calculated, the difference between the two average intensity values can be calculated. Since the second target intensity corresponds to the boundary between the target layer and the adjacent structure, i.e. the second target image area, it can be verified based on the difference between the image intensity of the target layer and that of the structure adjacent to it.
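The second verification can be sketched as follows. This is an illustrative helper under the assumption that index `i_b` (point B) lies to the left of index `i_a` (point A) along the curve:

```python
import numpy as np

def passes_contrast_check(curve, i_a, i_b, threshold):
    """Second verification: the mean of the second detection area (from
    point B to the left end, the adjacent structure) must differ from the
    mean of the third detection area (between B and A, the layer itself)
    by at least `threshold`; otherwise the reference point is discarded."""
    curve = np.asarray(curve, dtype=float)
    adjacent = curve[:i_b + 1]   # second detection area
    layer = curve[i_b:i_a + 1]   # third detection area
    return bool(abs(adjacent.mean() - layer.mean()) >= threshold)
```

A small difference between the two means suggests the detected boundary does not actually separate two distinct layers, so the first reference point would be discarded.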
In this example, the electronic device obtains the second detection area and the third detection area from the average intensity curve according to the first target intensity and the second target intensity, and if a difference between the average intensity of the second detection area and the average intensity of the third detection area is smaller than a first preset threshold, the first reference point is discarded, which is beneficial to improving accuracy and efficiency of thickness detection.
In one possible example, after obtaining the first target intensity and the second target intensity from the average intensity curve, the method further comprises: acquiring a fourth detection area from the average intensity curve according to the first target intensity and the second target intensity; and if the difference between the average intensity of the fourth detection area and the first target intensity is smaller than a second preset threshold, and/or the difference between the average intensity of the fourth detection area and the second target intensity is smaller than a third preset threshold, discarding the first reference point.
Specifically, the fourth detection area may be a curve segment in the average intensity curve, where a point where the first target average intensity is located and a point where the second target average intensity is located are end points.
In a particular implementation, the target layer may be a HK-ALO layer.
Referring to fig. 2E, the first target intensity may be point A in the average intensity curve, and the second target intensity may be the maximum value (i.e. point B). The fourth detection area may be the region between points A and B in the average intensity curve (the region of the target layer, determined based on the first target intensity and the second target intensity). That is, the determined target layer boundaries (i.e. the first target intensity and the second target intensity) are verified through the relationship between the image intensity at the determined boundaries and the image intensity of the target layer region.
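The third verification can likewise be sketched. This is an illustrative helper; `t2` and `t3` stand for the second and third preset thresholds:

```python
import numpy as np

def passes_boundary_check(curve, i_a, i_b, t2, t3):
    """Third verification: the mean of the fourth detection area (between
    the two target points) must differ from the first target intensity by
    at least `t2` and from the second target intensity by at least `t3`;
    otherwise the first reference point is discarded."""
    curve = np.asarray(curve, dtype=float)
    lo, hi = sorted((i_a, i_b))
    layer_mean = curve[lo:hi + 1].mean()
    return bool(abs(layer_mean - curve[i_a]) >= t2 and
                abs(layer_mean - curve[i_b]) >= t3)
```

If the layer-region mean is almost equal to one of the boundary intensities, the boundary is not well separated from the layer interior and the reference point would be discarded.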
In this example, the electronic device obtains the fourth detection area from the average intensity curve according to the first target intensity and the second target intensity, and discards the first reference point if the difference between the average intensity of the fourth detection area and the first target intensity is smaller than a second preset threshold and/or the difference between the average intensity of the fourth detection area and the second target intensity is smaller than a third preset threshold, which is beneficial to improving the accuracy and efficiency of thickness detection.
In one possible example, the target image includes at least three reference points for indicating a target layer location, the at least three reference points including: the first reference point, a second reference point and a third reference point adjacent to the first reference point, respectively; the first image region is obtained by: acquiring a first characteristic line passing through the first reference point according to the second reference point and the third reference point; acquiring a second characteristic line passing through a first reference point according to the first characteristic line; and acquiring the first image area according to the first characteristic line and the second characteristic line, wherein the first characteristic line and the second characteristic line are vertically intersected.
In a specific implementation, the first reference point may be any one of the at least three reference points, and the first image region may be determined according to the first reference point and two reference points adjacent to the first reference point. And determining the first image area according to the three reference points, wherein the image intensity of the first image area integrates the change of the overall image intensity of the target layer within a certain range, and the accuracy of the determined first target image area and the second target image area is improved.
Further, for a structure having a plurality of coaxially nested layers, such as an ONOP channel hole, since the layers are nested along the normal direction, if the target image areas corresponding to the two boundaries of the target layer are determined from the image intensity of the first image area, the first image area may include a plurality of sub-image areas distributed along the normal direction, and the first target image area and the second target image area are acquired from the first image area according to the image intensity variation of the plurality of sub-image areas.
That is, assuming that the image shape of the target layer is an approximate circular ring shape, the boundaries of the target layer and other structures may be two approximate circular boundaries of the approximate circular ring constituting the target layer, and at least three reference points are within the image area of the target layer. The first reference point, the second reference point and the third reference point all fall within the annular image area of the approximate annular ring, the first characteristic line may be parallel to a connecting line of the second reference point and the third reference point and pass through the first reference point (the first characteristic line may be regarded as an approximate tangent of an arc in which the at least three reference points pass through the first reference point), and the second characteristic line may perpendicularly intersect with the first characteristic line at the first reference point (the second characteristic line may be regarded as an approximate normal of an arc in which the at least three reference points pass through the first reference point).
The method for determining the first image area according to the first characteristic line and the second characteristic line may specifically be: two line segments (a third boundary and a fourth boundary) parallel to the first characteristic line and two line segments (a fifth boundary and a sixth boundary) parallel to the second characteristic line are acquired, and an area enclosed by the four line segments is used as a first image area. For example: acquiring a third characteristic line which is parallel to the second characteristic line and passes through a second reference point; acquiring a fourth characteristic line which is parallel to the second characteristic line and passes through a third reference point; acquiring a fifth characteristic line and a sixth characteristic line which are parallel to the first characteristic line and are a preset distance away from a first reference point; and determining an image area surrounded by the third characteristic line, the fourth characteristic line, the fifth characteristic line and the sixth characteristic line as a first image area. The distance between the fifth characteristic line and the first reference point in the first direction of the second characteristic line is a first distance, and the distance between the sixth characteristic line and the first reference point in the first direction of the second characteristic line is a second distance.
The lengths of the two line segments parallel to the first characteristic line may be equal to the length of the connection line between the second reference point and the third reference point, and the lengths of the two line segments parallel to the second characteristic line may be set by a user according to actual needs. That is, the first distance and the second distance may be set by a user.
For example, referring to fig. 2C, the partial image in fig. 2C (a) includes the cross-sectional view shown in fig. 2B and a plurality of reference points for indicating the positions of the HK-ALO layer on the cross-sectional view; the contour line through the plurality of reference points may be regarded as the reference contour line of the HK-ALO layer. Taking the three reference points in the rectangular frame in fig. 2C (a) as an example, the three reference points may respectively correspond to points a, b and c in fig. 2C (b), where point a may be the first reference point, point b the second reference point, and point c the third reference point.
In fig. 2C (b), the first characteristic line may be the solid line x, which is parallel to the line connecting points b and c and may be regarded as an approximate tangent of the HK-ALO reference contour line passing through point a; the second characteristic line may be the solid line y, i.e. the approximate normal of the HK-ALO reference contour line passing through point a. Further, the third characteristic line and the fourth characteristic line may be the dotted lines 1 and 2 in fig. 2C (c); the fifth and sixth characteristic lines may be the dotted lines 3 and 4, so the first image region may be the image region enclosed by the dotted lines 1, 2, 3 and 4 in fig. 2C (c).
Further, a first coordinate system is established with the solid line x as the x-axis, the solid line y as the y-axis, and point a as the origin, so as to describe the distances between the fifth and sixth characteristic lines and the first reference point. Point a, as the first reference point, is (0, 0), the second characteristic line is the y-axis, and the first direction is the direction indicated by the arrow on the y-axis. The fifth characteristic line, i.e. the dotted line 3, may be represented by y = n, and the sixth characteristic line, i.e. the dotted line 4, may be represented by y = −m, where n and m are integers not less than 0; that is, the fifth characteristic line and the sixth characteristic line fall on the two sides of the first reference point in the first direction.
The units of the x-axis and the y-axis may be numbers of pixel points; if the values of m and n are both 60, the first image area may be the rectangular area enclosed by the four lines in fig. 2C (c). The first image area includes the HK-ALO layer image and other image content near its boundaries; since the image intensities of different layers differ, the boundary positions of the HK-ALO layer can be determined from the intensity variation within the first image area, and the thickness of the HK-ALO layer can be further determined.
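The geometric construction of the first image region from three neighbouring reference points can be sketched as a membership test. This is a hypothetical helper (points are (x, y) pixel coordinates; names are illustrative):

```python
import numpy as np

def in_first_image_region(pt, p_a, p_b, p_c, n, m):
    """True if image point `pt` falls inside the rectangle bounded by the
    perpendiculars through p_b and p_c (third/fourth characteristic lines)
    and the lines y = n and y = -m of the local frame whose x-axis is the
    approximate tangent (parallel to the chord p_b-p_c) and whose origin
    is the first reference point p_a."""
    pt, p_a, p_b, p_c = (np.asarray(p, dtype=float) for p in (pt, p_a, p_b, p_c))
    tangent = p_c - p_b
    tangent = tangent / np.linalg.norm(tangent)   # first characteristic line direction
    normal = np.array([-tangent[1], tangent[0]])  # second characteristic line direction
    t = float(np.dot(pt - p_a, tangent))          # abscissa in the local frame
    s = float(np.dot(pt - p_a, normal))           # ordinate in the local frame
    t_b = float(np.dot(p_b - p_a, tangent))
    t_c = float(np.dot(p_c - p_a, tangent))
    return min(t_b, t_c) <= t <= max(t_b, t_c) and -m <= s <= n
```

With p_a at the origin and p_b, p_c at (−5, 0) and (5, 0), the region reduces to an axis-aligned rectangle of width 10 and height n + m centered on the tangent line.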
Specifically, the at least three reference points in the partial image in fig. 2C (a) may be obtained by preprocessing the user mark points, and may specifically be determined in the following manner:
acquiring a reference image comprising user mark points; smoothing the user mark points to obtain a smooth contour line; and acquiring at least three reference points from the smooth contour line. When the user mark points are smoothed, a Savitzky-Golay (savgol) filter may be adopted, and the user mark points may be smoothed based on the distance between each user mark point and a predetermined layer centroid to obtain the smooth contour line; the smooth contour line is then interpolated according to its arc length to obtain the at least three reference points, where the distances between any two adjacent reference points of the at least three reference points are equal, that is, the at least three reference points are distributed at equal intervals on the smooth contour line.
For example, referring to fig. 2D, fig. 2D (a) is a reference image including user mark points, obtained after the user marks the position of the HK-ALO layer in the cross-sectional view of the target object shown in fig. 2B (a); after the user mark points are smoothed, the obtained image may be as shown in fig. 2D (b), where the solid line in fig. 2D (b) is the smooth contour line; fig. 2D (c) is a partially enlarged view of fig. 2D (b), in which the dotted lines connect the user mark points and the solid line is the smooth contour line; the resulting at least three reference points may be as shown in fig. 2D (d).
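The preprocessing above can be sketched with numpy only. A plain circular moving average stands in here for the Savitzky-Golay (savgol) smoothing mentioned in the text, and the function name and parameters are illustrative assumptions:

```python
import numpy as np

def smooth_and_resample(points, num_points, window=5):
    """Smooth user-marked contour points via their distance to the layer
    centroid, then resample the smoothed contour at equal arc length.
    `points` is a sequence of (x, y) marks around the ring-shaped layer."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    theta = np.arctan2(d[:, 1], d[:, 0])
    order = np.argsort(theta)                   # walk once around the ring
    theta = theta[order]
    radius = np.hypot(d[order, 0], d[order, 1])
    kernel = np.ones(window) / window           # circular moving average
    radius = np.convolve(np.r_[radius[-(window // 2):], radius,
                               radius[:window // 2]], kernel, mode='valid')
    smooth = centroid + np.c_[radius * np.cos(theta), radius * np.sin(theta)]
    # cumulative arc length along the smoothed contour
    seg = np.r_[0.0, np.cumsum(np.hypot(*np.diff(smooth, axis=0).T))]
    target = np.linspace(0.0, seg[-1], num_points)
    return np.c_[np.interp(target, seg, smooth[:, 0]),
                 np.interp(target, seg, smooth[:, 1])]
```

Applied to marks lying on a circle, the result is a set of equally spaced points that stay at roughly the circle's radius from the centroid. In practice `scipy.signal.savgol_filter` could replace the moving average.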
Referring to fig. 3A, fig. 3A is a block diagram illustrating functional units of a thickness detection apparatus according to an embodiment of the present disclosure. The thickness detection apparatus 30 may be applied to an electronic device as shown in fig. 1A, and the thickness detection apparatus 30 includes: a first acquisition unit 301, a second acquisition unit 302, and a determination unit 303, wherein,
the first acquiring unit 301 is configured to acquire a target image, where the target image includes a first reference point indicating a position of a target layer, and the target layer includes a first boundary and a second boundary;
the second obtaining unit 302 is configured to obtain, from a first image region in the target image, a first target image region corresponding to the first boundary and a second target image region corresponding to the second boundary, where the first image region includes the first reference point, according to an image intensity of the first image region;
the determining unit 303 is configured to determine the thickness of the target layer according to a distance between the first target image area and the second target image area.
In one possible example, the first image region includes N sub-image regions distributed along a first feature line, the first feature line passing through the first boundary, the target layer, and the second boundary, N being an integer not less than 2; the second obtaining unit 302 is specifically configured to: acquire the average image intensity of each of the N sub-image regions to obtain N average image intensities; obtain an average intensity curve according to the N average image intensities; acquire a first target intensity from the average intensity curve according to a local extreme value of the average intensity curve closest to a first end of the average intensity curve; acquire a second target intensity from the average intensity curve according to the maximum value in the average intensity curve; and acquire the first target image area and the second target image area from the first image area according to the first target intensity and the second target intensity.
In one possible example, in the aspect of acquiring the first target intensity from the average intensity curve according to the local extreme value of the average intensity curve closest to the first end of the average intensity curve, and acquiring the second target intensity from the average intensity curve according to the maximum value in the average intensity curve, the second acquiring unit 302 is specifically configured to:
obtain a first reference intensity and a second reference intensity from the average intensity curve, wherein the first reference intensity comprises the local extreme value nearest to the first end of the average intensity curve, and the second reference intensity comprises the maximum value in the average intensity curve; determine the first target intensity according to a preset first parameter, the reference intensity of the target layer and the first reference intensity; and determine the second target intensity according to a preset second parameter, the reference intensity of the target layer and the second reference intensity.
In one possible example, when the average intensity curve contains a plurality of the first target intensities, the plurality of first target intensities correspond one-to-one to a plurality of first candidate image regions in the first image region; in terms of acquiring the first target image region from the first image region according to the first target intensity, the second acquiring unit 302 is specifically configured to: acquire the relative position relationship between the first target image region and a first reference image region corresponding to the first reference intensity according to the parameter value of the first parameter; and acquire the first target image region from the plurality of first candidate image regions according to that relative position relationship.
When the average intensity curve contains a plurality of the second target intensities, the plurality of second target intensities correspond one-to-one to a plurality of second candidate image regions in the first image region; in terms of acquiring the second target image region from the first image region according to the second target intensity, the second acquiring unit 302 is specifically configured to: acquire the relative position relationship between the second target image region and a second reference image region corresponding to the second reference intensity according to the parameter value of the second parameter; and acquire the second target image region from the plurality of second candidate image regions according to that relative position relationship.
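A minimal sketch of this disambiguation step, assuming the relative position relationship reduces to "nearest candidate on a given side of the reference region along the feature line" (that convention, and all names, are assumptions):

```python
def pick_candidate(candidate_positions, ref_position, side):
    # Choose the candidate region nearest the reference region on the
    # side implied by the parameter value ("before" or "after" it
    # along the feature line); None if no candidate lies on that side.
    if side == "before":
        eligible = [p for p in candidate_positions if p < ref_position]
        return max(eligible) if eligible else None
    eligible = [p for p in candidate_positions if p > ref_position]
    return min(eligible) if eligible else None

first_region = pick_candidate([2, 5, 9], ref_position=6, side="before")  # 5
second_region = pick_candidate([2, 5, 9], ref_position=6, side="after")  # 9
```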
In one possible example, the thickness detection apparatus 30 further includes a first detection unit, configured to: after the first target intensity is acquired from the average intensity curve according to the local extreme value closest to a first end of the average intensity curve and the second target intensity is acquired according to the maximum value in the average intensity curve, acquire a first detection region from the average intensity curve according to the first target intensity; and discard the first reference point if the first detection region does not satisfy a preset first condition, the first condition being that the average intensity curve decreases monotonically within the first detection region.
In one possible example, the thickness detection apparatus 30 further includes a second detection unit, configured to: after the first target intensity and the second target intensity are acquired as above, acquire a second detection region and a third detection region from the average intensity curve according to the first target intensity and the second target intensity; and discard the first reference point if the difference between the average intensity of the second detection region and the average intensity of the third detection region is smaller than a first preset threshold.
In one possible example, the thickness detection apparatus 30 further includes a third detection unit, configured to: after the first target intensity and the second target intensity are acquired as above, acquire a fourth detection region from the average intensity curve according to the first target intensity and the second target intensity; and discard the first reference point if the difference between the average intensity of the fourth detection region and the first target intensity is smaller than a second preset threshold, and/or the difference between the average intensity of the fourth detection region and the second target intensity is smaller than a third preset threshold.
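The three plausibility checks described above can be sketched together as a single gate on the reference point. The window boundaries and threshold values below are assumptions, since the patent does not define how each detection region is cut from the curve:

```python
import numpy as np

def keep_reference_point(curve, i_first, i_second, th1=5.0, th2=2.0, th3=2.0):
    # Check 1: the curve must decrease monotonically over the first
    # detection region (here: a small window ending at the first target).
    window = curve[max(0, i_first - 2): i_first + 1]
    if not np.all(np.diff(window) < 0):
        return False
    # Check 2: the second and third detection regions (here: the spans
    # before the first target and after the second) must differ by th1.
    if abs(curve[:i_first].mean() - curve[i_second:].mean()) < th1:
        return False
    # Check 3: the fourth detection region (between the two targets)
    # must stand clear of both target intensities.
    between = curve[i_first:i_second + 1].mean()
    if abs(between - curve[i_first]) < th2 or abs(between - curve[i_second]) < th3:
        return False
    return True

curve = np.array([9, 6, 3, 5, 8, 12, 15, 11], dtype=float)
ok = keep_reference_point(curve, i_first=2, i_second=6)   # True: clear dip and peak
flat = keep_reference_point(np.full(8, 5.0), 2, 6)        # False: no monotonic drop
```

A reference point failing any check is discarded rather than used for thickness measurement.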
In one possible example, the target image includes at least three reference points for indicating the target layer position, the at least three reference points including the first reference point, and a second reference point and a third reference point each adjacent to the first reference point. The thickness detection apparatus 30 further includes a third acquiring unit, configured to: acquire a first characteristic line passing through the first reference point according to the second reference point and the third reference point; acquire a second characteristic line passing through the first reference point according to the first characteristic line; and acquire the first image region according to the first characteristic line and the second characteristic line, where the first characteristic line and the second characteristic line intersect perpendicularly.
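A geometric sketch of the two characteristic lines, representing each line as a point plus a unit direction. Taking the first line's direction from the second reference point toward the third is an assumption about what "according to the second reference point and the third reference point" means:

```python
import numpy as np

def feature_lines(p1, p2, p3):
    # First characteristic line: passes through the first reference
    # point p1; its direction is taken from the neighbouring
    # reference points, p2 -> p3.
    d1 = np.asarray(p3, dtype=float) - np.asarray(p2, dtype=float)
    d1 /= np.linalg.norm(d1)
    # Second characteristic line: through p1, perpendicular to the first
    # (rotate the direction by 90 degrees).
    d2 = np.array([-d1[1], d1[0]])
    origin = np.asarray(p1, dtype=float)
    return (origin, d1), (origin, d2)

(line1_pt, d1), (line2_pt, d2) = feature_lines((0, 0), (1, 0), (1, 2))
# d1 = (0, 1) and d2 = (-1, 0): unit vectors crossing at the reference point.
```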
When an integrated unit is used, a block diagram of the functional units of the thickness detection apparatus provided in the embodiment of the present application is shown in fig. 3B. In fig. 3B, the thickness detection apparatus includes a processing module 310 and a communication module 311. The processing module 310 is used to control and manage the actions of the thickness detection apparatus, for example, the steps performed by the first acquiring unit 301, the second acquiring unit 302, and the determining unit 303, and/or other processes for the techniques described herein. The communication module 311 is used to support interaction between the thickness detection apparatus and other devices. As shown in fig. 3B, the thickness detection apparatus may further include a storage module 312, which is used to store the program code and data of the thickness detection apparatus.
The processing module 310 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example, a combination of one or more microprocessors, or of a DSP and a microprocessor. The communication module 311 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 312 may be a memory.
For all relevant details of the scenarios involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here. The thickness detection apparatus can perform the steps performed by the electronic device in the thickness detection method shown in fig. 2A.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only one kind of logical functional division, and other divisions may be used in practice; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the associated hardware; the program may be stored in a computer-readable memory, which may include a flash memory disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing has described the embodiments of the present application in detail, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A method of thickness detection, the method comprising:
acquiring a target image, wherein the target image comprises a first reference point used for indicating a target layer position of a target object, and the target layer comprises a first boundary and a second boundary;
acquiring a first target image area corresponding to the first boundary and a second target image area corresponding to the second boundary from the first image area according to the image intensity of the first image area in the target image, wherein the first image area comprises the first reference point;
determining the thickness of the target layer according to the distance between the first target image area and the second target image area.
2. The method according to claim 1, wherein the first image region includes N sub-image regions distributed along a first feature line, the first feature line passing through the first boundary, the target layer, and the second boundary, N being an integer not less than 2; the acquiring, according to the image intensity of a first image region in the target image, a first target image region corresponding to the first boundary and a second target image region corresponding to the second boundary from the first image region includes:
acquiring the average image intensity of each sub-image area in the N sub-image areas to obtain N average image intensities;
obtaining an average intensity curve according to the N average image intensities;
acquiring a first target intensity from the average intensity curve according to a local extreme value which is closest to a first end of the average intensity curve in the average intensity curve;
acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve;
acquiring the first target image area from the first image area according to the first target intensity;
and acquiring the second target image area from the first image area according to the second target intensity.
3. The method according to claim 2, wherein the acquiring a first target intensity from the average intensity curve according to a local extreme value in the average intensity curve closest to a first end of the average intensity curve, and acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve, comprises:
obtaining a first reference intensity and a second reference intensity from the average intensity curve, wherein the first reference intensity comprises a local extreme value nearest to a first end of the average intensity curve in the average intensity curve, and the second reference intensity comprises a maximum value in the average intensity curve;
acquiring the reference intensity of the target layer according to the first reference intensity, the second reference intensity and the average intensity curve;
determining the first target intensity according to a preset first parameter, the reference intensity of the target layer, and the first reference intensity;
and determining the second target intensity according to a preset second parameter, the reference intensity of the target layer, and the second reference intensity.
4. The method according to claim 3, wherein in a case where a plurality of the first target intensities are included in the average intensity curve, the plurality of first target intensities are in one-to-one correspondence with a plurality of first candidate image regions in the first image region; the acquiring the first target image area from the first image area according to the first target intensity includes:
acquiring the relative position relation between the first target image area and a first reference image area corresponding to the first reference intensity according to the parameter value of the first parameter;
acquiring the first target image area from the plurality of first candidate image areas according to the relative position relation between the first target image area and the first reference image area;
in a case where a plurality of the second target intensities are included in the average intensity curve, the plurality of second target intensities correspond one-to-one to a plurality of second candidate image regions in the first image region; the acquiring the second target image area from the first image area according to the second target intensity includes:
acquiring the relative position relation between the second target image area and a second reference image area corresponding to the second reference intensity according to the parameter value of the second parameter;
and acquiring the second target image area from the plurality of second candidate image areas according to the relative position relation between the second target image area and the second reference image area.
5. The method according to claim 2 or 3, wherein after the acquiring a first target intensity from the average intensity curve according to a local extreme value in the average intensity curve closest to a first end of the average intensity curve, and acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve, the method further comprises:
acquiring a first detection area from the average intensity curve according to the first target intensity;
if the first detection area does not match a preset first condition, discarding the first reference point, where the first condition includes: the average intensity curve within the first detection region monotonically decreases.
6. The method according to claim 2 or 3, wherein after the acquiring a first target intensity from the average intensity curve according to a local extreme value in the average intensity curve closest to a first end of the average intensity curve, and acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve, the method further comprises:
acquiring a second detection area and a third detection area from the average intensity curve according to the first target intensity and the second target intensity;
and if the difference value between the average intensity of the second detection area and the average intensity of the third detection area is smaller than a first preset threshold value, discarding the first reference point.
7. The method according to claim 2 or 3, wherein after the acquiring a first target intensity from the average intensity curve according to a local extreme value in the average intensity curve closest to a first end of the average intensity curve, and acquiring a second target intensity from the average intensity curve according to the maximum value in the average intensity curve, the method further comprises:
acquiring a fourth detection area from the average intensity curve according to the first target intensity and the second target intensity;
and if the difference between the average intensity of the fourth detection area and the first target intensity is smaller than a second preset threshold, and/or the difference between the average intensity of the fourth detection area and the second target intensity is smaller than a third preset threshold, discarding the first reference point.
8. The method of any of claims 1-3, wherein the target image includes at least three reference points for indicating a target layer location, the at least three reference points including: the first reference point, a second reference point and a third reference point adjacent to the first reference point, respectively;
the first image region is obtained by:
acquiring a first characteristic line passing through the first reference point according to the second reference point and the third reference point;
acquiring a second characteristic line passing through the first reference point according to the first characteristic line;
and acquiring the first image area according to the first characteristic line and the second characteristic line, wherein the first characteristic line and the second characteristic line intersect perpendicularly.
9. A thickness detection apparatus, comprising: a first acquisition unit, a second acquisition unit, and a determination unit, wherein,
the first acquisition unit is used for acquiring a target image, the target image comprises a first reference point used for indicating the position of a target layer, and the target layer comprises a first boundary and a second boundary;
the second acquiring unit is configured to acquire a first target image region corresponding to the first boundary and a second target image region corresponding to the second boundary from the first image region according to an image intensity of the first image region in the target image, where the first image region includes the first reference point;
the determining unit is configured to determine the thickness of the target layer according to a distance between the first target image area and the second target image area.
10. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-8.
11. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
CN202110527783.9A 2021-05-14 2021-05-14 Thickness detection method and related product Active CN113240724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110527783.9A CN113240724B (en) 2021-05-14 2021-05-14 Thickness detection method and related product


Publications (2)

Publication Number Publication Date
CN113240724A true CN113240724A (en) 2021-08-10
CN113240724B CN113240724B (en) 2022-03-25

Family

ID=77134305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110527783.9A Active CN113240724B (en) 2021-05-14 2021-05-14 Thickness detection method and related product

Country Status (1)

Country Link
CN (1) CN113240724B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100125202A1 (en) * 2008-11-19 2010-05-20 Medison Co., Ltd. Region setting for intima media thickness measurement in an ultrasound system
CN102032875A (en) * 2009-09-28 2011-04-27 王吉林 Image-processing-based cable sheath thickness measuring method
CN102252623A (en) * 2011-06-24 2011-11-23 西安工程大学 Measurement method for lead/ground wire icing thickness of transmission line based on video variation analysis
CN103687718A (en) * 2011-07-20 2014-03-26 株式会社普利司通 Bead filler testing device, program for bead filler testing, and bead filler testing method
CN104820819A (en) * 2014-02-04 2015-08-05 摩如富公司 Method for validating the use of a real finger as a support for a fingerprint
CN106441125A (en) * 2016-11-01 2017-02-22 淮阴师范学院 Thin film thickness measurement method and system
CN106485203A (en) * 2016-09-19 2017-03-08 天津大学 Carotid ultrasound image Internal-media thickness measuring method and system
CN106601642A (en) * 2015-10-15 2017-04-26 三星电子株式会社 Method of measuring thickness, method of processing image and electronic system performing the same
CN109844494A (en) * 2016-10-06 2019-06-04 艾瑞斯国际有限公司 Dynamic focusing system and method
US20200167914A1 (en) * 2017-07-19 2020-05-28 Altius Institute For Biomedical Sciences Methods of analyzing microscopy images using machine learning
WO2020204277A1 (en) * 2019-04-05 2020-10-08 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAI-SHAN WU ET AL: "Segmentation and thickness measurement of glomerular basement membranes from electron microscopy images", Microscopy *
LI Guiyun et al.: "Metal film thickness measurement method based on the SPR effect of a cylindrical-lens Otto structure", Chinese Journal of Lasers *
WANG Jilin et al.: "Precision measurement of cable sheath thickness based on image processing", Chinese Journal of Electron Devices *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117091516A (en) * 2022-05-12 2023-11-21 广州镭晨智能装备科技有限公司 Method, system and storage medium for detecting thickness of circuit board protective layer
CN117091516B (en) * 2022-05-12 2024-05-28 广州镭晨智能装备科技有限公司 Method, system and storage medium for detecting thickness of circuit board protective layer
CN115564837A (en) * 2022-11-17 2023-01-03 歌尔股份有限公司 Visual positioning method, device and system

Also Published As

Publication number Publication date
CN113240724B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN113240724B (en) Thickness detection method and related product
US20230086961A1 (en) Parallax image processing method, apparatus, computer device and storage medium
WO2017043258A1 (en) Calculating device and calculating device control method
WO2017033422A1 (en) Image processing device and image processing method
KR20160012064A (en) Electron beam drawing device, electron beam drawing method, and recrding media
JP5221584B2 (en) Image processing apparatus, image processing method, and image processing program
JP5810031B2 (en) Semiconductor circuit pattern measuring apparatus and method
JP5114302B2 (en) Pattern inspection method, pattern inspection apparatus, and pattern processing apparatus
CN113034527B (en) Boundary detection method and related product
CN112050741B (en) Method for measuring period length of periodic grid array
JP7152506B2 (en) Imaging device
CN117115194A (en) Contour extraction method, device, equipment and medium based on electron microscope image
US8855401B2 (en) Methods and systems involving measuring complex dimensions of silicon devices
CN114494118A (en) Method for detecting width of target object and method for detecting length of target object
CN114936987A (en) Lens distortion correction method, device, equipment and storage medium
CN113256700B (en) Method and device for detecting thickness of layer, electronic equipment and readable storage medium
US20110024621A1 (en) Scanning electron microscope control device, control method, and program
US11055852B2 (en) Fast automatic trimap generation and optimization for segmentation refinement
CN108036736B (en) Groove curvature measuring method and device and defect number predicting method and device
TWI826185B (en) External parameter determination method and image processing device
CN117215164B (en) Photoetching simulation method and device
CN113989383B (en) Method for improving accuracy and range of binocular range
CN118533098B (en) Calibration method, target and system of laser profiler
CN112237434B (en) Method for moving focus of computer tomography apparatus, medium and computer tomography apparatus
CN115905237B (en) Image processing method, device, HUD and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant