CN115393371A - Image processing method, image processing apparatus, and optical detection device - Google Patents

Image processing method, image processing apparatus, and optical detection device

Info

Publication number
CN115393371A
CN115393371A (application CN202110547043.1A)
Authority
CN
China
Prior art keywords
image
processed
pixel
foreground
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110547043.1A
Other languages
Chinese (zh)
Inventor
冯子寅
王豪
季敏标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Junzhen Life Science Co ltd
Original Assignee
Shanghai Junzhen Life Science Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Junzhen Life Science Co., Ltd.
Priority application: CN202110547043.1A (published as CN115393371A)
Related PCT application: PCT/CN2021/099632 (WO2022241879A1)
Legal status: Pending

Classifications

    • G06N3/04 Neural networks; Architecture, e.g. interconnection topology
    • G06N3/08 Neural networks; Learning methods
    • G06T7/00 Image analysis
    • G06T7/0012 Biomedical image inspection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation involving thresholding
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T2207/10056 Image acquisition modality; Microscopic image
    • G06T2207/20036 Morphological image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, and an optical detection device. The image processing method includes: acquiring an image to be processed, the image to be processed including foreground regions and background regions of different brightness; binarizing the image to be processed and determining one or more foreground regions in the image to be processed according to the binarization result; and, for each of the one or more foreground regions, determining whether the foreground region is a target object image and, if it is, determining target object parameters from the foreground region.

Description

Image processing method, image processing apparatus, and optical detection device
Technical Field
The present disclosure relates to the field of optical detection technologies, and in particular, to an image processing method, an image processing apparatus, and an optical detection device.
Background
Optical detection is increasingly used in fields such as chemistry and biology. Through optical detection, target objects in a sample (such as cells, cell fragments, and biological particles such as yeast and algae) can be counted, observed morphologically, located, and so on, so as to obtain relevant properties of the sample. However, in existing optical detection, the target object, especially a transparent target object, often needs to be stained or otherwise treated in advance in order to be observed well. This complicates the optical detection process, makes direct online, in-situ detection of the sample inconvenient, and the staining or other treatment may cause additional changes in the sample, making the detection result inaccurate.
Disclosure of Invention
An object of the present disclosure is to provide an image processing method, an image processing apparatus, and an optical detection device.
According to a first aspect of the present disclosure, there is provided an image processing method including:
acquiring an image to be processed, wherein the image to be processed comprises a foreground area and a background area with different brightness;
carrying out binarization processing on the image to be processed, and determining one or more foreground areas in the image to be processed according to a binarization processing result; and
for each foreground region of the one or more foreground regions, determining whether the foreground region is a target object image and, in the case that the foreground region is a target object image, determining target object parameters from the foreground region.
In some embodiments, the image to be processed is obtained by photographing a sample that may contain the target object under a combination of bright field illumination and dark field illumination.
In some embodiments, the target object in the sample has a different refractive index than other portions in the sample, the target object is transparent, and the target object is not stained.
In some embodiments, the binarizing the image to be processed and determining one or more foreground regions in the image to be processed according to the result of the binarizing includes:
respectively comparing the gray value of each pixel in the image to be processed with a preset gray threshold value;
determining whether the pixel belongs to a first pixel of a foreground region or a second pixel of a background region according to a comparison result of the gray value of the pixel and the preset gray threshold value;
dividing continuously distributed first pixels into the same foreground area;
wherein different foreground regions in the image to be processed are separated by background regions.
In some embodiments, the binarizing the image to be processed and determining one or more foreground regions in the image to be processed according to the result of the binarizing further comprises:
determining a preset gray threshold corresponding to the image to be processed according to the brightness distribution of the image to be processed.
In some embodiments, determining whether a pixel belongs to a first pixel of a foreground region or a second pixel of a background region according to a comparison of a gray value of the pixel with the preset gray threshold value comprises:
when the gray value of the pixel is larger than the preset gray threshold value, determining that the pixel is a first pixel belonging to a foreground area;
and when the gray value of the pixel is less than or equal to the preset gray threshold value, determining that the pixel is a second pixel belonging to a background area.
In some embodiments, the binarizing the image to be processed and determining one or more foreground regions in the image to be processed according to the result of the binarizing further includes:
correcting a defect of at least one of the one or more foreground regions.
In some embodiments, correcting the defect of at least one of the one or more foreground regions comprises:
for at least one second pixel in the image to be processed, comparing the gray value of the second pixel with the average gray value of a preset number of pixels around the second pixel;
and determining whether to correct the second pixel into the first pixel according to the comparison result of the gray value of the second pixel and the average gray value.
In some embodiments, determining whether to modify the second pixel to be the first pixel based on the comparison of the gray value of the second pixel to the average gray value comprises:
when the gray value of the second pixel is less than or equal to the average gray value, correcting the second pixel into the first pixel.
In some embodiments, correcting the defect of at least one of the one or more foreground regions comprises:
determining, for at least one foreground region of the one or more foreground regions, whether an edge of the foreground region coincides with a preset shape edge;
when the edge of the foreground area is not in accordance with the edge of a preset shape, performing expansion processing on the foreground area to correct the edge of the foreground area; and
eroding the expanded foreground region so that the area of the corrected foreground region is consistent with the area of the foreground region before correction.
In some embodiments, the pre-set shaped edge comprises at least one of a circular arc shaped edge and an elliptical arc shaped edge.
In some embodiments, the foreground region is represented by coordinates of pixels located on an edge of the foreground region.
In some embodiments, for each foreground region of the one or more foreground regions, determining whether the foreground region is a target object image, and in the case that the foreground region is a target object image, determining the target object parameter from the foreground region comprises:
determining whether the foreground area is a target object image according to the edge of the foreground area;
when the foreground area is a target object image, judging whether target objects contained in the foreground area are in an agglomeration state or not;
when the target objects contained in the foreground area are in an aggregation state, separating the multiple target objects in the aggregation state according to a preset algorithm, and determining a target object parameter of each target object in the multiple target objects;
and when the target objects contained in the foreground area are not in the aggregation state, determining the target object parameters of the target objects.
In some embodiments, determining whether the foreground region is a target object image according to the edge of the foreground region comprises:
determining an envelope rectangle which can contain the foreground region and has a minimum size according to the edge of the foreground region;
comparing the width of the envelope rectangle with a first preset threshold value;
when the width of the envelope rectangle is larger than or equal to the first preset threshold, determining that the foreground area is a target object image;
and when the width of the envelope rectangle is smaller than the first preset threshold, determining that the foreground area is not the target object image.
In some embodiments, when the foreground region is a target object image, determining whether target objects contained in the foreground region are in an aggregated state comprises:
respectively comparing the width of the envelope rectangle with a second preset threshold value and comparing the length of the envelope rectangle with the second preset threshold value;
when the width and the length of the envelope rectangle are both smaller than or equal to the second preset threshold, determining that the target objects contained in the foreground area are not in an agglomeration state;
when at least one of the width and the length of the envelope rectangle is greater than the second preset threshold, determining that the target objects contained in the foreground area are in an agglomeration state;
wherein the second preset threshold is greater than or equal to the first preset threshold.
In some embodiments, when the target objects included in the foreground region are in an aggregated state, separating the plurality of target objects in the aggregated state according to a preset algorithm includes:
separating the plurality of target objects in the aggregated state according to at least one of a Hough circle recognition algorithm, a watershed algorithm, a hot spot detection algorithm, a support vector machine algorithm, and a u-net algorithm.
In some embodiments, the target object parameters include at least one of a target object count, a target object size, and a target object position.
In some embodiments, before performing binarization processing on the image to be processed and determining one or more foreground regions in the image to be processed according to a result of the binarization processing, the image processing method further includes performing at least one of the following pre-processing on the image to be processed:
adjusting at least one of contrast and brightness of the image to be processed;
carrying out graying processing on the image to be processed;
normalizing the image to be processed; and
denoising the image to be processed.
In some embodiments, the image processing method further comprises:
after determining target object parameters for all foreground regions in the image to be processed, displaying the target object parameters in the form of at least one of an annotation graph and an annotation list.
In some embodiments, the target object comprises a cell.
According to a second aspect of the present disclosure, an image processing apparatus is proposed, the image processing apparatus comprising a processor and a memory, the memory having stored thereon instructions which, when executed by the processor, carry out the steps of the image processing method as described above.
According to a third aspect of the present disclosure, there is provided an optical detection apparatus comprising:
a light source apparatus, the light source apparatus comprising:
an illumination light source configured to generate illumination light; and
a diaphragm disposed on the exit light path of the illumination light source, the diaphragm including:
a shading screen configured to shade part of the illumination light;
a first light-transmitting portion opened on the light-shielding screen and covering a center of the diaphragm, the first light-transmitting portion being configured to transmit a part of the illumination light to form bright field illumination on the sample; and
a second light-transmitting portion opened on the shading screen and located at a periphery of the first light-transmitting portion, the second light-transmitting portion being configured to transmit a portion of the illumination light to form dark field illumination of the sample;
a sample stage configured to carry the sample;
an imaging device configured to image the sample under illumination by the light source device to produce an image to be processed, the imaging device comprising an objective lens; and
the image processing apparatus as described above.
In some embodiments, the distance R1 between the outer edge of the first light-transmitting portion and the center of the diaphragm, the distance l between the diaphragm and the sample stage, and the numerical aperture n of the objective lens satisfy the following relationship:
R1 ≤ l·tan[arcsin(n)/3].
In some embodiments, the distance R2 between the inner edge of the second light-transmitting portion and the center of the diaphragm, the distance l between the diaphragm and the sample stage, and the numerical aperture n of the objective lens satisfy the following relationship:
R2 > l·tan[arcsin(n)].
according to a fourth aspect of the present disclosure, a computer-readable storage medium is presented, having stored thereon instructions which, when executed, implement the steps of the image processing method as described above.
According to a fifth aspect of the present disclosure, a computer program product is presented, comprising instructions which, when executed by a processor, implement the steps of the image processing method as described above.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a flow diagram of an image processing method according to an example embodiment of the present disclosure;
fig. 2 shows a schematic structural diagram of a light source apparatus, a sample and an objective lens according to an exemplary embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of the structural parameters of the light source device, sample and objective lens of FIG. 2;
fig. 4 shows a flowchart illustrating step S200 in an image processing method according to an exemplary embodiment of the present disclosure;
FIG. 5 illustrates an image to be processed in a specific example of the present disclosure;
Fig. 6 shows a flowchart illustrating step S200 in an image processing method according to another exemplary embodiment of the present disclosure;
Fig. 7 shows a flowchart illustrating step S300 in an image processing method according to an exemplary embodiment of the present disclosure;
fig. 8 shows a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure.
Note that in the embodiments described below, the same reference numerals are used in common between different drawings to denote the same portions or portions having the same functions, and a repetitive description thereof will be omitted. In this specification, like reference numerals and letters are used to designate like items, and therefore, once an item is defined in one drawing, further discussion thereof is not required in subsequent drawings.
For ease of understanding, the positions, sizes, ranges, and the like of the structures shown in the drawings may not represent their actual positions, sizes, and ranges. Therefore, the disclosed invention is not limited to the positions, sizes, ranges, etc. shown in the drawings. Furthermore, the figures are not necessarily to scale, and some features may be exaggerated to show details of particular components.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. That is, the image processing methods and devices herein are shown by way of example to illustrate different embodiments of the present disclosure and are not intended to be limiting. Those skilled in the art will appreciate that they merely illustrate ways in which the invention may be practiced, and are not exhaustive.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In order to achieve online, in-situ optical detection of a target object, in particular a transparent target object, the present disclosure proposes an image processing method, an image processing apparatus, and an optical detection device. In the technical solution of the present disclosure, the target objects that may be contained in a sample are analyzed based on the brightness distribution in an image to be processed obtained from that sample. Hereinafter, the technical solution of the present disclosure is explained in detail mainly taking a transparent living cell as the target object; however, those skilled in the art will understand that the target object may also be other biological particles such as dead cells, algae, yeast, and the like.
In an exemplary embodiment of the present disclosure, as shown in fig. 1, an image processing method may include:
step S100, acquiring an image to be processed, wherein the image to be processed comprises a foreground area and a background area with different brightness.
In particular, because the target object and other objects in the sample (e.g., impurities, the substrate, etc.) refract light differently, the image to be processed obtained by imaging the sample has a certain brightness distribution. The foreground region may be brighter or darker than the background region; in the subsequent steps, depending on the optical properties of the target object to be analyzed, the corresponding foreground regions are extracted from the image to be processed, and the remaining part of the image is taken as the background region. Note that when multiple target objects contained in the same image to be processed are analyzed, the foreground and background regions determined for each target object in the subsequent steps may differ.
In some embodiments, a sample that may contain a target object may be photographed under a combination of bright field illumination and dark field illumination to obtain an image to be processed.
As shown in fig. 2, the light source device for illumination may include an illumination light source (not shown in the figure) and a diaphragm 112.
Wherein the illumination source may be configured to produce illumination light, typically in the visible wavelength band. In some embodiments, the illumination source may include at least one of a thermal radiation source and a light emitting diode to produce visible light, such as white light or near white light, to facilitate optical viewing of the sample.
The diaphragm 112 may be disposed on the exit light path of the illumination light source, and may produce, at the sample position where the sample 200 is located, composite illumination combining bright field illumination and dark field illumination by adjusting which portions of the illumination light are transmitted. Specifically, the diaphragm 112 may include a shading screen 112a, a first light-transmitting portion 112b, and a second light-transmitting portion 112c. The shading screen 112a may be made of a non-transparent material and is configured to shield part of the illumination light. The first light-transmitting portion 112b is opened in the shading screen 112a, covers the center of the diaphragm 112, and is configured to transmit part of the illumination light to form bright field illumination. The second light-transmitting portion 112c is also opened in the shading screen 112a, is located at the periphery of the first light-transmitting portion 112b, and is configured to transmit part of the illumination light to form dark field illumination.
As shown in fig. 2, when illumination light incident on the sample 200 at a small angle through the first light-transmitting portion 112b interacts with the sample 200, direct light 410 close to the optical axis (white region) is produced; this corresponds to bright field illumination and enhances the brightness of the entire observation field, improving the observation effect. When illumination light incident on the sample 200 at a large angle through the second light-transmitting portion 112c interacts with the sample 200, scattered light 420 relatively far from the optical axis (gray region) is produced; this corresponds to dark field illumination and better reveals the edges of transparent objects in the sample, improving the observation effect. Furthermore, as can be seen in fig. 2, part of the direct light 410 may lie outside the scattered light 420; owing to the parameters of the objective lens 300, this light is not collected by the objective lens 300 and has essentially no effect on observation, so it is not described in detail.
In the light source device, by adjusting the relative sizes of the first light-transmitting portion 112b and the second light-transmitting portion 112c, the proportions of bright field illumination and dark field illumination in the composite illumination can be changed to achieve the desired illumination effect. Specifically, as the relative light-passing size of the first light-transmitting portion 112b increases, more illumination light is incident on the sample at a small angle, the proportion of bright field illumination increases, and the observed field of view becomes brighter, but the imaging of a transparent target object may become poorer; as the relative light-passing size of the second light-transmitting portion 112c increases, more illumination light is incident on the sample at a large angle, the proportion of dark field illumination increases, and the imaging of transparent objects improves, but the imaging of non-transparent or stained objects may deteriorate and the overall brightness of the field of view decreases.
Fig. 3 shows some relevant structural parameters of the light source device, the sample and the objective lens, and by adjusting the mutual relationship among them, a more ideal illumination effect can be obtained.
In some embodiments, to avoid an excessive proportion of bright field illumination in the composite illumination, the following relationship is generally required: the angle β1 is not more than α/3, where tan β1 = R1/l and sin α = n, R1 being the distance between the outer edge of the first light-transmitting portion 112b and the center of the diaphragm 112, l the distance between the diaphragm 112 and the sample position, and n the numerical aperture of the objective lens 300 used with the light source device. It follows that R1 ≤ l·tan[arcsin(n)/3].
In some embodiments, in order to prevent direct light entering the objective lens 300 near its outer edge from interfering with imaging, i.e. so that the scattered light generated by interaction with the sample covers the aperture of the objective lens 300, the following relationship is generally required: the angle β2 is greater than α, where tan β2 = R2/l and sin α = n, R2 being the distance between the inner edge of the second light-transmitting portion 112c and the center of the diaphragm 112, l the distance between the diaphragm 112 and the sample position, and n the numerical aperture of the objective lens 300 used with the light source device. It follows that R2 > l·tan[arcsin(n)].
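The two relationships above lend themselves to a quick numerical check. The short Python sketch below is an illustration only, not part of the patent; the distance and numerical aperture values are assumed.

```python
import math

def diaphragm_bounds(distance_l: float, numerical_aperture: float):
    """Evaluate R1 <= l*tan(arcsin(n)/3) and R2 > l*tan(arcsin(n))."""
    alpha = math.asin(numerical_aperture)      # half-angle accepted by the objective lens
    r1_max = distance_l * math.tan(alpha / 3)  # largest allowed outer radius of the bright-field opening
    r2_min = distance_l * math.tan(alpha)      # radius the dark-field ring must stay outside of
    return r1_max, r2_min

# Assumed example values: diaphragm-to-sample distance 40 mm, numerical aperture 0.25.
r1_max, r2_min = diaphragm_bounds(40.0, 0.25)
print(f"R1 should not exceed {r1_max:.2f} mm; R2 should exceed {r2_min:.2f} mm")
```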
In addition, as shown in fig. 2 and fig. 3, in some embodiments the light source device may further include a light attenuating element 114, which may be disposed on the exit light path of the illumination light source and configured to reduce the brightness of the illumination light, so as to improve the effect of the composite illumination and avoid an over-bright bright field. The light attenuating element 114 may be a piece of ground glass, a polarizer, or the like, and may be disposed at one or more locations between the illumination light source and the diaphragm 112 or between the diaphragm 112 and the first lens 113. To make it easier to determine the size and position of the light attenuating element 114 on the optical axis, the light attenuating element 114 may be located on the portion of the optical path where the outgoing light is collimated.
By arranging the diaphragm in the light source device, composite illumination combining dark field illumination and bright field illumination can be formed on the sample, and a transparent target object can be shown more clearly while the imaging brightness is maintained. Furthermore, under composite illumination the size of the target object image can be close to, or even substantially equal to, the actual size of the target object, so that the size of the target object can be determined directly from the size of the target object image. Alternatively, a size conversion factor may be determined in advance, and the size of the target object image multiplied by this factor to obtain the size of the target object.
When this light source device is used to illuminate the sample, a clear image to be processed can be generated without staining the target object, enabling online, in-situ optical measurement, simplifying the measurement process, and improving the measurement results.
After the image to be processed is acquired, the image to be processed can be directly subjected to binarization processing and analyzed for target objects possibly contained in the sample and corresponding target object parameters. Or, before the binarization processing is carried out, some preprocessing can be carried out on the image to be processed so as to optimize the effects of subsequent processing and analysis.
In particular, in some embodiments, at least one of the contrast and the brightness of the image to be processed may be adjusted to make the brightness distribution in the image to be processed more reasonable, facilitating better distinguishing between foreground and background regions in subsequent steps.
In some embodiments, the image to be processed may be grayed out, so as to erase unnecessary color information, so as to simplify the image data to be processed, and at the same time, help to better distinguish the foreground region from the background region.
In some embodiments, the image to be processed may be normalized, so that the gray values of the pixels in the image to be processed are distributed within a preset gray value range, thereby simplifying data processing.
In some embodiments, the image to be processed may also be subjected to denoising processing, so as to reduce possible defects in the image to be processed before binarization processing is performed on the image to be processed, thereby helping to better distinguish the foreground region from the background region.
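As an illustration of these optional pre-processing steps, a minimal sketch is given below; the use of OpenCV and the specific gain, range, and kernel values are assumptions of this illustration, not requirements of the patent.

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Optional pre-processing before binarization: graying, contrast/brightness
    adjustment, normalization to a preset gray range, and denoising."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)        # erase unnecessary color information
    adjusted = cv2.convertScaleAbs(gray, alpha=1.2, beta=10)  # illustrative contrast gain and brightness offset
    normalized = cv2.normalize(adjusted, None, 0, 255, cv2.NORM_MINMAX)  # spread gray values over 0..255
    denoised = cv2.GaussianBlur(normalized, (3, 3), 0)        # suppress noise before binarization
    return denoised
```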
Returning to fig. 1, the image processing method may further include:
and S200, performing binarization processing on the image to be processed, and determining one or more foreground areas in the image to be processed according to the result of the binarization processing.
The foreground region may include a target object, but it is understood that in some cases, the foreground region may also include other objects such as impurities; while the background area typically includes a substrate, such as a solution carrying the target object, a slide, etc.
In some embodiments, the binarization processing may be performed based on a preset grayscale threshold value, specifically, as shown in fig. 4, step S200 may include:
step S210, respectively comparing the gray value of each pixel in the image to be processed with a preset gray threshold value;
step S220, determining whether the pixel belongs to a first pixel of a foreground region or a second pixel of a background region according to a comparison result of the gray value of the pixel and a preset gray threshold;
step S230, dividing the first pixels which are continuously distributed into the same foreground area;
wherein different foreground regions in the image to be processed may be separated by background regions. For example, in fig. 5, the image to be processed may include foreground regions 510 and background regions 520 having different brightness, and the different foreground regions 510 may be spaced apart by the background regions 520. After the image to be processed in fig. 5 is subjected to binarization processing, the gray-scale value of the foreground region 510 may be 1, and the gray-scale value of the background region 520 may be 0.
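A minimal sketch of steps S210 to S230 follows. It assumes the target object is brighter than the background, uses OpenCV connected-component labeling as one possible way to group continuous pixels, and takes the preset threshold from the caller.

```python
import cv2
import numpy as np

def extract_foreground_regions(gray: np.ndarray, preset_threshold: int):
    """Steps S210-S230: classify each pixel against the preset gray threshold,
    then group continuously distributed first (foreground) pixels into regions
    separated from one another by background."""
    _, binary = cv2.threshold(gray, preset_threshold, 1, cv2.THRESH_BINARY)  # 1 = first pixel, 0 = second pixel
    num_labels, labels = cv2.connectedComponents(binary.astype(np.uint8))    # continuous first pixels share a label
    regions = [np.argwhere(labels == i) for i in range(1, num_labels)]       # label 0 is the background region
    return binary, regions
```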
In addition, when one or more foreground regions in the image to be processed are determined according to the result of the binarization processing, the foreground regions may also be determined using, for example, a u-net algorithm, a watershed algorithm, or the like. For example, a u-net algorithm may be used to determine the seeds of the foreground region, and then further employ a watershed algorithm to determine the foreground region.
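A sketch of this alternative is shown below. Because no trained u-net is available here, distance-transform peaks stand in for the u-net seeds; this substitution, the 0.6 seed cutoff, and the OpenCV watershed call are assumptions of the illustration.

```python
import cv2
import numpy as np

def watershed_regions(gray: np.ndarray, binary: np.ndarray) -> np.ndarray:
    """Split touching foreground blobs with a watershed. `gray` is the 8-bit
    image to be processed and `binary` the 0/1 mask from thresholding."""
    mask = binary.astype(np.uint8)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, seeds = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)  # 0.6 is an assumed seed cutoff
    _, markers = cv2.connectedComponents(seeds.astype(np.uint8))
    markers = markers + 1                      # background pixels become label 1 ("sure background")
    markers[(mask > 0) & (seeds == 0)] = 0     # remaining foreground is the unknown zone to be flooded
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # cv2.watershed expects a 3-channel image
    return cv2.watershed(color, markers)       # pixels of each separated region share one positive label
```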
In some embodiments, before comparing the gray value of each pixel in the image to be processed with the preset gray threshold, the preset gray threshold corresponding to the image to be processed may also be determined according to the brightness distribution of the image to be processed. This is because the concentration of the target object may vary widely from sample to sample, which results in the overall brightness of the different images to be processed being widely varied. For example, when the concentration of living cells in a sample is high, the overall brightness of the image to be processed is generally bright; whereas when the concentration of living cells in the sample is low, the overall brightness of the image to be processed may be dark. In view of such differences, if a fixed preset grayscale threshold is employed for different to-be-processed images, it may be difficult to accommodate such a change in overall brightness, resulting in inappropriate binarization, e.g., erroneously determining substantially the entire to-be-processed image as a foreground region or a background region.
The preset gray threshold corresponding to the image to be processed may be determined according to the brightness distribution of the image to be processed in various ways. In some embodiments, the preset gray level threshold may be calculated from a gray level distribution in the image to be processed (e.g., characterized by parameters such as a gray level histogram, average brightness, or brightness variance) based on a function that has been pre-fitted, which may be a linear function, a quadratic function, other polynomial function, an exponential function, a logarithmic function, or the like. Or, in some embodiments, a preset grayscale threshold may also be obtained by using a model trained based on a machine learning method, so as to adapt to the change of the overall brightness of different images to be processed, and optimize binarization processing.
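The fitted function itself is not specified by the patent; the placeholder below simply uses a linear combination of the mean and standard deviation of the gray levels, with made-up coefficients, to show where such a function would sit.

```python
import numpy as np

def adaptive_gray_threshold(gray: np.ndarray, a: float = 1.0, b: float = 0.5, c: float = 0.0) -> float:
    """Derive the preset gray threshold from the brightness distribution of the
    image to be processed. The linear form and the coefficients a, b, c are
    placeholders for whatever pre-fitted function or trained model is used."""
    return a * float(gray.mean()) + b * float(gray.std()) + c
```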
When the brightness of the target object is higher relative to the brightness of other components in the sample (e.g., when the target object is a living cell), determining whether the pixel belongs to the first pixel of the foreground region or the second pixel of the background region according to the comparison result of the gray value of the pixel and the preset gray threshold may include:
when the gray value of the pixel is larger than a preset gray threshold, determining that the pixel is a first pixel belonging to the foreground area;
and when the gray value of the pixel is less than or equal to a preset gray threshold value, determining that the pixel is a second pixel belonging to the background area.
Of course, in other embodiments, when the brightness of the target object is lower relative to the brightness of other components in the sample (e.g., the target object is dead cells darker than the background area), the opposite criteria may also be used, namely:
when the gray value of the pixel is smaller than or equal to a preset gray threshold value, determining that the pixel is a first pixel belonging to the foreground area;
and when the gray value of the pixel is larger than a preset gray threshold value, determining that the pixel is a second pixel belonging to the background area.
In some embodiments, the foreground region obtained for a target object may have defects, for example because of local brightness variations of the target object during acquisition, preprocessing, or binarization of the image to be processed. Taking a transparent living cell as the target object as an example, if the substrate of the sample is not uniform, the illumination light changes, or the preprocessing or binarization is imperfect, the living cell may not appear as a completely filled circle or ellipse, but instead as a "C" shape with a gap or an "O" shape with a dark middle and a bright edge, which can adversely affect the subsequent determination of the target object parameters. Therefore, as shown in fig. 6, in order to better determine the foreground regions, step S200 may further include:
and step S240, correcting the defect of at least one foreground area in the one or more foreground areas.
Specifically, a defect of a foreground region can be corrected by means such as gradient correction, or dilation and erosion of the foreground region.
In some embodiments, when gradient correction is employed, it may be determined whether a pixel should be included in the foreground region based on a gray value distribution of pixels within a certain range around the pixel. For example, correcting the defect of at least one of the one or more foreground regions may include: for at least one second pixel in the image to be processed, comparing the gray value of the second pixel with the average gray value of a preset number of pixels around the second pixel; and determining whether to correct the second pixel into the first pixel according to the comparison result of the gray value of the second pixel and the average gray value. Specifically, determining whether to correct the second pixel to the first pixel according to the result of comparing the gray value of the second pixel with the average gray value may include: and when the gray value of the second pixel is less than or equal to the average gray value, correcting the second pixel into the first pixel. Alternatively, in some other embodiments, the opposite determination criterion may be adopted, that is, when the gray-scale value of the second pixel is greater than the average gray-scale value, the second pixel is modified to be the first pixel. The O-shaped defects can be well corrected by adopting a gradient correction mode. It should be noted that the gray-level value here may refer to the original gray-level value of the pixel, and may include 256 integer values from 0 to 255, for example, or may refer to the gray-level value obtained after the binarization processing, and may include only two values, i.e., 0 and 1, for example.
In a specific example, when the gray value of a certain second pixel is 0 and the average gray value of eight pixels surrounding the pixel in the nearest neighborhood is 1, it can be considered that the second pixel has a defect, which should be corrected to the first pixel to be consistent with the surrounding pixels. When the gray scale value of a certain second pixel is 0 and the average gray scale value of eight pixels surrounding the pixel is also 0, it can be considered that the second pixel does not need to be corrected.
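A sketch of this gradient correction is given below. Two points are assumptions rather than statements of the patent: only background pixels that touch a foreground pixel are examined, and, following the worked example above, a pixel whose neighborhood is no brighter than itself is left unchanged.

```python
import numpy as np

def gradient_correct(gray: np.ndarray, binary: np.ndarray) -> np.ndarray:
    """Fill 'O'-shaped defects: a second (background) pixel whose gray value does
    not exceed the average gray value of its eight surrounding pixels is
    re-labeled as a first (foreground) pixel."""
    corrected = binary.copy()
    height, width = gray.shape
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            if binary[y, x] != 0:
                continue                                   # only second (background) pixels are candidates
            labels = binary[y - 1:y + 2, x - 1:x + 2]
            if labels.sum() == 0:
                continue                                   # far from any foreground region, leave it alone
            window = gray[y - 1:y + 2, x - 1:x + 2].astype(np.float32)
            neighbor_avg = (window.sum() - float(gray[y, x])) / 8.0
            if float(gray[y, x]) <= neighbor_avg:          # dark core surrounded by a brighter rim
                corrected[y, x] = 1
    return corrected
```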
In some embodiments, the foreground region may also be corrected according to the known shape of the target object. For example, when the target object is a living cell, the edge of the foreground region should be a circular arc or an elliptical arc, and correcting the defect of at least one foreground region of the one or more foreground regions may include: determining, for at least one foreground region of the one or more foreground regions, whether the edge of the foreground region coincides with a preset shape edge; when the edge of the foreground region does not coincide with the preset shape edge, performing dilation processing on the foreground region to correct its edge; and performing erosion processing on the dilated foreground region so that the area of the corrected foreground region is consistent with the area of the foreground region before correction. Dilation enlarges the highlighted (white) part of the image, so the result is larger than the highlighted region of the original image; erosion shrinks and thins the highlighted (white) part, so the result is smaller than the highlighted region of the original image. After the dilation and erosion, a gap at the edge of the foreground region, such as the gap of a "C" shape, can be filled while the area of the region is kept the same as before the processing.
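A minimal sketch of this dilation-then-erosion correction follows; the 3x3 elliptical kernel and the single iteration are assumptions. With equal iterations the operation amounts to a morphological closing, which roughly preserves the region area.

```python
import cv2
import numpy as np

def close_edge_gaps(binary: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Correct 'C'-shaped edge defects: dilate the foreground to close the gap,
    then erode by the same amount so the corrected region keeps roughly the
    same area as before correction."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    dilated = cv2.dilate(binary.astype(np.uint8), kernel, iterations=1)  # enlarge the white part
    return cv2.erode(dilated, kernel, iterations=1)                      # shrink it back
```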
In some embodiments of the disclosure, the respective foreground region may be represented by coordinates of pixels located on an edge of the foreground region, e.g. multiple foreground regions may be stored in a respective list, each entry in the list may represent one foreground region, and a single entry comprises coordinates of all pixels located on an edge of the respective foreground region. In subsequent processing, the target objects and target object parameters that may be contained in each foreground region may be analyzed item by item based on the list.
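One way to build such an edge-coordinate list is sketched below; the OpenCV contour call (with the OpenCV 4.x return signature) is an assumption of this illustration.

```python
import cv2
import numpy as np

def foreground_edge_list(binary: np.ndarray):
    """Represent each foreground region by the coordinates of the pixels on its
    edge, one list entry per region, for item-by-item analysis."""
    contours, _ = cv2.findContours(binary.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return [contour.reshape(-1, 2) for contour in contours]  # each entry: (x, y) of every edge pixel
```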
Returning to fig. 1, the image processing method may further include:
step S300, determining whether the foreground region is a target object image for each foreground region of the one or more foreground regions, and determining a target object parameter according to the foreground region when the foreground region is the target object image.
It should be noted that, when analyzing a foreground region, it is necessary to determine whether the foreground region corresponds to a target object, and generally, it is also necessary to consider a situation that the foreground region includes a single target object and multiple target objects. For example, when a living cell is a target object, each foreground region may correspond to only a single living cell when the concentration of the living cell in the sample is low, and at least a part of the foreground region may correspond to a plurality of living cells when the concentration of the living cell in the sample is high.
In some embodiments of the present disclosure, as shown in fig. 7, step S300 may include:
step S310, determining whether the foreground area is a target object image according to the edge of the foreground area;
step S320, when the foreground area is the target object image, judging whether the target object contained in the foreground area is in an agglomeration state;
step S331, when the target objects included in the foreground area are in an aggregation state, separating the plurality of target objects in the aggregation state according to a preset algorithm, and determining a target object parameter of each target object in the plurality of target objects;
step S332, when the target objects included in the foreground region are not in the aggregation state, determining target object parameters of the target objects.
In some embodiments, whether a foreground region is a target object image, and whether it contains a single target object or an agglomeration of multiple target objects, may be determined based on the shape and size of the foreground region.
For example, determining whether the foreground region is the target object image according to the edge of the foreground region may include:
determining an envelope rectangle which can contain the foreground region and has the minimum size according to the edge of the foreground region;
comparing the width of the envelope rectangle with a first preset threshold;
when the width of the envelope rectangle is larger than or equal to a first preset threshold value, determining that the foreground area is a target object image;
and when the width of the envelope rectangle is smaller than a first preset threshold value, determining that the foreground area is not the target object image.
It should be noted that the shorter side of the enveloping rectangle is referred to herein as its width and the longer side is referred to herein as its length. In a specific example, for example, when the target object is a living cell, the first preset threshold may be set to be slightly smaller than or equal to the diameter of the living cell, and then, if the width of the envelope rectangle is smaller than the diameter of the living cell, it may be determined that the foreground region may not be the target object image, otherwise, the foreground region may be considered to correspond to the target object image.
Similarly, when the foreground region is the target object image, it may also be determined whether the target object contained in the foreground region is in an aggregation state according to the size of the envelope rectangle, that is, step S320 may include:
respectively comparing the width of the envelope rectangle with a second preset threshold value and comparing the length of the envelope rectangle with the second preset threshold value;
when the width and the length of the envelope rectangle are both smaller than or equal to a second preset threshold value, determining that the target objects contained in the foreground area are not in an agglomeration state;
when at least one of the width and the length of the envelope rectangle is larger than a second preset threshold value, determining that the target objects contained in the foreground area are in an agglomeration state;
wherein the second predetermined threshold is greater than or equal to the first predetermined threshold, for example, the second predetermined threshold may be greater than or equal to the diameter of a single living cell.
In a specific example, if at least one of the width and the length of the envelope rectangle is greater than the diameter of a single live cell, it may be determined that two or more live cells are present in the foreground region; if the width and length of the enveloping rectangle are slightly less than or equal to the diameter of a single living cell, then the foreground region can be considered to contain only a single living cell.
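The decision logic of steps S310 and S320 can be sketched as follows. The function name, the use of OpenCV's minimum-area rectangle, and the way the thresholds are passed in are assumptions; the thresholds themselves would be tied to the expected cell diameter as described above.

```python
import cv2
import numpy as np

def classify_region(edge_pixels: np.ndarray, first_threshold: float, second_threshold: float) -> str:
    """Use the minimum-size envelope rectangle of a foreground region to decide
    whether it is a target object image and whether it is an aggregation."""
    _, (w, h), _ = cv2.minAreaRect(edge_pixels.astype(np.float32))
    width, length = min(w, h), max(w, h)      # shorter side is the width, longer side the length
    if width < first_threshold:
        return "not a target object image"    # too narrow to be a target object
    if width <= second_threshold and length <= second_threshold:
        return "single target object"
    return "aggregated target objects"        # to be separated, e.g. by Hough circle recognition
```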
In some embodiments, when the target objects contained in the foreground region are in an aggregated state, the target objects may be separated according to a preset algorithm so that the target object parameters of each target object can be determined separately. Such algorithms include, for example, the Hough circle recognition algorithm, the watershed algorithm, hot spot detection algorithms, support vector machine (SVM) algorithms, and neural-network-based algorithms such as u-net.
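As one concrete example of such separation, the sketch below applies OpenCV's Hough circle transform to the region; all parameter values are assumptions to be tuned for real data, and every detected circle is treated as one target object.

```python
import cv2
import numpy as np

def separate_aggregated_cells(gray: np.ndarray, expected_diameter: int):
    """Separate aggregated target objects with the Hough circle transform and
    return (center x, center y, diameter) for each detected object."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=int(0.8 * expected_diameter),
                               param1=100, param2=20,
                               minRadius=int(0.3 * expected_diameter),
                               maxRadius=int(0.7 * expected_diameter))
    if circles is None:
        return []
    return [(float(x), float(y), 2.0 * float(r)) for x, y, r in circles[0]]
```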
In analyzing the target object image, the target object parameter may include at least one of a target object count, a target object size, and a target object position.
Specifically, in the process of determining the target object count, the number of target objects included in each foreground region in the image to be processed may be accumulated, thereby obtaining the total number of target objects in the image to be processed.
When determining the size and/or position of the target object, a circle corresponding to each target object may be determined based on the hough circle recognition algorithm, and a diameter of the circle may be determined as the size of the target object, and a center position of the circle may be determined as the position of the target object.
It is to be noted that in some cases, even if only a part of the target object is in the image to be processed (for example, only half of the target object is in the edge region of the image to be processed), the target object size and the target object position may be derived from the foreground region based on the hough circle recognition algorithm.
In addition, when determining the position of the target object from the position of the center of the hough circle, attention needs to be paid to distinguishing the center position of the target object itself from the center position of the aggregation of a plurality of target objects. For example, the error can be avoided by applying a constraint to the center position of the hough circle so that the circle must fall in the initially determined foreground region.
In addition, when determining the size of the target object, the maximum distance between two darkest points of the foreground region may be found in a plurality of directions, so as to obtain the diameters of the foreground region in each direction, and the average value of the diameters in the plurality of directions may be calculated, so as to obtain the size of the target object.
In some embodiments of the present disclosure, the image processing method may further include:
after determining the target object parameters for all foreground regions in the image to be processed, the target object parameters are displayed in the form of at least one of an annotation graph and an annotation list.
Specifically, the determined target object position and target object size may be displayed in an annotation map to intuitively display the analysis result of the image to be processed. Alternatively, the determined target object parameters may be displayed in a list, or the determined target object parameters may be further summarized in the list, so as to facilitate analysis of the sample by the user.
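A small sketch of such output follows; the drawing style and the assumed (center x, center y, diameter) detection format are illustrative choices, not requirements of the patent.

```python
import cv2
import numpy as np

def annotate_results(image_bgr: np.ndarray, detections) -> np.ndarray:
    """Overlay the determined target object parameters on the image as an
    annotation map and print them as an annotation list."""
    annotated = image_bgr.copy()
    for index, (cx, cy, diameter) in enumerate(detections, start=1):
        center = (int(round(cx)), int(round(cy)))
        cv2.circle(annotated, center, int(round(diameter / 2)), (0, 255, 0), 1)  # mark position and size
        cv2.putText(annotated, str(index), center, cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1)
        print(f"{index}: position=({cx:.1f}, {cy:.1f}), size={diameter:.1f}")     # annotation list entry
    print(f"target object count: {len(detections)}")
    return annotated
```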
The present disclosure also proposes an image processing apparatus, as shown in fig. 8, the image processing apparatus 800 may comprise a processor 810 and a memory 820, the memory 820 having stored thereon instructions that, when executed by the processor 810, may implement the steps in the image processing method as described above.
Processor 810 may perform various actions and processes in accordance with instructions stored in memory 820. In particular, processor 810 may be an integrated circuit chip having signal processing capabilities. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. The general purpose processor may be a microprocessor, or the processor may be any conventional processor, of the X86 architecture, the ARM architecture, or the like.
The memory 820 stores executable instructions that, when executed by the processor 810, perform the image processing methods described above. The memory 820 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), synchronous Dynamic Random Access Memory (SDRAM), double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), enhanced Synchronous Dynamic Random Access Memory (ESDRAM), synchronous Link Dynamic Random Access Memory (SLDRAM), and direct memory bus random access memory (DR RAM). It should be noted that the memories of the methods described herein are intended to comprise, without being limited to, these and any other suitable types of memory.
The present disclosure also proposes an optical detection apparatus, which may comprise a light source apparatus, a sample stage, an imaging apparatus and an image processing apparatus as described above. The optical detection device can be used for online and in-situ detection of biological particles such as cells without dyeing or the like of a transparent target object.
The present disclosure also proposes a computer-readable storage medium having stored thereon instructions which, when executed, implement the steps of the image processing method as described above. Similarly, computer-readable storage media in embodiments of the disclosure may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. It should be noted that the computer-readable storage media described herein are intended to comprise, without being limited to, these and any other suitable types of memory.
The present disclosure also proposes a computer program product which may comprise instructions which, when executed by a processor, may implement the steps of the image processing method as described above.
The instructions may be any set of instructions to be executed directly by one or more processors, such as machine code, or indirectly, such as scripts. The terms "instructions," "applications," "processes," "steps," and "programs" herein may be used interchangeably. The instructions may be stored in an object code format for direct processing by one or more processors, or in any other computer language, including scripts or collections of independent source code modules that are interpreted or compiled in advance, as needed. The instructions may include instructions that cause, for example, one or more processors to function as the neural networks herein. The functions, methods, and routines of the instructions are explained in more detail elsewhere herein.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
The terms "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.
As used herein, the word "exemplary" means "serving as an example, instance, or illustration," and not as a "model" that is to be reproduced exactly. Any implementation exemplarily described herein is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, the disclosure is not limited by any expressed or implied theory presented in the preceding technical field, background, brief summary or the detailed description.
As used herein, the term "substantially" is intended to encompass any minor variations due to design or manufacturing imperfections, tolerances of the devices or components, environmental influences and/or other factors. The word "substantially" also allows for differences from a perfect or ideal situation due to parasitic effects, noise, and other practical considerations that may exist in a practical implementation.
The above description may indicate elements or nodes or features being "connected" or "coupled" together. As used herein, unless expressly stated otherwise, "connected" means that one element/node/feature is directly connected to (or directly communicates with) another element/node/feature, either electrically, mechanically, logically, or otherwise. Similarly, unless expressly stated otherwise, "coupled" means that one element/node/feature may be mechanically, electrically, logically, or otherwise joined to another element/node/feature in a direct or indirect manner to allow for interaction, even though the two features may not be directly connected. That is, "coupled" is intended to include both direct and indirect joining of elements or other features, including connections utilizing one or more intermediate elements.
It will be further understood that the terms "comprises/comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Those skilled in the art will appreciate that the boundaries between the above-described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed among additional operations, and operations may be performed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. However, other modifications, variations, and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. The various embodiments disclosed herein may be combined in any combination without departing from the spirit and scope of the present disclosure. It will also be appreciated by those skilled in the art that various modifications may be made to the embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. An image processing method, characterized in that the image processing method comprises:
acquiring an image to be processed, wherein the image to be processed comprises a foreground region and a background region having different brightness;
carrying out binarization processing on the image to be processed, and determining one or more foreground regions in the image to be processed according to a binarization processing result; and
for each foreground region of the one or more foreground regions, determining whether the foreground region is a target object image, and, in a case where the foreground region is the target object image, determining a target object parameter according to the foreground region.
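By way of illustration only, the flow recited in claim 1 — acquire an image to be processed, binarize it, group foreground pixels into regions, and evaluate each region — can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the claimed implementation: the default gray threshold of 128, the area window used as the target-object test, and the reported parameters (area and centroid) are illustrative choices introduced here and are not features of the claims.

```python
# Illustrative sketch of the claim 1 flow; the threshold, the area window,
# and the reported parameters are assumptions, not part of the claims.
import numpy as np
from scipy import ndimage


def process_image(image: np.ndarray, gray_threshold: int = 128,
                  min_area: int = 50, max_area: int = 5000) -> list:
    """Return one parameter record per foreground region judged to be a target object."""
    # Binarization: pixels brighter than the threshold become foreground.
    foreground_mask = image > gray_threshold

    # Group continuously distributed foreground pixels into regions; different
    # regions are separated from one another by background pixels.
    labels, num_regions = ndimage.label(foreground_mask)

    results = []
    for region_id in range(1, num_regions + 1):
        region_mask = labels == region_id
        area = int(region_mask.sum())
        # Hypothetical target-object test: keep regions whose area falls in a window.
        if min_area <= area <= max_area:
            ys, xs = np.nonzero(region_mask)
            results.append({
                "area": area,                                      # example target object parameter
                "centroid": (float(ys.mean()), float(xs.mean())),  # example target object parameter
            })
    return results
```

Calling process_image(gray_frame) on an 8-bit grayscale frame would then yield one such record per foreground region accepted as a target object image.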
2. The image processing method according to claim 1, wherein the image to be processed is obtained by photographing a sample capable of containing the target object under a combination of bright field illumination and dark field illumination.
3. The image processing method according to claim 2, wherein the target object in the sample has a refractive index different from that of other portions of the sample, the target object is transparent, and the target object is not stained.
4. The image processing method according to claim 1, wherein performing binarization processing on the image to be processed, and determining one or more foreground regions in the image to be processed according to a result of the binarization processing comprises:
respectively comparing the gray value of each pixel in the image to be processed with a preset gray threshold value;
determining, according to a comparison result of the gray value of the pixel with the preset gray threshold value, whether the pixel is a first pixel belonging to a foreground region or a second pixel belonging to a background region; and
dividing continuously distributed first pixels into the same foreground region;
wherein different foreground regions in the image to be processed are separated by background regions.
5. The image processing method according to claim 4, wherein performing binarization processing on the image to be processed and determining one or more foreground regions in the image to be processed according to a result of the binarization processing further comprises:
determining the preset gray threshold corresponding to the image to be processed according to a brightness distribution of the image to be processed.
6. The image processing method according to claim 4, wherein determining whether the pixel is a first pixel belonging to a foreground region or a second pixel belonging to a background region according to the comparison result of the gray value of the pixel with the preset gray threshold value comprises:
when the gray value of the pixel is greater than the preset gray threshold value, determining that the pixel is a first pixel belonging to a foreground region; and
when the gray value of the pixel is less than or equal to the preset gray threshold value, determining that the pixel is a second pixel belonging to a background region.
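Claims 4 to 6 pin down the binarization step: each pixel's gray value is compared with a preset gray threshold (greater than the threshold marks a foreground pixel, otherwise a background pixel), and claim 5 derives that threshold from the brightness distribution of the image to be processed. The sketch below is one hedged reading of these claims; Otsu's between-class-variance criterion is used only as an assumed example of a threshold derived from the brightness distribution, since the claims do not name a particular method.

```python
# Sketch of claims 4-6; Otsu's criterion stands in for the unspecified
# "threshold from the brightness distribution" of claim 5 (an assumption).
import numpy as np


def threshold_from_brightness_distribution(image: np.ndarray) -> int:
    """Derive a gray threshold from the 8-bit image histogram (Otsu-style)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = int(hist.sum())
    cum_count = np.cumsum(hist)                    # number of pixels with gray value <= t
    cum_sum = np.cumsum(hist * np.arange(256))     # sum of gray values <= t
    best_t, best_var = 0, -1.0
    for t in range(255):                           # candidate thresholds; foreground = gray value > t
        w0, w1 = int(cum_count[t]), total - int(cum_count[t])
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t] / w0                      # mean gray value of the background class
        mu1 = (cum_sum[-1] - cum_sum[t]) / w1      # mean gray value of the foreground class
        between_var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (up to a constant factor)
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t


def classify_pixels(image: np.ndarray, gray_threshold: int) -> np.ndarray:
    """Claim 6 rule: True marks a first (foreground) pixel, False a second (background) pixel."""
    return image > gray_threshold
```

The boolean mask returned by classify_pixels corresponds to the foreground_mask in the claim 1 sketch above, so the two snippets compose directly.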
7. An image processing apparatus, characterized in that it comprises a processor and a memory, said memory having stored thereon instructions which, when executed by said processor, carry out the steps of the image processing method according to any one of claims 1 to 6.
8. An optical inspection device, the optical inspection device comprising:
a light source apparatus, the light source apparatus comprising:
an illumination light source configured to generate illumination light; and
a diaphragm located in an exit light path of the illumination light source, the diaphragm comprising:
a light-shielding screen configured to shield a portion of the illumination light;
a first light-transmitting portion formed in the light-shielding screen and covering a center of the diaphragm, the first light-transmitting portion being configured to transmit a portion of the illumination light to form bright field illumination on the sample; and
a second light-transmitting portion formed in the light-shielding screen and located at a periphery of the first light-transmitting portion, the second light-transmitting portion being configured to transmit a portion of the illumination light to form dark field illumination on the sample;
a sample stage configured to carry the sample;
an imaging device configured to image the sample under illumination by the light source apparatus to produce an image to be processed, the imaging device comprising an objective lens; and
the image processing apparatus according to claim 7.
9. A computer-readable storage medium having stored thereon instructions which, when executed, implement the steps of the image processing method according to any one of claims 1 to 6.
10. A computer program product, characterized in that it comprises instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 6.
CN202110547043.1A 2021-05-19 2021-05-19 Image processing method, image processing apparatus, and optical detection device Pending CN115393371A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110547043.1A CN115393371A (en) 2021-05-19 2021-05-19 Image processing method, image processing apparatus, and optical detection device
PCT/CN2021/099632 WO2022241879A1 (en) 2021-05-19 2021-06-11 Image processing method, image processing device, and optical detection apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110547043.1A CN115393371A (en) 2021-05-19 2021-05-19 Image processing method, image processing apparatus, and optical detection device

Publications (1)

Publication Number Publication Date
CN115393371A true CN115393371A (en) 2022-11-25

Family

ID=84114580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110547043.1A Pending CN115393371A (en) 2021-05-19 2021-05-19 Image processing method, image processing apparatus, and optical detection device

Country Status (2)

Country Link
CN (1) CN115393371A (en)
WO (1) WO2022241879A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152088B (en) * 2023-09-01 2024-02-06 北京奥乘智能技术有限公司 Method, device, equipment and storage medium for detecting seal of medicine package

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4925036B2 (en) * 2006-05-24 2012-04-25 独立行政法人科学技術振興機構 Dark field microscope and adjustment method thereof
CN102682305B (en) * 2012-04-25 2014-07-02 深圳市迈科龙医疗设备有限公司 Automatic screening system and automatic screening method using thin-prep cytology test
CN105403988A (en) * 2015-09-29 2016-03-16 南京理工大学 Programmable aperture microscope system based on LCD liquid crystal panel and multi-mode imaging method thereof
CN109461186A (en) * 2018-10-15 2019-03-12 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN110070547A (en) * 2019-04-18 2019-07-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2022241879A1 (en) 2022-11-24

Similar Documents

Publication Publication Date Title
US11457138B2 (en) Method and device for image processing, method for training object detection model
TWI805857B (en) System and method for characterizing a specimen
US9778206B2 (en) Defect inspection device and defect inspection method
US9305343B2 (en) Observation device and observation method
JP6617143B2 (en) Defect detection system and method using structure information
JP2008539763A (en) Image analysis method based on chromogen separation
US20190204292A1 (en) Image Processing Device, Image Processing Method, and Recording Medium
CN115032196B (en) Full-scribing high-flux color pathological imaging analysis instrument and method
US20220222822A1 (en) Microscopy System and Method for Evaluating Image Processing Results
US11769236B2 (en) Microscopy system and method for generating an HDR image
TW202041850A (en) Image noise reduction using stacked denoising auto-encoder
CN108037142A (en) Mask plate optical defect detection method based on image intensity value
KR102538268B1 (en) Detection of defects in the logic area on the wafer
CN115393371A (en) Image processing method, image processing apparatus, and optical detection device
CN114723671B (en) Image preprocessing method and device, computer equipment and storage medium
CN115047610A (en) Chromosome karyotype analysis device and method for automatically fitting microscopic focusing plane
Choi et al. Deep learning based defect inspection using the intersection over minimum between search and abnormal regions
CN115176289A (en) Cell line development image characterization using convolutional neural networks
CN116596899A (en) Method, device, terminal and medium for identifying circulating tumor cells based on fluorescence image
IL233523A (en) System and method for quantifying reflection e.g. when analyzing laminated documents
CN114694143B (en) Cell image recognition method and device based on optical means
CA3233549A1 (en) Systems and methods for image processing
JP2020170702A (en) Automatic generation of labeled image from primary microscope detector using image from secondary microscope detector
CN112771568A (en) Infrared image processing method, device, movable platform and computer readable medium
CN115830431B (en) Neural network image preprocessing method based on light intensity analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination