CN116416188A - Image processing method, device, flat panel detector, equipment and storage medium - Google Patents

Image processing method, device, flat panel detector, equipment and storage medium

Info

Publication number
CN116416188A
CN116416188A (application number CN202111662769.6A)
Authority
CN
China
Prior art keywords
pixel
image
flat panel
panel detector
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111662769.6A
Other languages
Chinese (zh)
Inventor
赵镇乾
徐帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Beijing BOE Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd, Beijing BOE Optoelectronics Technology Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202111662769.6A priority Critical patent/CN116416188A/en
Publication of CN116416188A publication Critical patent/CN116416188A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P70/00Climate change mitigation technologies in the production process for final industrial or consumer products
    • Y02P70/50Manufacturing or production processes characterised by the final manufactured product

Abstract

The invention discloses an image processing method and apparatus, a flat panel detector, a computer device, and a storage medium. The method of one embodiment is applied to a flat panel detector that includes a second pixel which collects light, and a first pixel and a third pixel located on either side of the second pixel in the column direction, the first and third pixels not collecting light. The method comprises: determining a first reference value from a first reference image, the first reference image being generated by the flat panel detector based on a first scan time; acquiring a second reference image generated within a second scan time while the flat panel detector is in an automatic exposure detection mode, and determining a second reference value from the second reference image; and compensating, according to the first reference value and the second reference value, an image to be processed generated by the flat panel detector, the image to be processed containing a boundary line extending along the row direction. The method of the embodiment can eliminate the boundary line in the image to be processed.

Description

Image processing method, device, flat panel detector, equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and more particularly to an image processing method and apparatus, a flat panel detector, a computer device, and a storage medium.
Background
Automatic exposure detection (AED) technology is widely used in X-ray detection systems, especially flat panel detectors (Flat Panel Detector, FPD). In an FPD, a scintillator converts X-rays into visible light, photodiodes convert the optical signal into an electrical signal, and a matrix of thin film transistors reads the electrical signals out in sequence to a signal processing module, which performs digital conversion, compensation and correction and finally outputs a gray-scale image. The AED module detects whether exposure is occurring and controls the FPD to enter the signal read-out state. Its working principle is to set a threshold on the signal quantity: when the signal accumulated by the FPD exceeds the preset threshold, X-ray exposure is judged to have started and charge integration begins; after the preset integration time has elapsed, the Gate signal turns on the thin film transistors row by row to complete the signal output.
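For illustration only, the AED principle described above can be sketched as follows. The callbacks sample_signal and read_row, the threshold, and the row count are assumptions standing in for detector hardware and are not part of the patent's implementation:

```python
import time
import numpy as np

def aed_readout_cycle(sample_signal, read_row, threshold, integration_time_s, num_rows=3072):
    """Toy sketch of the AED principle: threshold the accumulated signal to detect
    exposure, wait the preset integration time, then read out all rows in order.
    `sample_signal` and `read_row` are hypothetical callbacks, not patent APIs."""
    accumulated = 0.0
    while accumulated < threshold:        # reset phase: keep scanning until the
        accumulated += sample_signal()    # accumulated signal exceeds the threshold
    time.sleep(integration_time_s)        # preset integration after exposure is detected
    # Image-capture phase: turn on the Gate rows one by one and read them out.
    return np.stack([read_row(r) for r in range(num_rows)])
```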
A problem with the existing AED technique is that the final image obtained by the flat panel detector contains a distinct boundary line. This makes the image information inaccurate and affects the analysis and judgment of the patient's lesion by the physician at the end hospital.
Disclosure of Invention
The present invention is directed to an image processing method, apparatus, flat panel detector, device, and storage medium that solve at least one of the problems of the prior art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the first aspect of the present invention provides an image processing method, which is applied to a flat panel detector, wherein the flat panel detector includes a second pixel for lighting, and a first pixel and a third pixel respectively located at two sides of the second pixel in the column direction, and the first pixel and the third pixel do not light;
the method comprises the following steps:
determining a first reference value from a first reference image, the first reference image being generated by the flat panel detector based on a first scan time;
acquiring a second reference image generated in a second scanning time when the flat panel detector is in an automatic exposure detection mode, and determining a second reference value according to the second reference image;
and compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed includes a boundary line extending along the row direction.
Further, the determining of the first reference value according to the first reference image includes:
determining a first average gray value of the first pixel and a second average gray value of the third pixel in the first reference image;
and obtaining the first reference value from the first average gray value and the second average gray value.
Further, when the flat panel detector is in the automatic exposure detection mode, the second reference image forms the boundary line at the position of the second pixel row at which exposure occurs;
the acquiring of the second reference image generated within the second scan time while the flat panel detector is in the automatic exposure detection mode, and the determining of the second reference value from the second reference image, include:
taking the boundary line of the second reference image as a partition row, and obtaining a third average gray value of a first integration region on the first-pixel side of the partition row and a fourth average gray value of a second integration region on the third-pixel side of the partition row, wherein the first integration region extends from the first pixel located in the first row to the second pixel row corresponding to the partition row, the second integration region extends from the third pixel located in the last row to the second pixel row adjacent to the partition row in the column direction, and the first integration region and the second integration region do not overlap;
and determining the second reference value from the third average gray value and the fourth average gray value.
Further, the compensating of the image to be processed generated by the flat panel detector according to the first reference value and the second reference value includes:
determining a gray compensation value from the first reference value and the second reference value;
and compensating the image to be processed according to the gray compensation value.
Further, when the flat panel detector is in the automatic exposure detection mode, the image to be processed forms the boundary line at the position of the second pixel row at which exposure occurs;
the compensating of the image to be processed according to the gray compensation value includes:
taking the boundary line of the image to be processed as a partition row, determining a first integration region and a second integration region of the image to be processed, and determining the second pixels to be compensated in the first integration region or the second integration region of the image to be processed;
and compensating the original gray value of the second pixels to be compensated according to the gray compensation value, wherein the original gray value is the gray value of the second pixels to be compensated after the preset integration time following exposure.
Further, before the determining of the first reference value according to the first reference image, the method further includes:
performing correction processing on the flat panel detector, wherein the correction processing includes gain correction and dark field correction.
A second aspect of the present invention provides an image processing apparatus for performing the method of the first aspect of the present invention, the apparatus comprising:
a first reference value determining module, configured to generate a first reference image based on a first scan time and determine a first reference value according to the first reference image;
a second reference value determining module, configured to acquire a second reference image generated within a second scan time while the flat panel detector is in the automatic exposure detection mode, and determine a second reference value according to the second reference image;
and an image-to-be-processed compensation module, configured to compensate the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed includes a boundary line extending along the row direction.
A third aspect of the present invention provides a flat panel detector comprising the image processing apparatus of the second aspect of the present invention.
Further, the flat panel detector further includes:
a second pixel that collects light;
a first pixel and a third pixel respectively located on either side of the second pixel in the column direction, the first pixel and the third pixel not collecting light;
wherein the first pixel and the third pixel each include a photodiode and a driving thin film transistor that drives the photodiode;
the first pixels occupy at least one row, and the third pixels occupy at least one row.
Further, the first pixel and the third pixel each further include a light shielding layer whose projection covers the photodiode and the driving thin film transistor.
A fourth aspect of the invention provides a flat panel detector comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of the first aspect of the invention when executing the program.
A fifth aspect of the invention provides a computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements a method according to the first aspect of the invention.
The beneficial effects of the invention are as follows:
according to the technical scheme provided by the embodiment of the invention, the pixel structure of the flat panel detector is designed, the gray value of the photodiode under the influence of leakage current only in the scanning process is obtained through the first reference image and is used as the first reference value, the variation of the second reference image generated on the basis of the first reference value in the AED mode is used as the second reference value, the image to be processed is compensated according to the first reference value and the second reference value, the boundary line in the image to be processed can be eliminated, the method of the embodiment is not limited by environmental change, the obtained image information has higher precision, and the detection accuracy is improved.
Drawings
The following describes the embodiments of the present invention in further detail with reference to the drawings.
FIG. 1a shows a scanning timing diagram of a flat panel detector in a non-AED detection mode;
FIG. 1b shows a scanning timing diagram of a flat panel detector in the AED detection mode;
FIG. 2 shows a schematic diagram of an image with a boundary line generated by a flat panel detector in the AED detection mode;
FIG. 3 shows a flow diagram of an image processing method according to one embodiment of the invention;
FIG. 4 shows a schematic diagram of a pixel arrangement of a flat panel detector according to an alternative embodiment of the present invention;
FIG. 5 is a schematic flow chart of step S2 in FIG. 3 according to an alternative embodiment of the present invention;
FIG. 6 is a schematic flow chart of step S3 in FIG. 3 according to an alternative embodiment of the present invention;
FIG. 7 is a schematic flow chart of step S4 in FIG. 3 according to an alternative embodiment of the present invention;
FIG. 8 is a schematic flow chart of step S42 in FIG. 7 according to an alternative embodiment of the present invention;
FIG. 9 is a schematic diagram showing the frame structure of an image processing apparatus according to another embodiment of the present invention;
FIG. 10 is a schematic view showing the layer structure of a first pixel or a third pixel according to an alternative embodiment of the present invention;
FIG. 11 shows a schematic structural diagram of a computer device according to another embodiment of the present invention.
Detailed Description
It is further noted that, in the description of the present invention, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the prior art, as shown in fig. 1a, when the flat panel detector performs image detection without using the AED technique, the Gate signal scans rows 1 to 3072 within one scan period in the image capture mode; this period is the first scan time.
The process of a flat panel detector using the AED detection technique is as follows. As shown in fig. 1b, while the FPD is not outputting signals, i.e., in the reset stage before the image capture mode, the FPD is continuously scanned by the Gate signal to remove residual charge and reset. As long as no exposure occurs, the Gate signal starts scanning from row 1 at a fixed timing. When X-ray exposure occurs, the row being scanned at that moment is not fixed; the AED module determines that exposure has occurred, the Gate reset operation is terminated at the current row, e.g., the nth row shown in fig. 1b, and the Gate signal is not passed further down, because continuing the reset downward would clear the photoelectric signal that has already been acquired and the signal would be lost.
As shown in fig. 1b, when the AED module determines that exposure has occurred (vertical gray bar), the Gate reset action is terminated at the nth row currently being scanned. After a preset integration time, for example 1 second, the flat panel detector enters the image capture mode, scans all Gate lines from row 1 to row 3072, and outputs the signals acquired by the photodiodes row by row, thereby forming a detection image as shown in fig. 2, on which a distinct boundary line is formed.
Although the prior art can reduce the conspicuousness of the boundary line by pre-correcting it with a boundary-line template, the boundary line cannot be removed. Through extensive experiments and study, the inventors found that the main reason the boundary line persists under the template scheme is that the environmental state at the time the template was built, chiefly temperature and humidity, cannot be reproduced in actual use; temperature and humidity changes strongly affect the characteristics of the semiconductor devices, so the actual boundary state of the image falls far outside the coverage of the pre-correction template.
The inventors further studied the cause of the boundary line and found the following. The photodiodes continuously generate leakage current during operation, which affects the gray scale of the image. During the row-by-row scanning of the Gate signal in the image capture mode, taking as the dividing point the Gate termination row at which the Gate signal was interrupted at the moment of exposure (for example, the nth row in fig. 1b), the leakage-current accumulation time before the Gate termination row differs from that after it: rows 1 to n complete their reset through the Gate scan before entering the image capture mode, whereas rows n+1 to 3072 enter the image capture mode without having completed the reset. The leakage current accumulated over the integration time of rows 1 to n therefore differs from that accumulated over the integration time of rows n+1 to 3072, and this difference ultimately appears in the detected image as a distinct boundary formed at the position of the nth row.
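A toy numeric model (an illustrative assumption, not the patent's data) shows how two different accumulation times produce a gray-level step at the Gate termination row:

```python
import numpy as np

# Toy model: a pixel's dark gray level is assumed to grow linearly with the
# time its leakage current has accumulated since the last reset.
rows, cols = 3072, 16
leak_rate = 0.5      # gray levels per unit time (assumed)
t1, t2 = 1.0, 2.0    # accumulation times above / below the Gate termination row (assumed)
n = 1500             # Gate termination row index at the moment of exposure (assumed)

dark = np.empty((rows, cols))
dark[:n, :] = leak_rate * t1   # rows 1..n were reset by the Gate scan before exposure
dark[n:, :] = leak_rate * t2   # rows n+1..3072 were not reset before exposure

# The mismatch appears as a gray-level step at row n, i.e. the boundary line.
print(dark[n, 0] - dark[n - 1, 0])   # prints 0.5
```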
Accordingly, based on the above-described studies and findings, the present invention proposes an image processing method, apparatus, flat panel detector, computer device, and storage medium to solve the above-described problems.
As shown in fig. 3, a first embodiment of the present invention provides an image processing method applied to the flat panel detector shown in fig. 4. The flat panel detector includes a second pixel 52 that collects light, and a first pixel 51 and a third pixel 53 respectively located on either side of the second pixel 52 in the column direction; the first pixel 51 and the third pixel 53 do not collect light. That is, in this embodiment the first and third pixels do not perform photoelectric conversion, while the second pixel does. Illustratively, the first pixels 51 and the third pixels 53 may each occupy multiple rows and are not limited to the single rows shown in fig. 4. As shown in fig. 4, the first pixel 51, the second pixel 52, and the third pixel 53 are all connected to the same Gate signal line, the same bias line, and the same signal output line, so that apart from the light-collecting function, the functions and connection relationships of the first and third pixels are consistent with those of the second pixel.
When the flat panel detector uses this pixel arrangement, the Gate signal starts scanning from the first pixel in the first row and finishes with the third pixel in the last row; one such scan period is one working integration time.
As shown in fig. 3, the method of the embodiment of the present invention includes:
s2, determining a first reference value according to a first reference image, wherein the first reference image is generated by the flat panel detector based on a first scanning time.
S3, acquiring a second reference image generated in a second scanning time when the flat panel detector is in an automatic exposure detection mode, and determining a second reference value according to the second reference image.
And S4, compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed includes a boundary line extending along the row direction.
In the method of this embodiment, by means of the designed pixel structure of the flat panel detector, the gray value produced solely under the influence of leakage current during scanning is obtained from the first reference image and used as the first reference value; the amount by which the second reference image generated in the AED mode changes relative to that baseline is determined and used as the second reference value; and the image to be processed is compensated according to the first and second reference values. The boundary line in the image to be processed can thus be eliminated, the method is not limited by environmental changes, the obtained image information has high accuracy, and detection accuracy is improved.
A process of an embodiment of the present invention will now be exemplarily described.
S2, determining a first reference value according to a first reference image, wherein the first reference image is generated by the flat panel detector based on a first scanning time.
In this embodiment, taking the pixel arrangement shown in fig. 4 as an example, the flat panel detector scans row by row from the first pixel 51 located in the first row to the third pixel 53 located in the last row, thereby generating the first reference image. Apart from not performing photoelectric conversion, the first pixel 51 and the third pixel 53 function in the same way as the second pixels 52 in the normal light-collecting region. Because the photodiode of a first pixel performs no photoelectric conversion, during the integration time it only generates leakage current under the reverse bias voltage, which yields a gray value that affects the image to be processed; this gray value is consistent with the gray value of a second pixel in the dark state over the same integration time (a normally light-collecting second pixel likewise performs no photoelectric conversion in the dark). The first reference value of this embodiment can therefore serve as a baseline.
The second pixels 52 form the normal light-collecting region: their photodiodes not only generate leakage current but also perform photoelectric conversion. The design ensures that the first and third pixels can still be charged and discharged through the coupling capacitance, so that leakage current affects the first pixels 51, the third pixels 53, and the second pixels 52 in the same way. From the gray-value difference between the first and third pixels in the first reference image, the influence of leakage current on the image gray value at different scanning stages without exposure, for example the stage of scanning the first pixels versus the stage of scanning the third pixels, can be determined.
In an alternative embodiment, as shown in fig. 5, the step S2 further includes:
s21, determining a first average gray value of the first pixel and a second average gray value of the third pixel in the first reference image.
As shown in fig. 1b, the first scan time of this embodiment is the integration time from the first pixel in row 1 to the third pixel in row 3072 in the image capture mode; the captured image is used as the first reference image, and the first average gray value of the first pixels in that image is calculated.
In one embodiment, the first pixels occupy at least one row in the column direction; that is, the embodiment of the invention does not limit the number of first-pixel rows. Taking the overall size of the flat panel detector as the design criterion, several rows of first pixels can be provided, provided the normal function of the flat panel detector is preserved. Increasing the amount of data in this way better eliminates the uniformity differences introduced by the production process and improves the accuracy of the first average gray value Top_avg of the first pixels. For example, as shown in fig. 4, the first pixels are arranged above the second pixels and occupy one row; the gray values of the first pixels in that row are averaged to obtain the first average gray value Top_avg.
Similarly, the number of third-pixel rows is not limited in this embodiment. When there are multiple rows, the second average gray value Bottom_avg of the third pixels is obtained by averaging the gray values of the rows of the first reference image in which the third pixels are located. For example, as shown in fig. 4, the third pixels are arranged below the second pixels and occupy one row; the gray values of the third pixels in that row are averaged to obtain the second average gray value Bottom_avg.
For example, to further improve the information accuracy of this step, several first reference images may be used: after the first and second average gray values of each first reference image are calculated, the several first average gray values and the several second average gray values are each averaged, yielding more accurate first and second average gray values.
This step may be performed in the micro-processing unit of the FPD or in the first reference value determination module in the image processing apparatus, for example.
S22, obtaining the first reference value according to the first average gray value and the second average gray value.
In a specific example, the relationship among the first reference value reference, the first average gray value Top_avg of the first pixels, and the second average gray value Bottom_avg of the third pixels is:
Top_avg – Bottom_avg = reference,
that is, since the difference between the first average gray value of the first pixels and the second average gray value of the third pixels in the first reference image formed during the first scan time is affected only by the leakage current of the respective photodiodes, the first reference value can be used as a baseline. Relative to this value, the change in leakage current over the different time periods before and after the Gate reset signal is interrupted by exposure can be determined, and hence the gray-value change in the regions above and below the boundary line of the image to be processed caused by that change in leakage current.
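A minimal sketch of steps S21 and S22 follows, assuming the shielded first pixels occupy the topmost rows and the shielded third pixels the bottommost rows of each first reference image (function and parameter names are illustrative):

```python
import numpy as np

def first_reference_value(ref_images, n_top_rows=1, n_bottom_rows=1):
    """Sketch of S21-S22: Top_avg - Bottom_avg = reference.
    ref_images: list of first reference images (2-D arrays); the shielded first
    pixels are assumed to sit in the top rows and the shielded third pixels in
    the bottom rows."""
    top_avgs = [img[:n_top_rows, :].mean() for img in ref_images]         # first average gray value per image
    bottom_avgs = [img[-n_bottom_rows:, :].mean() for img in ref_images]  # second average gray value per image
    top_avg = float(np.mean(top_avgs))        # Top_avg, averaged over several reference images
    bottom_avg = float(np.mean(bottom_avgs))  # Bottom_avg
    return top_avg - bottom_avg               # first reference value "reference"
```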
S3, acquiring a second reference image generated in a second scanning time when the flat panel detector is in an automatic exposure detection mode, and determining a second reference value according to the second reference image.
Illustratively, the second scan time equals the first scan time in length; for example, each is the integration time for scanning rows 1 through 3072 shown in fig. 1b. In the automatic exposure detection mode, however, the exposure means that rows n+1 to 3072 are not reset by the Gate signal, whereas rows 1 to n were reset before the exposure, so the charge accumulated during the first integration time differs from that accumulated during the second integration time. As a result, both the second reference image generated by the flat panel detector and the image to be processed for detection develop a boundary line, as shown in fig. 2, at the position of the second pixel row at which exposure occurred.
In an alternative embodiment, as shown in fig. 6, the step S3 further includes:
s31, partitioning the second reference image by taking the boundary of the second reference image as a partition line, and obtaining a third average gray value of a first integral area of the first pixel of the partition line and a fourth average gray value of a second integral area of the third pixel of the partition line.
In this embodiment, as shown in fig. 1b, in the AED mode exposure occurs while the Gate signal is performing the top-to-bottom charge-clearing scan; the Gate termination row is produced at the moment of exposure, and after the preset integration time the detector enters the image capture mode, in which the scan records of all rows are read. The exposure changes how much leakage current the pixel rows above and below the Gate termination row have accumulated before entering the image capture mode, and this effect persists throughout the scan of rows 1 to 3072 in the image capture mode. The second scan time in the image capture mode therefore contains two integration intervals: the first integration time, 1t, for the rows from row 1 down to the Gate termination row (row n), and the second integration time, 2t, for the rows from the row after the Gate termination row (row n+1) down to the last row, row 3072.
In the image capture mode, the flat panel detector reads the scan signals of all rows to generate the second reference image; as shown in fig. 2, the position of the Gate termination row appears as a boundary line on the second reference image. In this embodiment, the second reference image is divided into two regions by the second pixel row at which the flat panel detector was exposed (i.e., the Gate termination row), in other words by the row in which the boundary line of the second reference image lies: a first integration region and a second integration region.
For the second reference image, the first integration region of this embodiment extends from the first pixel located in the first row to the second pixel row corresponding to the partition row; that is, the rows from the first row above the Gate termination row down to the Gate termination row form the first integration region, which can also be described as the area of the image from the row containing the boundary line upward to the first row. The first integration time of the first integration region is 1t, and the third average gray value Top_1t.avg for integration time 1t reflects the gray values of the first and second pixels of the first integration region under the influence of leakage current before the exposure occurred.
The second integration region extends from the third pixel located in the last row to the second pixel row adjacent to the partition row in the column direction; that is, the rows from the row below the Gate termination row down to the third pixels in the last row form the second integration region, which can also be described as the area of the image from the row following the boundary line downward to the last row. The second integration time of the second integration region is 2t, and the fourth average gray value Bottom_2t.avg for integration time 2t reflects the gray values of the second and third pixels of the second integration region under the influence of leakage current after exposure.
Because the state of the Gate signal differs before and after exposure, within the second scan time in the AED mode the charge accumulated during the first integration time differs from that accumulated during the second integration time, so leakage current affects the gray scale of the second reference image differently in the two integration times; the first and second integration regions do not overlap, and a boundary line is therefore formed in the exposure mode. This step partitions the image along the Gate termination row at the moment of exposure, obtains the third average gray value of the first integration region and the fourth average gray value of the second integration region after partitioning, and then, building on step S2, computes the average gray-value difference Diff between the first pixels in the first integration region and the third pixels in the second integration region as the second reference value, in preparation for the later calculation of the compensation value offset.
S32, determining the second reference value according to the third average gray value and the fourth average gray value.
In this embodiment, the relationship among the third average gray value Top_1t.avg, the fourth average gray value Bottom_2t.avg, and the second reference value Diff is:
Top_1t.avg – Bottom_2t.avg = Diff.
based on the above discussion, step S2 obtains the difference in gray value between the first pixel and the third pixel as a first reference value, and step S3 obtains the average difference in gray value between the first pixel in the first integration area and the third pixel in the second integration area as a second reference value, based on the two reference values, to compensate the image to be processed.
Illustratively, step S3 may be performed in the micro processing unit of the FPD or may be performed in the second reference value determining module in the image processing apparatus.
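A minimal sketch of steps S31 and S32 follows, under the reading that Diff is taken from the shielded first-pixel and third-pixel rows, which lie in the first and second integration regions respectively (names and row layout are assumptions):

```python
import numpy as np

def second_reference_value(aed_ref_image, n_top_rows=1, n_bottom_rows=1):
    """Sketch of S31-S32 under one reading of the text: the shielded first-pixel
    rows fall in the first integration region (time 1t) and the shielded
    third-pixel rows in the second integration region (time 2t)."""
    top_1t_avg = float(aed_ref_image[:n_top_rows, :].mean())         # third average gray value Top_1t.avg
    bottom_2t_avg = float(aed_ref_image[-n_bottom_rows:, :].mean())  # fourth average gray value Bottom_2t.avg
    return top_1t_avg - bottom_2t_avg                                # second reference value Diff
```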
And S4, compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value.
In an alternative embodiment, as shown in fig. 7, the step S4 further includes:
s41, determining the gray compensation value according to the first reference value and the second reference value.
Based on the above steps, step S2 obtains the gray value difference between the first pixel and the second pixel as the first reference value, where the relationship among the first reference value reference, the first average gray value top_avg of the first pixel, and the third average gray value bottom_avg of the third pixel is:
Top_avg–Bottom_avg=reference,
In step S3, the gray value difference between the first pixel of the first integration region and the third pixel of the second integration region after the partitioning is performed with the Gate termination line partition when the exposure occurs is obtained as the second reference value, where the relationship between the third average gray value top_1t.avg, the fourth gray value bottom_2t.avg and the second reference value Diff is:
Top_1t.avg–Bottom_2t.avg=Diff。
further, the relation of the gray compensation value offset obtained according to the first reference value reference and the second reference value Diff is:
Diff – reference = offset.
That is, the first reference value represents the influence of accumulated leakage-current charge on the gray values of the first and third pixels when there is no exposure, while the second reference value represents that influence on the first pixels of the first integration region and the third pixels of the second integration region in the AED mode; the gray compensation value offset, formed by their difference, is the compensation to be applied between the first integration region (integration time 1t, above the Gate termination row) and the second integration region (integration time 2t, below the Gate termination row) after exposure.
S42, compensating the image to be processed according to the gray compensation value.
In the automatic exposure detection mode, the image to be processed also forms the boundary line at the position of the second pixel row at which exposure occurred; that is, both the second reference image and the image to be processed of this embodiment contain the boundary line.
Illustratively, the integration time of the flat panel detector of the present embodiment for generating the image to be processed is the same as the first scanning time and the second scanning time, thereby ensuring the compensation accuracy.
In an alternative embodiment, as shown in fig. 8, the step S42 further includes:
s421, taking the dividing line of the image to be processed as a subarea line, determining a first integral area and a second integral area of the image to be processed, and determining a second pixel to be compensated in the first integral area or the second integral area of the image to be processed.
In this embodiment, in the AED mode the boundary line of the image to be processed is formed at the position of the second pixel row at which exposure occurred, i.e., the Gate termination row. The image to be processed is divided into two regions by that second pixel row (the Gate termination row), in other words by the row in which its boundary line lies: a first integration region and a second integration region.
For the image to be processed, the first integration region extends from the first pixels in the first row to the second pixels on the boundary line. Since the first integration region contains the first pixels, which do not collect light, only the second pixels in the first integration region need to be compensated. Similarly, the rows from the row after the second pixel row containing the boundary line down to the third pixels in the last row form the second integration region; since it contains the third pixels, which do not collect light, only the second pixels in the second integration region need to be compensated. When compensation is performed, one of the two integration regions can be selected, which improves compensation efficiency and precision.
S422, compensating the original gray values of the second pixels to be compensated according to the gray compensation value, wherein an original gray value is the gray value of a second pixel to be compensated after the preset integration time following exposure.
For example, the gray level of the first integration region is higher and that of the second integration region is lower, so this embodiment may apply negative compensation to the first integration region, i.e., subtract the gray compensation value offset from the original gray value. In a specific example, the compensation relation is:
Top_1t(x, y)' = Top_1t(x, y) + (-offset),
where the original gray value Top_1t(x, y) is the gray value, after the preset integration time following exposure, of the second pixel in row x and column y of the first integration region above the Gate termination row, i.e., of the first integration region above the row containing the boundary line, and Top_1t(x, y)' is the compensated gray value of that second pixel. For example, the preset integration time may be 1 second; that is, 1 second after exposure, the image capture mode shown in fig. 1b is entered.
In another specific example, since the gray level of the second integration region is lower, this embodiment may apply positive compensation to the second integration region, i.e., add the gray compensation value offset to the original gray value. In a specific example, the compensation relation is:
Top_2t(x, y)' = Top_2t(x, y) + offset,
where the original gray value Top_2t(x, y) is the gray value, after the preset integration time following exposure, of the second pixel in row x and column y of the second integration region below the Gate termination row, i.e., of the second integration region below the row containing the boundary line, and Top_2t(x, y)' is the compensated gray value of that second pixel. For example, the preset integration time may be 1 second; that is, 1 second after exposure, the image capture mode shown in fig. 1b is entered.
Step S4 may be performed in the micro-processing unit of the FPD or in a to-be-processed image compensation module in the image processing apparatus, for example.
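A minimal sketch of steps S41 and S42 follows (an illustration, not the patent's exact code); the boundary row index, function name, and row layout are assumptions, and only one integration region is compensated, as the text describes:

```python
import numpy as np

def compensate_image(image, boundary_row, reference, diff, n_top_rows=1):
    """Sketch of S41-S42: offset = Diff - reference, applied as negative
    compensation to the lit second-pixel rows of the first integration region."""
    offset = diff - reference                      # gray compensation value
    out = image.astype(np.float64).copy()
    # First integration region: from the first row down to the boundary row
    # (the Gate termination row); the shielded first-pixel rows are skipped.
    out[n_top_rows:boundary_row + 1, :] -= offset  # Top_1t(x, y)' = Top_1t(x, y) + (-offset)
    return out
```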
In an alternative embodiment, as shown in fig. 3, before step S2 "determine the first reference value from the first reference image", the method further comprises:
s1, performing correction processing on the flat panel detector, wherein the correction processing comprises gain correction and dark field correction.
Illustratively, the gain correction of embodiments of the present invention may include applying a denoising template to the image to be processed. In yet another example, the dark field correction may apply a preset coefficient to perform gray-scale correction on the image to be processed, thereby improving its quality.
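For illustration, a generic dark-field plus gain correction could take the following form; the formula is a common formulation assumed here, since the text only names the two correction steps:

```python
import numpy as np

def dark_and_gain_correct(raw, dark_frame, gain_map):
    """Generic dark-field and gain correction (assumed formulation)."""
    corrected = (raw.astype(np.float64) - dark_frame) * gain_map
    return np.clip(corrected, 0.0, None)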
With the method of this embodiment, no pre-correction template needs to be designed for the image to be processed, which reduces the cost of making such a template. The method is not limited by environmental factors such as temperature and humidity: the change in gray value between the regions above and below the boundary line in the image to be processed generated by the flat panel detector is compared in real time, and the gray levels of the second pixels in the first and second integration regions above and below the boundary line are compensated, eliminating the boundary line and improving the precision of the image to be processed.
In correspondence with the above image processing method, another embodiment of the present invention provides an image processing apparatus, as shown in fig. 9, comprising the following modules (a structural sketch is given after the list):
a first reference value determining module, configured to generate a first reference image based on a first scan time and determine a first reference value according to the first reference image;
a second reference value determining module, configured to acquire a second reference image generated within a second scan time while the flat panel detector is in the automatic exposure detection mode, and determine a second reference value according to the second reference image;
and an image-to-be-processed compensation module, configured to compensate the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed includes a boundary line extending along the row direction.
Since the image processing apparatus provided by this embodiment corresponds to the image processing methods provided by the foregoing embodiments, those implementations also apply to the apparatus of this embodiment; the processing performed by the apparatus can be found in the method embodiments above and is not described in detail here. The foregoing explanations and advantageous effects apply equally to this embodiment, so the common parts are not repeated.
Another embodiment of the present invention proposes a flat panel detector including the image processing apparatus of the above-described embodiment of the present invention.
In another alternative embodiment, the flat panel detector employs the pixel arrangement shown in fig. 4, comprising:
a second pixel that collects light;
a first pixel and a third pixel respectively located on either side of the second pixel in the column direction, the first pixel and the third pixel not collecting light;
the first pixels occupying at least one row; and
the third pixels occupying at least one row.
In an alternative embodiment, the flat panel detector further includes light-shielding metal layers formed over the first pixel and the third pixel respectively, the projection of each light-shielding metal layer covering the projection of the corresponding pixel; this arrangement prevents the first pixel and the third pixel from performing photoelectric conversion and ensures the precision of the image processing.
The layer structures of the first pixel and the third pixel of this embodiment are the same. In an alternative embodiment, taking the first pixel as an example, as shown in fig. 10, the first pixel includes:
a photodiode 511;
a driving thin film transistor 512 driving the photodiode 511;
a light shielding layer 513 covering the photodiode 511 and the driving thin film transistor 512.
As shown in fig. 10, the projection of the light shielding layer 513 covers the photodiode 511 and the driving thin film transistor 512 to ensure a light shielding effect. In this embodiment, the bias line is electrically connected to the photodiode, and the light shielding layer and the bias line may be formed by the same process.
Illustratively, the driving thin film transistor 512 includes:
a gate electrode 5121 formed on the substrate 6, a gate insulating layer 5123 covering the gate electrode 5121, an active layer 5124 formed on the gate insulating layer 5123, and a source electrode 5125 and a drain electrode 5126 respectively connected to the active layer 5124, wherein one of the source electrode 5125 and the drain electrode 5126 is electrically connected to the photodiode 511 through a connection line.
The first pixel further includes:
an insulating layer 5127 covering the source electrode, the drain electrode, and the photodiode;
a planarization layer 5128 formed on the insulating layer. Illustratively, the light shielding layer 513 of the present embodiment is formed on the planarization layer 5128.
The embodiment of the invention thus designs the structure of the flat panel detector and, in combination with the image processing method based on that structure, eliminates the boundary line in the AED mode and improves the precision of the image to be processed.
Another embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements: determining a first reference value from a first reference image, the first reference image being generated by the flat panel detector based on a first scan time; acquiring a second reference image generated within a second scan time while the flat panel detector is in an automatic exposure detection mode, and determining a second reference value from the second reference image; and compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed includes a boundary line extending along the row direction.
In practical applications, the computer-readable storage medium may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
As shown in fig. 11, another embodiment of the present invention provides a schematic structural diagram of a computer device. The computer device 12 shown in fig. 11 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in FIG. 11, the computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 11, commonly referred to as a "hard disk drive"). Although not shown in fig. 11, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 20. As shown in fig. 11, the network adapter 20 communicates with other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown in fig. 11, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the image processing method provided by the embodiments of the present invention.
It should be understood that the foregoing examples of the present invention are provided merely for clearly illustrating the present invention and are not intended to limit the embodiments of the present invention, and that various other changes and modifications may be made therein by one skilled in the art without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (12)

1. An image processing method, characterized in that the method is applied to a flat panel detector, the flat panel detector comprising a second pixel for light collection, and a first pixel and a third pixel respectively located on two sides of the second pixel in a column direction, wherein the first pixel and the third pixel do not collect light;
the method comprises the following steps:
determining a first reference value from a first reference image, the first reference image being generated by the flat panel detector based on a first scan time;
acquiring a second reference image generated in a second scan time when the flat panel detector is in an automatic exposure detection mode, and determining a second reference value according to the second reference image;
and compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed comprises a dividing line extending along the row direction.
2. The method of claim 1, wherein determining a first reference value from a first reference image further comprises:
determining a first average gray value of the first pixel and a second average gray value of the second pixel in the first reference image;
and obtaining the first reference value according to the first average gray value and the second average gray value.
3. The method of claim 1, wherein, when exposure occurs while the flat panel detector is in the automatic exposure detection mode, the second reference image forms the dividing line at the position of a second pixel row;
the acquiring of the second reference image generated in the second scan time when the flat panel detector is in the automatic exposure detection mode, and the determining of the second reference value according to the second reference image, further comprise:
taking the dividing line of the second reference image as a partition row, and obtaining a third average gray value of a first integration area on the side of the partition row toward the first pixel and a fourth average gray value of a second integration area on the side of the partition row toward the third pixel, wherein the first integration area extends, in the column direction, from the first pixel located in the first row to the second pixel corresponding to the partition row, the second integration area extends, in the column direction, from the third pixel located in the last row to the second pixel adjacent to the partition row, and the first integration area and the second integration area do not overlap;
and determining the second reference value according to the third average gray value and the fourth average gray value.
4. The method of claim 1, wherein compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value further comprises:
determining a gray compensation value according to the first reference value and the second reference value;
and compensating the image to be processed according to the gray compensation value.
5. The method according to claim 4, wherein, when exposure occurs while the flat panel detector is in the automatic exposure detection mode, the image to be processed of the flat panel detector forms the dividing line at the position of a second pixel row;
the compensating of the image to be processed according to the gray compensation value further comprises:
taking the dividing line of the image to be processed as a partition row, determining a first integration area and a second integration area of the image to be processed, and determining a second pixel to be compensated in the first integration area or the second integration area of the image to be processed;
and compensating an original gray value of the second pixel to be compensated according to the gray compensation value, wherein the original gray value is the gray value of the second pixel to be compensated after exposure for a preset integration time.
6. The method according to any one of claims 1 to 5, wherein, before the determining of the first reference value from the first reference image, the method further comprises:
performing correction processing on the flat panel detector, the correction processing comprising gain correction and dark-field correction.
7. An image processing apparatus for performing the method of any one of claims 1 to 6, the apparatus comprising:
the first reference value determining module is used for generating a first reference image based on a first scanning time and determining a first reference value according to the first reference image;
a second reference value determining module for acquiring a second reference image generated in a second scanning time when the flat panel detector is in the automatic exposure detection mode, and determining a second reference value according to the second reference image
And the image to be processed compensation module is used for compensating the image to be processed generated by the flat panel detector according to the first reference value and the second reference value, wherein the image to be processed comprises a dividing line extending along the row direction.
8. A flat panel detector comprising the image processing apparatus of claim 7.
9. The flat panel detector of claim 8, wherein the flat panel detector further comprises:
a second pixel for light collection; and
a first pixel and a third pixel respectively located on two sides of the second pixel in the column direction, wherein the first pixel and the third pixel do not collect light;
wherein the first pixel and the third pixel each include: a photodiode and a driving thin film transistor driving the photodiode;
the first pixels are at least one row, and the third pixels are at least one row.
10. The flat panel detector of claim 9, wherein the first pixel and the third pixel each further comprise: a light shielding layer, a projection of which covers the photodiode and the driving thin film transistor.
11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 6.
12. A computer readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
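Claim 6 recites gain correction and dark-field correction performed on the flat panel detector before the first reference value is determined. The claims do not specify the correction formulas; the short sketch below therefore uses the conventional flat-field formulation (dark-frame subtraction followed by per-pixel gain normalization), and the epsilon guard and mean-preserving scaling are assumptions of this sketch rather than features of the claimed method.

import numpy as np

def gain_and_dark_field_correction(raw, dark, flat, eps=1e-6):
    # Dark-field correction: subtract the offset frame acquired without exposure.
    signal = raw.astype(np.float64) - dark.astype(np.float64)
    # Gain (flat-field) correction: divide by the per-pixel response to a
    # uniform exposure, scaled so that the overall mean gray level is preserved.
    flat_signal = np.maximum(flat.astype(np.float64) - dark.astype(np.float64), eps)
    gain_map = flat_signal.mean() / flat_signal
    return signal * gain_map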
CN202111662769.6A 2021-12-30 2021-12-30 Image processing method, device, flat panel detector, equipment and storage medium Pending CN116416188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111662769.6A CN116416188A (en) 2021-12-30 2021-12-30 Image processing method, device, flat panel detector, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111662769.6A CN116416188A (en) 2021-12-30 2021-12-30 Image processing method, device, flat panel detector, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116416188A (en) 2023-07-11

Family

ID=87049966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111662769.6A Pending CN116416188A (en) 2021-12-30 2021-12-30 Image processing method, device, flat panel detector, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116416188A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination