WO2022270291A1 - Image processing device, and image processing method - Google Patents

Image processing device, and image processing method

Info

Publication number
WO2022270291A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
orthogonal transform
image processing
transform coefficients
regions
Prior art date
Application number
PCT/JP2022/022826
Other languages
French (fr)
Japanese (ja)
Inventor
巧実 岡崎
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2022270291A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20052 Discrete cosine transform [DCT]

Definitions

  • the present disclosure relates to an image processing device and an image processing method.
  • Patent Literature 1 discloses a fundus image processing apparatus capable of automatically extracting a blood vessel region from a fundus image while maintaining the blood vessel fine shape such as the blood vessel diameter.
  • An object of the present disclosure is to provide an image processing device and an image processing method capable of emphasizing specific elements in image data compared to other elements.
  • One aspect of the present disclosure provides an image processing device that includes a processor and a storage device that stores image data.
  • The processor divides the image into a plurality of regions, performs an orthogonal transform on each of the plurality of regions, uses a plurality of orthogonal transform coefficients calculated for each of the plurality of regions to calculate a representative value representing each region, and adjusts the sharpness of the image based on a value calculated from at least one of the calculated representative values and on the plurality of orthogonal transform coefficients.
  • Another aspect of the present disclosure provides an image processing method for performing image processing on image data stored in a storage device by a processor.
  • This image processing method divides an image into a plurality of regions, performs an orthogonal transform on each of the plurality of regions, uses a plurality of orthogonal transform coefficients calculated for each of the plurality of regions to calculate a representative value representing each region, and adjusts the sharpness of the image based on a value calculated from at least one of the calculated representative values and on the plurality of orthogonal transform coefficients.
  • FIG. 1 is a block diagram showing a configuration example of an image processing device according to an embodiment of the present disclosure
  • FIG. 2 is a flow chart illustrating the procedure of image processing executed by the processor of the image processing apparatus of FIG. 1. FIG. 3 is a schematic diagram for explaining the DCT executed in step S7 of FIG. 2. FIG. 4 is a flowchart illustrating the detailed procedure of the DCT coefficient enhancement processing S8 in FIG. 2. FIG. 5 is a schematic diagram for explaining the grouping processing S81 in FIG. 4. FIG. 6 is a schematic diagram for explaining the normalization processing S82 in FIG. 4. FIG. 7 is a graph illustrating the magnifications for the first to fourth groups of DCT blocks.
  • For example, when a human body is imaged with a camera, deep blood vessels appear blurred in the captured image compared with blood vessels near the body surface, and may not be distinguishable.
  • In order to solve this problem, the present inventors have devised an image processing apparatus and an image processing method that emphasize specific elements in image data compared to other elements so that the specific elements can be easily identified.
  • FIG. 1 is a block diagram showing a configuration example of an image processing device 100 according to an embodiment of the present disclosure.
  • The image processing apparatus 100 includes a processor 1, a storage device 2, an input interface (I/F) 3, and an output interface (I/F) 4.
  • the processor 1 implements the functions of the image processing apparatus 100 by executing information processing.
  • the processor 1 is composed of circuits such as a CPU, MPU, and FPGA. Such information processing is realized, for example, by the processor 1 executing a program 21 stored in the storage device 2 .
  • the storage device 2 is a recording medium for recording various information including the program 21 and data necessary for realizing the functions of the image processing device 100 .
  • the storage device 2 is realized by, for example, a semiconductor storage device such as a flash memory, a solid state drive (SSD), a magnetic storage device such as a hard disk drive (HDD), or other recording media alone or in combination.
  • the storage device 2 may include volatile memory such as SRAM and DRAM.
  • the input interface 3 is an interface circuit that connects the image processing device 100 and an external device in order to input information such as the image data 11 to the image processing device 100 .
  • an external device is, for example, another information processing terminal (not shown) or a device such as a camera that acquires the image data 11 .
  • the input interface 3 may be a communication circuit that performs data communication according to existing wired communication standards or wireless communication standards.
  • the output interface 4 is an interface circuit that connects the image processing device 100 and an external output device in order to output information from the image processing device 100 .
  • Such an output device is for example the display 12 .
  • the output interface 4 may be a communication circuit that is connected to the network 13 and performs data communication according to existing wired communication standards or wireless communication standards.
  • the input interface 3 and output interface 4 may be realized by similar hardware.
  • FIG. 2 is a flow chart illustrating the procedure of image processing executed by the processor 1 of the image processing apparatus 100 of FIG. 1.
  • In step S1 of FIG. 2, the processor 1 acquires the image data 11.
  • the image data 11 may be input to the processor 1 via the input interface 3 as shown in FIG. 1, or may be stored in the storage device 2 in advance.
  • In step S2 of FIG. 2, the processor 1 performs noise removal processing on the image represented by the image data 11.
  • the image represented by the image data 11 may also be expressed as "image 11" using the same reference numerals.
  • In step S3 of FIG. 2, the processor 1 performs flattening processing to increase the contrast of the image 11. For example, the processor 1 executes histogram equalization processing, which widens the distribution of the luminance values of the pixels of the image 11.
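As a concrete illustration, histogram equalization of an 8-bit grayscale image can be sketched as follows; this is a minimal NumPy version, not an implementation prescribed by the patent:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Spread the luminance distribution of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each luminance value through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

Pixels keep their relative ordering while the occupied luminance range is stretched toward the full 0 to 255 interval.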
  • In step S4 of FIG. 2, the processor 1 executes gamma correction processing.
  • In step S5 of FIG. 2, the processor 1 performs other brightness correction processing. For example, the processor 1 determines each pixel value of the output image not only from the corresponding input pixel value but also from the pixel values surrounding the corresponding input pixel.
  • For example, the processor 1 identifies an object in the input image by comparing the luminance value of an input pixel with the average luminance value of the surrounding pixels, and corrects the luminance values so that the object appears clearly in the output image.
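A minimal sketch of such neighborhood-based brightness correction, assuming a simple linear boost of each pixel's deviation from its local mean; the window size and gain are illustrative values, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_correction(img: np.ndarray, size: int = 15,
                              gain: float = 1.5) -> np.ndarray:
    """Amplify each pixel's deviation from its neighborhood mean."""
    img_f = img.astype(np.float64)
    local_mean = uniform_filter(img_f, size=size)  # average over size x size window
    out = local_mean + gain * (img_f - local_mean)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Pixels darker than their surroundings become darker and brighter pixels become brighter, so an object whose luminance differs from its background stands out.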
  • In step S6 of FIG. 2, the processor 1 performs resizing processing that adjusts at least one of the size of the image 11 and the size of the DCT blocks (described later) so that the width of an object such as a blood vessel fits within a DCT block.
  • Here, the "size" refers to the dimensions of the image 11, for example its vertical and/or horizontal extent.
  • For example, the processor 1 reduces the size of the image 11 so that the width of the blood vessel fits within a DCT block.
  • The ratio between the image size before and after resizing can be calculated by the processor 1 by detecting the width of an object such as a blood vessel in the image 11 and using the detection result, the size of the image 11, and the size of the DCT block. Alternatively, this ratio may be input to the processor 1 by the user via an input device such as a keyboard or touch panel and the input interface 3.
  • In step S7 of FIG. 2, the processor 1 divides the image 11 processed in steps S2 to S6, which has M pixels in the horizontal direction (x direction) and M pixels in the vertical direction (y direction), into a plurality of blocks B_ij each having a size of N × N pixels (herein referred to as "DCT blocks"), and performs a discrete cosine transform (DCT) on each block.
  • Here, M and N are integers of 2 or more.
  • DCT is an example of "orthogonal transform" in this disclosure.
  • Multiple DCT blocks are an example of "multiple regions" in this disclosure.
  • i and j are integers equal to or greater than 0, and B ij indicates the i-th DCT block in the horizontal direction (x direction) and the j-th DCT block in the vertical direction (y direction) in the image 11 .
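The block division and per-block DCT of step S7 can be sketched as follows, using SciPy's `dctn`. The block size `n` and the assumption that the image dimensions are multiples of `n` are simplifications for illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(img: np.ndarray, n: int = 8) -> np.ndarray:
    """Split a grayscale image into n x n blocks B_ij and DCT each block.

    Assumes the image dimensions are multiples of n; a real implementation
    would pad or crop first (cf. the resizing processing S6).
    """
    h, w = img.shape
    coeffs = np.empty((h // n, w // n, n, n))
    for j in range(h // n):
        for i in range(w // n):
            block = img[j * n:(j + 1) * n, i * n:(i + 1) * n].astype(np.float64)
            coeffs[j, i] = dctn(block, norm="ortho")  # orthonormal 2-D DCT-II
    return coeffs
```

With `norm="ortho"` the transform is orthonormal, so each block's energy is preserved and `idctn(coeffs[j, i], norm="ortho")` recovers the block exactly.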
  • By the DCT, each block of the image 11 is represented as a sum of basis images having various frequency components.
  • An example of 64 base images corresponding to each DCT coefficient F(u, v) is shown in the uv coordinate system indicated by the arrow of "DCT" in FIG.
  • Specifically, the processor 1 transforms the image data f(x, y) of each DCT block into image data having frequency components, that is, DCT coefficients F(u, v), by the following equation (1), whose transformation basis is the cosine function.
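Equation (1) itself did not survive extraction; since the text states that the transformation basis is the cosine, it presumably corresponds to the standard two-dimensional DCT-II for an N × N block:

```latex
F(u,v) = \frac{2}{N}\, C(u)\, C(v)
  \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)
  \cos\frac{(2x+1)u\pi}{2N}
  \cos\frac{(2y+1)v\pi}{2N},
\qquad
C(k) = \begin{cases} 1/\sqrt{2} & (k = 0) \\ 1 & (k \neq 0) \end{cases}
```

Here F(0, 0) is the DC component and coefficients with larger u, v correspond to higher spatial frequencies.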
  • In step S8 of FIG. 2, the processor 1 executes DCT coefficient enhancement processing. For example, the processor 1 multiplies the frequency components F(u, v) of the DCT coefficients in each DCT block that satisfy a specific condition by a magnification greater than 1. A predetermined value may be added instead of multiplying by the magnification. The detailed procedure of the DCT coefficient enhancement processing S8 will be described later.
  • In step S9 of FIG. 2, the processor 1 performs an inverse discrete cosine transform (IDCT) on the DCT coefficients F(u, v) enhanced by the DCT coefficient enhancement processing S8.
  • After the IDCT, the processor 1 executes reverse resizing processing corresponding to the resizing processing S6. Specifically, the processor 1 returns the image 11 after the IDCT processing S9 from the size set in the resizing processing S6 to its original size.
  • FIG. 4 is a flowchart illustrating a detailed procedure of the DCT coefficient enhancement processing S8 of FIG.
  • FIG. 5 is a schematic diagram for explaining the grouping process S81 of FIG.
  • The right diagram of FIG. 5 schematically shows the magnitudes of the DCT coefficients of the DCT block B_05 of the image 11 shown in the left diagram of FIG. 5.
  • the numbers in the right diagram of FIG. 5 indicate the order of magnitude of the absolute values of the DCT coefficients.
  • In the grouping processing S81, the processor 1 classifies the DCT coefficients having the second to fourth largest absolute values into the first group, the DCT coefficients having the fifth to eighth largest absolute values into the second group, the DCT coefficients having the ninth to sixteenth largest absolute values into the third group, and the DCT coefficients having the seventeenth and smaller absolute values into the fourth group.
  • In the right diagram of FIG. 5, the first group is indicated by horizontal-line hatching, the second group by dot hatching, and the third group by diagonal-line hatching. The DC component and the fourth group are not hatched, and the rank numbers of the smaller DCT coefficients in the fourth group are omitted.
  • In this way, the DCT coefficients of each DCT block are classified into the first to fourth groups.
  • In step S82 of FIG. 4, the processor 1 determines the group having the largest absolute values of the DCT coefficients among all groups of all DCT blocks. Specifically, the processor 1 first obtains, for each DCT block, the sum r of the absolute values of the DCT coefficients belonging to the first group. The calculated sum r is an example of a "representative value" representing each DCT block. The processor 1 then determines the maximum representative value r_max among the representative values r obtained for all DCT blocks.
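The grouping S81 and the representative value r of one block can be sketched as follows. The rank boundaries follow the example above; leaving the rank-1 coefficient (typically dominated by the DC component) ungrouped is inferred from the description of FIG. 5:

```python
import numpy as np

def group_and_representative(coeffs: np.ndarray):
    """Classify one block's DCT coefficients into groups 1-4 by |value| rank.

    Ranks 2-4 form group 1, ranks 5-8 group 2, ranks 9-16 group 3, and
    rank 17 onward group 4; the rank-1 coefficient is left ungrouped
    (label 0). The representative value r of the block is the sum of the
    absolute values of the coefficients in group 1.
    """
    flat = coeffs.ravel()
    order = np.argsort(-np.abs(flat))  # indices, largest magnitude first
    groups = np.zeros(flat.size, dtype=int)
    groups[order[1:4]] = 1
    groups[order[4:8]] = 2
    groups[order[8:16]] = 3
    groups[order[16:]] = 4
    r = float(np.abs(flat[groups == 1]).sum())
    return groups.reshape(coeffs.shape), r
```

Computing r for every block and taking the maximum then yields r_max of step S82.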
  • FIG. 6 is a schematic diagram for explaining step S82 in FIG. 4.
  • the upper diagram of FIG. 6 shows the representative value r in the first group of each DCT block of the image 11 .
  • In this example, the maximum representative value r_max is 224.
  • processor 1 normalizes the DCT coefficients in each DCT block based on the maximum representative value r_max .
  • For example, the processor 1 divides the DCT coefficients in each DCT block by the maximum representative value r_max.
  • The lower diagram of FIG. 6 shows the normalized value r_n of the sum of the absolute values of the DCT coefficients in the first group of each DCT block.
  • In step S83 of FIG. 4, the processor 1 determines, for each group of each DCT block, a magnification m according to the normalized value r_n calculated in the normalization processing S82. For example, the processor 1 determines a magnification m1 for the first group, a magnification m2 for the second group, a magnification m3 for the third group, and a magnification m4 for the fourth group.
  • FIG. 7 is a graph illustrating such magnifications m1 to m4 as functions of the normalized value r_n.
  • In FIG. 7, the solid line indicates m1, the dashed line indicates m2, the dash-dot line indicates m3, and the dotted line indicates m4.
  • The relationship between the normalized value r_n and the magnifications m1 to m4 as shown in the graph of FIG. 7 may be stored in advance in the storage device 2 as a table.
  • The processor 1 refers to the relationship between the normalized value r_n and the magnifications m1 to m4 stored in the storage device 2, and determines the magnifications m1 to m4 according to the magnitude of the normalized value r_n.
  • In step S84 of FIG. 4, the processor 1 multiplies the corresponding DCT coefficients by the magnifications determined in step S83.
  • For example, the magnification m1 for the first group and the magnification m2 for the second group are set to 1 or more, while the magnification m3 for the third group and the magnification m4 for the fourth group are set to 1 or less. In addition, m1 > m2 and m3 > m4.
  • In this way, the image processing apparatus 100 multiplies the DCT coefficients of the first group, which contribute most to the construction of the image 11, by the largest magnification m1, and multiplies the DCT coefficients of the second group, which contribute less than the first group, by the next-largest magnification m2.
  • On the other hand, the image processing apparatus 100 regards the DCT coefficients of the third and fourth groups, which contribute relatively little to the construction of the image 11, as noise, and attenuates them by multiplying them by magnifications of 1 or less.
  • As shown in FIG. 7, the magnification m1 for the first group and the magnification m2 for the second group take their maximum values in the range where the normalized value r_n is greater than 0 and less than 1. In this way, blocks whose frequency components are relatively small are enhanced the most, so objects existing at different depths can be corrected to have similar appearances.
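The exact curves of FIG. 7 are not recoverable from the text; the following illustrative magnification function merely mirrors the stated constraints (m1, m2 at least 1 with a peak at an interior r_n; m3, m4 at most 1; m1 > m2 and m3 > m4):

```python
def magnification(group: int, rn: float) -> float:
    """Illustrative magnification m per group as a function of rn in [0, 1].

    The specific curve shapes and constants are assumptions, not values
    from the patent; only the ordering constraints are preserved.
    """
    peak = 4.0 * rn * (1.0 - rn)  # 0 at rn = 0 or 1, maximal at rn = 0.5
    if group == 1:
        return 1.0 + 1.0 * peak   # up to 2.0: strongest emphasis
    if group == 2:
        return 1.0 + 0.5 * peak   # up to 1.5: moderate emphasis
    if group == 3:
        return 0.9                # mild attenuation
    return 0.5                    # group 4: strongest attenuation
```

Because the boost peaks at an interior r_n, blocks whose first-group coefficients are small relative to r_max receive a larger relative boost than the strongest blocks, which evens out objects at different depths.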
  • As described above, the image processing apparatus 100 can sharpen an object in the output image data by changing the magnification according to the degree of contribution to the construction of the original image data. For example, even if, in original image data obtained by imaging a living body, deep blood vessels are unclear compared with blood vessels near the body surface, the image processing apparatus 100 can sharpen, in the output image data, the blood vessels near the body surface, the deep blood vessels, or both.
  • the processor 1 divides the image 11 into a plurality of DCT blocks and performs DCT on each of the plurality of DCT blocks.
  • The processor 1 calculates a representative value representing each DCT block using the plurality of DCT coefficients calculated for each of the plurality of DCT blocks, and adjusts the sharpness of the image 11 based on a value calculated from at least one of the calculated representative values and on the plurality of DCT coefficients.
  • the image processing apparatus 100 can emphasize specific elements in the image 11 compared to other elements.
  • The processor 1 may classify the plurality of DCT coefficients into a plurality of groups based on their absolute values and, for each classified group, adjust the sharpness of the image 11 based on a value calculated from at least one of the calculated representative values and on the plurality of DCT coefficients. For example, the processor 1 may multiply the absolute values of the DCT coefficients in each of the classified groups by different magnifications.
  • The processor 1 may determine the magnification for each of the multiple groups based on the absolute values of the DCT coefficients in that group.
  • Thereby, each DCT coefficient of a DCT block can be emphasized according to the extent to which its group contributes to the construction of the image 11.
  • The processor 1 may select a predetermined representative value from among the representative values, normalize the representative value representing each region based on the predetermined representative value, and adjust the sharpness of the image based on the DCT coefficients included in each DCT block and a value according to the normalized representative value.
  • The processor 1 may classify the plurality of DCT coefficients into a plurality of groups in descending order of absolute value (step S81). In this case, the processor 1 calculates, as the maximum representative value r_max, the sum of the absolute values of the DCT coefficients in the group having the largest absolute values among the plurality of groups, and normalizes the DCT coefficients in each of the plurality of DCT blocks based on the calculated r_max (step S82). The processor 1 then multiplies the normalized DCT coefficients by magnifications corresponding to their absolute values (steps S83 and S84).
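Putting the pieces together, the DCT, enhancement, and IDCT loop (steps S7 to S9) can be sketched as follows. For brevity the four groups are collapsed into one boosted set and one attenuated set, with illustrative magnifications, and the image dimensions are assumed to be multiples of the block size:

```python
import numpy as np
from scipy.fft import dctn, idctn

def sharpen(img: np.ndarray, n: int = 8,
            m_boost: float = 1.5, m_cut: float = 0.5) -> np.ndarray:
    """Minimal sketch of the DCT / enhance / IDCT loop.

    Within each n x n block, the coefficients ranked 2-8 by magnitude are
    boosted and the rest (except the rank-1 coefficient) attenuated. The
    magnification values are illustrative, not taken from the patent.
    """
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    for j in range(0, h, n):
        for i in range(0, w, n):
            c = dctn(img[j:j + n, i:i + n].astype(np.float64), norm="ortho")
            flat = c.ravel()                      # view: edits write back into c
            order = np.argsort(-np.abs(flat))
            flat[order[1:8]] *= m_boost           # dominant structure: emphasize
            flat[order[8:]] *= m_cut              # weak components: treat as noise
            out[j:j + n, i:i + n] = idctn(c, norm="ortho")
    return np.clip(out, 0, 255)
```

With both magnifications set to 1 the pipeline reduces to DCT followed by IDCT and returns the input unchanged, which is a useful sanity check.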
  • the image processing device 100 may execute image sharpening processing for sharpening the object appearing in the image 11 .
  • Before dividing the image 11 into the plurality of DCT blocks, the processor 1 adjusts at least one of the size of the image 11 and the size of the plurality of DCT blocks so that the width of the object fits within each DCT block.
  • the width of the object fits within the DCT block, so the object can be sharpened within the DCT block.
  • In the above embodiment, the processor 1 executes the grouping processing S81 and then performs the normalization processing S82 to determine the magnification m (step S83). However, the present disclosure is not limited to this; after executing the grouping processing S81, if the representative value r of a DCT block is larger than a predetermined threshold, the processor 1 may increase the absolute values of the relevant DCT coefficients by multiplying them by a magnification larger than 1. With this configuration, the image processing apparatus 100 can emphasize only the DCT coefficients having relatively large absolute values among the plurality of DCT coefficients of the DCT blocks in the image 11.
  • In the above embodiment, the processor 1 determines the magnification m (step S83) in the DCT coefficient enhancement processing S8. However, the present disclosure is not limited to this; after the processor 1 executes the grouping processing S81, the user may manually determine the magnification m instead of steps S82 and S83 in FIG. 4. In this case, the processor 1 multiplies the corresponding DCT coefficients by the user-determined magnification m. For example, the user determines the magnification m using a graphical user interface (GUI), and may adjust it while checking, on the display 12 or the like, the image 11 that has undergone the multiplication processing S84 and the IDCT processing S9.
  • In the above embodiment, image data 11 generated by imaging an object covered with a shield using light having a specific wavelength has been described.
  • However, the image data 11 that can be processed by the image processing apparatus 100 is not limited to optically captured data, and may be, for example, ultrasound images or magnetic resonance imaging (MRI) images.
  • In the above embodiment, DCT was described as an example of the orthogonal transform, but the orthogonal transform is not limited to this and may be, for example, the discrete Fourier transform (DFT).
  • DFT includes Fast Fourier Transform (FFT).
  • the present disclosure is applicable to image processing technology.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image processing device according to an embodiment of the present disclosure comprises a processor, and a storage device that stores data relating to an image. The processor: divides the image into a plurality of regions; subjects each of the plurality of regions to an orthogonal transformation; uses a plurality of orthogonal transformation coefficients calculated for each of the plurality of regions to calculate a representative value representing each region; and adjusts the sharpness of the image on the basis of a value calculated on the basis of at least one of the calculated representative values, and the plurality of orthogonal transformation coefficients.

Description

Image processing device and image processing method
The present disclosure relates to an image processing device and an image processing method.
A technique is known for sharpening an object appearing in an image by means such as increasing the contrast. For example, Patent Literature 1 discloses a fundus image processing apparatus capable of automatically extracting a blood vessel region from a fundus image while maintaining fine blood vessel shapes such as the blood vessel diameter.
Japanese Unexamined Patent Application Publication No. 2018-23602
An object of the present disclosure is to provide an image processing device and an image processing method capable of emphasizing specific elements in image data compared to other elements.
One aspect of the present disclosure provides an image processing device that includes a processor and a storage device that stores image data. The processor divides the image into a plurality of regions, performs an orthogonal transform on each of the plurality of regions, uses a plurality of orthogonal transform coefficients calculated for each of the plurality of regions to calculate a representative value representing each region, and adjusts the sharpness of the image based on a value calculated from at least one of the calculated representative values and on the plurality of orthogonal transform coefficients.
Another aspect of the present disclosure provides an image processing method in which a processor performs image processing on image data stored in a storage device. This image processing method includes dividing an image into a plurality of regions, performing an orthogonal transform on each of the plurality of regions, using a plurality of orthogonal transform coefficients calculated for each of the plurality of regions to calculate a representative value representing each region, and adjusting the sharpness of the image based on a value calculated from at least one of the calculated representative values and on the plurality of orthogonal transform coefficients.
According to the image processing device and the image processing method of the present disclosure, it is possible to emphasize specific elements in image data compared to other elements.
FIG. 1 is a block diagram showing a configuration example of an image processing device according to an embodiment of the present disclosure. FIG. 2 is a flow chart illustrating the procedure of image processing executed by the processor of the image processing apparatus of FIG. 1. FIG. 3 is a schematic diagram for explaining the DCT executed in step S7 of FIG. 2. FIG. 4 is a flowchart illustrating the detailed procedure of the DCT coefficient enhancement processing S8 in FIG. 2. FIG. 5 is a schematic diagram for explaining the grouping processing S81 in FIG. 4. FIG. 6 is a schematic diagram for explaining the normalization processing S82 in FIG. 4. FIG. 7 is a graph illustrating the magnifications for the first to fourth groups of DCT blocks.
(Findings on which this disclosure is based)
In the conventional technology, when an image is generated by imaging an object covered by a shield using light having a specific wavelength, the object appears unclear in the image because of attenuation, scattering, and the like of the light by the shield. Here, examples of the "object" are tissues such as blood vessels, bones, and internal organs, and examples of the "shield" are tissues covering the object, such as skin, fat, and muscle. Examples of "light having a specific wavelength" include visible light, infrared light, ultraviolet light, and X-rays.
For example, when a human body is imaged with a camera, deep blood vessels appear blurred in the captured image compared with blood vessels near the body surface, and may not be distinguishable.
In order to solve the above problems, the present inventors have devised an image processing apparatus and an image processing method that emphasize specific elements in image data compared to other elements so that the specific elements can be easily identified.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings as appropriate. However, unnecessarily detailed description may be omitted; for example, detailed descriptions of well-known matters and redundant descriptions of substantially identical configurations may be omitted. This is to avoid unnecessary verbosity in the following description and to facilitate understanding by those skilled in the art. The applicant provides the accompanying drawings and the following description so that those skilled in the art can fully understand the present disclosure, and does not intend to limit the claimed subject matter thereby.
(Embodiment)
[1. Configuration]
FIG. 1 is a block diagram showing a configuration example of an image processing device 100 according to an embodiment of the present disclosure. The image processing apparatus 100 includes a processor 1, a storage device 2, an input interface (I/F) 3, and an output interface (I/F) 4.
The processor 1 implements the functions of the image processing apparatus 100 by executing information processing. The processor 1 is composed of circuits such as a CPU, an MPU, and an FPGA. Such information processing is realized, for example, by the processor 1 executing a program 21 stored in the storage device 2.
The storage device 2 is a recording medium that records various information, including the program 21 and data, necessary for realizing the functions of the image processing device 100. The storage device 2 is realized by, for example, a semiconductor storage device such as a flash memory or a solid-state drive (SSD), a magnetic storage device such as a hard disk drive (HDD), or other recording media, alone or in combination. The storage device 2 may include volatile memory such as SRAM and DRAM.
The input interface 3 is an interface circuit that connects the image processing device 100 to an external device in order to input information such as the image data 11 to the image processing device 100. Such an external device is, for example, another information processing terminal (not shown) or a device such as a camera that acquires the image data 11. The input interface 3 may be a communication circuit that performs data communication according to an existing wired or wireless communication standard.
The output interface 4 is an interface circuit that connects the image processing device 100 to an external output device in order to output information from the image processing device 100. Such an output device is, for example, the display 12. The output interface 4 may be a communication circuit that is connected to the network 13 and performs data communication according to an existing wired or wireless communication standard. The input interface 3 and the output interface 4 may be realized by similar hardware.
[2.動作]
[2-1.全体動作]
 図2は、図1の画像処理装置100のプロセッサ1によって実行される画像処理の手順を例示するフローチャートである。
[2. Operation]
[2-1. Overall operation]
FIG. 2 is a flowchart illustrating the procedure of image processing executed by the processor 1 of the image processing apparatus 100 of FIG. 1.
 図2のステップS1において、プロセッサ1は、画像データ11を取得する。画像データ11は、図1に示すように入力インタフェース3を介してプロセッサ1に入力されてもよいし、記憶装置2に予め格納されていてもよい。 In step S1 of FIG. 2, the processor 1 acquires the image data 11. The image data 11 may be input to the processor 1 via the input interface 3 as shown in FIG. 1, or may be stored in the storage device 2 in advance.
 図2のステップS2において、プロセッサ1は、画像データ11によって表される画像に対してノイズ除去処理を実行する。なお、本明細書では、画像データ11によって表される画像についても、同じ参照符号を用いて「画像11」と表現することがある。 In step S2 of FIG. 2, the processor 1 performs noise removal processing on the image represented by the image data 11. In this specification, the image represented by the image data 11 may also be expressed as "image 11" using the same reference numerals.
 図2のステップS3において、プロセッサ1は、画像11のコントラストを上げるための平坦化処理を実行する。例えば、プロセッサ1は、画像11の各画素の輝度値について、輝度値分布を広げるヒストグラム平坦化処理を実行する。
 図2のステップS4において、プロセッサ1は、ガンマ補正処理を実行する。
 図2のステップS5において、プロセッサ1は、その他の輝度補正処理を実行する。例えば、プロセッサ1は、出力画像の各画素値を、対応する入力画素値だけを用いて求めるのではなく、対応する入力画素の周囲の画素値をも用いて求める。例えば、プロセッサ1は、入力画素の輝度値を、当該入力画素の周囲の画素における輝度値平均値と比較することにより、入力画像において対象物を特定し、出力画像において対象物が鮮明となるように輝度値を補正する。
At step S3 of FIG. 2, the processor 1 performs a flattening process to increase the contrast of the image 11. For example, the processor 1 executes histogram equalization processing for widening the luminance value distribution of the luminance values of the pixels of the image 11.
At step S4 in FIG. 2, the processor 1 executes gamma correction processing.
At step S5 in FIG. 2, the processor 1 performs other brightness correction processing. For example, the processor 1 determines each pixel value of the output image not only from the corresponding input pixel value but also from the pixel values surrounding the corresponding input pixel. For example, the processor 1 identifies an object in the input image by comparing the luminance value of an input pixel with the average luminance value of the pixels surrounding that input pixel, and corrects the luminance values so that the object becomes clear in the output image.
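The histogram equalization of step S3 can be sketched as follows. This is a minimal example under the assumption of 8-bit grayscale input; the function name and the exact normalization convention are illustrative and not taken from the patent text.

```python
import numpy as np

def equalize_histogram(img):
    """Spread the luminance distribution over the full 8-bit range
    to raise contrast (cf. step S3). img is a 2-D uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # lookup table per gray level
    return lut[img]
```

Applied to an image whose gray levels cluster in a narrow band, the lookup table stretches that band toward the full 0–255 range.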
 図2のステップS6において、プロセッサ1は、血管等の対象物の幅が後述のDCTブロック内に包含されるように、画像11のサイズ及びDCTブロックのサイズのうちの少なくとも一方を調整するリサイズ処理を実行する。ここで、「サイズ」とは、画像11によって表される画像の寸法を表し、例えば画像11によって表される画像の縦及び/又は横の大きさを表す。例えば、プロセッサ1は、DCTブロックと比較して血管が大きく、画像11における血管幅がDCTブロック内に包含されない場合、画像11のサイズを小さくして血管幅がDCTブロック内に包含されるように調整する。リサイズ処理前の画像サイズとリサイズ処理後の画像サイズとの比率は、プロセッサ1が画像11内の血管等の対象物の幅を検知し、検知結果と画像11のサイズ及びDCTブロックのサイズに基づいて計算してもよい。あるいは、リサイズ処理前の画像サイズとリサイズ処理後の画像サイズとの比率は、ユーザにより、キーボード、タッチパネル等の入力装置及び入力インタフェース3を介してプロセッサ1に入力されてもよい。 In step S6 of FIG. 2, the processor 1 executes resizing processing that adjusts at least one of the size of the image 11 and the size of the DCT blocks (described later) so that the width of an object such as a blood vessel is contained within a DCT block. Here, "size" represents the dimensions of the image represented by the image data 11, for example the vertical and/or horizontal extent of that image. For example, if a blood vessel is large compared to the DCT block and its width in the image 11 is not contained within a DCT block, the processor 1 reduces the size of the image 11 so that the vessel width fits within a DCT block. The ratio between the image size before resizing and the image size after resizing may be calculated by the processor 1, which detects the width of an object such as a blood vessel in the image 11 and uses the detection result together with the size of the image 11 and the size of the DCT blocks. Alternatively, the ratio may be input to the processor 1 by the user via an input device such as a keyboard or touch panel and the input interface 3.
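The resize-ratio computation of step S6 might look like the following sketch. The `block_size` default and the shrink-only policy are assumptions for illustration; the text leaves the exact rule open.

```python
def resize_ratio(object_width_px, block_size=8):
    """Return the scale factor applied to the image in step S6 so that an
    object of the detected width (in pixels) fits inside one DCT block.
    Shrink-only policy: images whose objects already fit are left as-is."""
    if object_width_px <= block_size:
        return 1.0
    return block_size / object_width_px
```

For example, a vessel detected as 16 px wide yields a ratio of 0.5 for an 8x8 DCT block.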
 図2のステップS7において、プロセッサ1は、図3に示すように、図2のステップS2~S6の処理を受けた画像11を、それぞれ水平方向(x方向)にM画素、垂直方向(y方向)にN画素のサイズを有する複数のブロック(本明細書において、「DCTブロック」と呼ぶ。)Bijに分割し、各ブロックに対して離散コサイン変換(Discrete Cosine Transform、DCT)を実行する。ここで、M及びNは、2以上の整数であり、図3に示した例ではM=N=8である。DCTは、本開示の「直交変換」の一例である。複数のDCTブロックは、本開示の「複数の領域」の一例である。また、i及びjは、0以上の整数であり、Bijは、画像11において水平方向(x方向)にi番目、垂直方向(y方向)にj番目のDCTブロックを指す。 In step S7 of FIG. 2, as shown in FIG. 3, the processor 1 divides the image 11 that has undergone steps S2 to S6 of FIG. 2 into a plurality of blocks Bij (herein called "DCT blocks"), each having a size of M pixels in the horizontal direction (x direction) and N pixels in the vertical direction (y direction), and performs a discrete cosine transform (DCT) on each block. Here, M and N are integers of 2 or more, and M = N = 8 in the example shown in FIG. 3. The DCT is an example of the "orthogonal transform" of the present disclosure, and the plurality of DCT blocks are an example of the "plurality of regions" of the present disclosure. Also, i and j are integers of 0 or more, and Bij denotes the DCT block that is i-th in the horizontal direction (x direction) and j-th in the vertical direction (y direction) of the image 11.
 DCTにより、画像11の各ブロックは、様々な周波数成分を有する基底画像の和として表される。図3の「DCT」の矢印の先のuv座標系には、各DCT係数F(u,v)に対応する64個の基底画像の例が示されている。例えば、ステップS7において、プロセッサ1は、次の式(1)により、各DCTブロックの画像データf(x,y)を、周波数成分を有する画像データ、すなわちDCT係数F(u,v)に変換する。変換基底はcosθである。u=v=0のときにおけるDCT係数F(0,0)を「直流成分」と呼び、u≠0かつv≠0のときにおけるDCT係数F(u,v)を「交流成分」と呼ぶ場合がある。なお式(1)においてCuおよびCvは定数項である。式(1)はあくまで一例であり、定数項の内容その他、具体的なパラメータの値は処理の目的に応じて任意に定めることができる。 Through the DCT, each block of the image 11 is represented as a sum of basis images having various frequency components. The uv coordinate system at the tip of the "DCT" arrow in FIG. 3 shows an example of the 64 basis images corresponding to the DCT coefficients F(u, v). For example, in step S7, the processor 1 transforms the image data f(x, y) of each DCT block into frequency-domain image data, that is, DCT coefficients F(u, v), by the following equation (1). The transform basis is cos θ. The DCT coefficient F(0, 0) for u = v = 0 may be called the "DC component", and the DCT coefficients F(u, v) for u ≠ 0 and v ≠ 0 may be called "AC components". In equation (1), Cu and Cv are constant terms. Equation (1) is merely an example, and the constant terms and other specific parameter values can be set arbitrarily according to the purpose of the processing.
F(u, v) = C_u C_v Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) cos[(2x+1)uπ/(2M)] cos[(2y+1)vπ/(2N)]   …(1)
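For an M = N block, the forward transform of step S7 and its inverse (the IDCT of step S9) can be sketched as below. The orthonormal convention Cu = Cv = sqrt(1/N) for the zero index and sqrt(2/N) otherwise is one plausible choice of the constant terms, which the text leaves open.

```python
import numpy as np

def dct2(block):
    """2-D DCT of an N x N block: F(u,v) = Cu Cv sum_x sum_y f(x,y)
    cos((2x+1)u*pi/2N) cos((2y+1)v*pi/2N), orthonormal convention."""
    n = block.shape[0]
    k = np.arange(n)
    # basis[u, x] = cos((2x + 1) * u * pi / (2N))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c = np.full(n, np.sqrt(2.0 / n))
    c[0] = np.sqrt(1.0 / n)
    return (c[:, None] * c[None, :]) * (basis @ block @ basis.T)

def idct2(coeffs):
    """Inverse transform (cf. the IDCT of step S9)."""
    n = coeffs.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c = np.full(n, np.sqrt(2.0 / n))
    c[0] = np.sqrt(1.0 / n)
    return basis.T @ ((c[:, None] * c[None, :]) * coeffs) @ basis
```

A flat 8x8 block transforms to a single DC component, and idct2(dct2(f)) recovers f up to floating-point rounding.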
 図2のステップS8において、プロセッサ1は、DCT係数強調処理を実行する。例えば、プロセッサ1は、DCTブロックのそれぞれにおけるDCT係数の各周波数成分F(u,v)のうち、特定の条件を満たすものに1を超える倍率を乗じる。倍率を乗じる代わりに、所定値を加算してもよい。DCT係数強調処理S8の詳細な手順については後述する。 At step S8 in FIG. 2, the processor 1 executes DCT coefficient enhancement processing. For example, the processor 1 multiplies, among the frequency components F(u, v) of the DCT coefficients of each DCT block, those satisfying a specific condition by a magnification greater than 1. Instead of multiplying by a magnification, a predetermined value may be added. The detailed procedure of the DCT coefficient enhancement processing S8 will be described later.
 図2のステップS9において、プロセッサ1は、DCT係数強調処理S8により強調されたDCT係数F(u,v)に対して逆離散コサイン変換(IDCT)を実行する。 At step S9 in FIG. 2, the processor 1 performs an inverse discrete cosine transform (IDCT) on the DCT coefficients F(u, v) enhanced by the DCT coefficient enhancement processing S8.
 図2のステップS10において、プロセッサ1は、リサイズ処理S6に対応する逆リサイズ処理を実行する。具体的には、IDCT処理S9の後の画像11は、リサイズ処理S6においてサイズ変更されたサイズを有するところ、逆リサイズ処理S10において、プロセッサ1は、リサイズ処理S6においてサイズ変更された画像を変更前のサイズに復元する。 At step S10 in FIG. 2, the processor 1 executes inverse resizing processing corresponding to the resizing processing S6. Specifically, since the image 11 after the IDCT processing S9 has the size set in the resizing processing S6, in the inverse resizing processing S10 the processor 1 restores the image resized in the resizing processing S6 to its original size.
[2-2.DCT係数強調処理]
 図4は、図2のDCT係数強調処理S8の詳細な手順を例示するフローチャートである。
[2-2. DCT coefficient enhancement processing]
FIG. 4 is a flowchart illustrating a detailed procedure of the DCT coefficient enhancement processing S8 of FIG.
 図4のステップS81において、プロセッサ1は、各DCTブロックにおいて、DCT係数を絶対値の順に群分けする。図5は、図4の群分け処理S81を説明するための模式図である。図5の右図は、図5の左図に示した画像11のDCTブロックB05のDCT係数の大きさの関係を示す模式図である。図5の右図における数字は、DCT係数の絶対値の大きさの順位を示す。一例を挙げると、図5の右図の第1行第2列に示したDCT係数F05(1,0)の絶対値は、DCTブロックB05のDCT係数の全周波数成分の絶対値の中で9番目に大きい。 At step S81 in FIG. 4, the processor 1 groups the DCT coefficients of each DCT block in order of absolute value. FIG. 5 is a schematic diagram for explaining the grouping process S81 of FIG. 4. The right diagram of FIG. 5 schematically shows the magnitude relationship of the DCT coefficients of the DCT block B05 of the image 11 shown in the left diagram of FIG. 5. The numbers in the right diagram of FIG. 5 indicate the rank of the absolute values of the DCT coefficients. For example, the absolute value of the DCT coefficient F05(1, 0) shown in the first row and second column of the right diagram of FIG. 5 is the ninth largest among the absolute values of all frequency components of the DCT coefficients of the DCT block B05.
 例えば、群分け処理S81において、プロセッサ1は、第2~4位の大きさを有するDCT係数を第1群に分類し、第5~8位の大きさを有するDCT係数を第2群に分類し、第9~16位の大きさを有するDCT係数を第3群に分類し、第17位以下の大きさを有するDCT係数を第4群に分類する。図5では、第1群を水平線ハッチングで示し、第2群をドットハッチングで示し、第3群を斜線ハッチングで示している。直流成分と第4群にはハッチングを付していない。また、第4群にも属さない、より小さなDCT係数の順序の記載は省略した。群分け処理S81により、DCTブロックごとに、第1群、第2群、・・・のように群分けが行われる。 For example, in the grouping process S81, the processor 1 classifies the DCT coefficients having the 2nd to 4th largest magnitudes into the first group, the DCT coefficients having the 5th to 8th largest magnitudes into the second group, the DCT coefficients having the 9th to 16th largest magnitudes into the third group, and the DCT coefficients having the 17th and smaller magnitudes into the fourth group. In FIG. 5, the first group is indicated by horizontal-line hatching, the second group by dot hatching, and the third group by diagonal hatching. The DC component and the fourth group are not hatched, and the ranks of still smaller DCT coefficients, which do not belong even to the fourth group, are omitted. Through the grouping process S81, the coefficients of each DCT block are divided into the first group, the second group, and so on.
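The grouping of step S81 can be sketched as below. Treating the rank-1 coefficient (typically the DC component) as ungrouped, and sending every rank from 17 downward into the fourth group, are assumptions made where the text is ambiguous.

```python
import numpy as np

def group_by_rank(coeffs):
    """Rank the block's DCT coefficients by |F(u,v)| (rank 1 = largest) and
    assign groups: ranks 2-4 -> 1, 5-8 -> 2, 9-16 -> 3, 17+ -> 4 (step S81).
    Returns an integer array of group labels, 0 meaning ungrouped."""
    flat = np.abs(coeffs).ravel()
    order = np.argsort(-flat)                 # indices sorted by descending |F|
    rank = np.empty(flat.size, dtype=int)
    rank[order] = np.arange(1, flat.size + 1)
    groups = np.full(flat.size, 4, dtype=int)
    groups[rank <= 16] = 3
    groups[rank <= 8] = 2
    groups[rank <= 4] = 1
    groups[rank == 1] = 0                     # rank 1 left ungrouped
    return groups.reshape(coeffs.shape)
```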
 図4のステップS82において、プロセッサ1は、全DCTブロックの全群の中から、DCT係数の絶対値が最も大きい群を決定する。具体的には、プロセッサ1は、まず全DCTブロックの各々について、第1群に属するDCT係数の絶対値の和rを求める。以下、求められた和rは、各DCTブロックを代表する「代表値」の一例である。その後プロセッサ1は、全DCTブロックについて求めた代表値rのうち、最も大きい最大代表値rmaxを決定する。 At step S82 of FIG. 4, the processor 1 determines, from among all groups of all DCT blocks, the group whose DCT coefficients have the largest absolute values. Specifically, the processor 1 first obtains, for each of the DCT blocks, the sum r of the absolute values of the DCT coefficients belonging to the first group. The sum r thus obtained is an example of a "representative value" representing each DCT block. The processor 1 then determines the largest of the representative values r obtained for all DCT blocks as the maximum representative value rmax.
 図6は、図4のステップS82を説明するための模式図である。図6の上図は、画像11の各DCTブロックの第1群における代表値rを示している。図6の上図に示した例では、最大代表値rmax=224である。 FIG. 6 is a schematic diagram for explaining step S82 of FIG. 4. The upper diagram of FIG. 6 shows the representative value r of the first group of each DCT block of the image 11. In the example shown in the upper diagram of FIG. 6, the maximum representative value rmax = 224.
 図4のステップS82において、プロセッサ1は、最大代表値rmaxに基づいて、各DCTブロックにおけるDCT係数を正規化する。正規化の一例として、プロセッサ1は、各DCTブロックにおけるDCT係数を最大代表値rmaxで除算する。図6の下図は、各DCTブロックの第1群における、DCT係数の絶対値の和の正規化値rを示している。 At step S82 of FIG. 4, processor 1 normalizes the DCT coefficients in each DCT block based on the maximum representative value r_max . As an example of normalization, processor 1 divides the DCT coefficients in each DCT block by the maximum representative value rmax. The lower diagram of FIG. 6 shows the normalized value rn of the sum of the absolute values of the DCT coefficients in the first group of each DCT block.
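The representative value and normalization of step S82 might be sketched like this, using the FIG. 6 example where rmax = 224. The `group_labels == 1` selection assumes a group-label array of the kind produced in step S81.

```python
import numpy as np

def representative(coeffs, group_labels):
    """r: sum of |F(u,v)| over the block's first-group coefficients."""
    return float(np.abs(coeffs)[group_labels == 1].sum())

def normalize(reps):
    """Divide every block's representative value by the largest one,
    rmax, yielding the normalized values rn of FIG. 6 (bottom)."""
    r = np.asarray(reps, dtype=float)
    return r / r.max()
```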
 図4のステップS83において、プロセッサ1は、各DCTブロックの各群において、正規化処理S82において算出された正規化値rnに応じた倍率mを決定する。例えば、プロセッサ1は、図6の下図に示した正規化値rnの大きさに応じて、対応するDCTブロックにおける第1群に対する倍率m1、第2群に対する倍率m2、第3群に対する倍率m3、及び第4群に対する倍率m4を決定する。
At step S83 of FIG. 4, the processor 1 determines, for each group of each DCT block, a magnification m according to the normalized value rn calculated in the normalization processing S82. For example, according to the magnitude of the normalized value rn shown in the lower diagram of FIG. 6, the processor 1 determines, for the corresponding DCT block, the magnification m1 for the first group, the magnification m2 for the second group, the magnification m3 for the third group, and the magnification m4 for the fourth group.
 図7は、上記のような倍率m1~m4を正規化値rnの関数として例示するグラフである。図7の実線はm1を、一点鎖線はm2を、破線はm3を、点線はm4を示している。図7のグラフに示したような正規化値rnと倍率m1~m4との関係は、テーブルとして記憶装置2に予め格納されてもよい。この場合、プロセッサ1は、記憶装置2に格納された正規化値rnと倍率m1~m4との関係を参照して、正規化値rnの大きさに応じた倍率m1~m4を決定する。 FIG. 7 is a graph illustrating the magnifications m1 to m4 as functions of the normalized value rn. In FIG. 7, the solid line indicates m1, the dash-dot line indicates m2, the dashed line indicates m3, and the dotted line indicates m4. The relationship between the normalized value rn and the magnifications m1 to m4 shown in the graph of FIG. 7 may be stored in advance in the storage device 2 as a table. In this case, the processor 1 refers to this stored relationship to determine the magnifications m1 to m4 according to the magnitude of the normalized value rn.
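A table-driven determination of the per-group magnifications (step S83) could be sketched as follows. The numeric table is invented for illustration; it only mimics the qualitative shape of FIG. 7 (m1 and m2 at or above 1 with peaks at an interior rn, m3 and m4 at or below 1).

```python
import numpy as np

# Hypothetical sampled version of the FIG. 7 curves:
# rows are m1..m4 tabulated at the rn grid points below.
RN_GRID = [0.0, 0.4, 1.0]
M_TABLE = [
    [1.0, 2.0, 1.5],   # m1: peak at an interior rn
    [1.0, 1.5, 1.2],   # m2
    [1.0, 0.9, 0.8],   # m3: attenuation
    [0.9, 0.8, 0.7],   # m4
]

def magnifications(rn):
    """Linearly interpolate m1..m4 for a block's normalized value rn."""
    return [float(np.interp(rn, RN_GRID, row)) for row in M_TABLE]
```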
 図4のステップS84において、プロセッサ1は、ステップS83で決定された倍率を対応するDCT係数に乗算する。 At step S84 in FIG. 4, the processor 1 multiplies the corresponding DCT coefficients by the scaling factors determined at step S83.
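The multiplication of step S84 then applies the chosen magnification to every coefficient according to its group. This sketch assumes a group-label array of the kind produced in step S81 and a list m of per-group magnifications; leaving label 0 (e.g. the DC component) untouched is an assumption.

```python
import numpy as np

def emphasize(coeffs, group_labels, m):
    """Multiply each DCT coefficient by the magnification of its group
    (step S84). m[k-1] applies to group k; label 0 is left unchanged."""
    out = coeffs.astype(float).copy()
    for k in (1, 2, 3, 4):
        out[group_labels == k] *= m[k - 1]
    return out
```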
 図7に示した例では、第1群に対する倍率m1及び第2群に対する倍率m2は、1以上に設定され、第3群に対する倍率m3及び第4群に対する倍率m4は、1以下に設定されている。例えば、m1>m2であり、m3>m4である。このように、画像処理装置100は、画像11の構成に大きく寄与する第1群のDCT係数に対して最大の倍率m1を乗算し、第1群ほどではないが画像11の構成に寄与する第2群のDCT係数に対して倍率m2を乗算する。一方、画像処理装置100は、画像11の構成に対する寄与が比較的小さい第3群、第4群のDCT係数に対しては、画像11の構成に寄与しないノイズとみなすことができるため、1以下の倍率を乗算して減衰させる。また、第1群に対する倍率m1及び第2群に対する倍率m2は、正規化値rnが0より大きく1未満の範囲において最大値を取る。このようにすると、周波数成分が比較的少ないブロックの倍率が最大になる。そのため、深度のそれぞれ異なる位置に存在する対象物をそれぞれ類似した外見に補正することができる。 In the example shown in FIG. 7, the magnification m1 for the first group and the magnification m2 for the second group are set to 1 or more, and the magnification m3 for the third group and the magnification m4 for the fourth group are set to 1 or less. For example, m1 > m2 and m3 > m4. In this way, the image processing apparatus 100 multiplies the DCT coefficients of the first group, which contribute most to the composition of the image 11, by the largest magnification m1, and multiplies the DCT coefficients of the second group, which contribute to the composition of the image 11 to a lesser extent, by the magnification m2. On the other hand, the DCT coefficients of the third and fourth groups, whose contribution to the composition of the image 11 is comparatively small, can be regarded as noise that does not contribute to the composition of the image 11, so the image processing apparatus 100 attenuates them by multiplying them by magnifications of 1 or less. In addition, the magnification m1 for the first group and the magnification m2 for the second group take their maximum values in the range where the normalized value rn is greater than 0 and less than 1. In this way, the magnification becomes largest for blocks having relatively few frequency components, so that objects located at different depths can be corrected to similar appearances.
 このように、画像処理装置100は、元画像データの構成に対する寄与の程度に応じて倍率を変化させることにより、出力する画像データにおいて、対象物を鮮明化することができる。例えば、生体を撮像した元画像データにおいては体表面の近くにある血管に比べて、深部にある血管が不鮮明である場合であっても、画像処理装置100によれば、出力される画像データにおいて、深部にある血管又は両血管を鮮明化することができる。 In this way, by varying the magnification according to the degree of contribution to the composition of the original image data, the image processing apparatus 100 can sharpen the object in the output image data. For example, even when, in original image data obtained by imaging a living body, deep blood vessels are less clear than blood vessels near the body surface, the image processing apparatus 100 can sharpen the deep blood vessels, or both, in the output image data.
[3.効果等]
 以上のように、画像処理装置100において、プロセッサ1は、画像11を複数のDCTブロックに分割し、複数のDCTブロックのそれぞれに対してDCTを実行する。プロセッサ1は、複数のDCTブロックのそれぞれについて算出された複数のDCT係数を用いて、各DCTブロックを代表する代表値を算出し、算出した少なくとも1つ以上の代表値に基づいて算出した値と、複数のDCT係数とに基づいて画像11の鮮明度を調整する。
[3. Effects, etc.]
As described above, in the image processing apparatus 100, the processor 1 divides the image 11 into a plurality of DCT blocks and performs the DCT on each of the plurality of DCT blocks. Using the plurality of DCT coefficients calculated for each of the plurality of DCT blocks, the processor 1 calculates a representative value representing each DCT block, and adjusts the sharpness of the image 11 based on a value calculated from at least one of the calculated representative values and on the plurality of DCT coefficients.
 この構成により、画像処理装置100は、画像11において特定の要素を他の要素に比べて強調することができる。 With this configuration, the image processing apparatus 100 can emphasize specific elements in the image 11 compared to other elements.
 プロセッサ1は、複数のDCT係数を、絶対値に基づいて複数の群に分類し、分類された群毎に、算出した少なくとも1つ以上の代表値に基づいて算出した値と、複数のDCT係数とに基づいて画像11の鮮明度を調整してもよい。例えば、プロセッサ1は、分類された群のそれぞれにおける複数のDCT係数の各絶対値に対して、互いに異なる倍率を乗じてもよい。 The processor 1 may classify the plurality of DCT coefficients into a plurality of groups based on their absolute values and, for each classified group, adjust the sharpness of the image 11 based on a value calculated from at least one of the calculated representative values and on the plurality of DCT coefficients. For example, the processor 1 may multiply the absolute values of the DCT coefficients in each of the classified groups by mutually different magnifications.
 この構成により、群毎に倍率を決定し、各群が画像11の構成に寄与する程度に応じて、DCTブロックの各DCT係数を強調することができる。 With this configuration, it is possible to determine the magnification for each group and emphasize each DCT coefficient of the DCT block according to the extent to which each group contributes to the configuration of the image 11 .
 プロセッサ1は、複数の群のそれぞれにおけるDCT係数の絶対値に基づいて、複数の群のそれぞれにおける倍率を決定してもよい。 The processor 1 may determine the scaling factor in each of the multiple groups based on the absolute values of the DCT coefficients in each of the multiple groups.
 この構成により、各群が画像11の構成に寄与する程度に応じて、DCTブロックの各DCT係数を強調することができる。 With this configuration, each DCT coefficient of the DCT block can be emphasized according to the extent to which each group contributes to the configuration of the image 11.
 プロセッサ1は、代表値の中から所定の代表値を選択し、各領域を代表する代表値を、前記所定の代表値に基づいて正規化し、正規化された代表値に代表されるDCTブロックに含まれるDCT係数と、当該正規化された代表値に応じた値とに基づいて、前記画像の鮮明度を調整してもよい。 The processor 1 may select a predetermined representative value from among the representative values, normalize the representative value representing each region based on the predetermined representative value, and adjust the sharpness of the image based on the DCT coefficients included in the DCT block represented by the normalized representative value and on a value corresponding to the normalized representative value.
 この構成により、様々な画像11に対応可能な倍率を設定することができる。 With this configuration, it is possible to set magnifications that can correspond to various images 11 .
 具体的には、プロセッサ1は、複数のDCT係数を、絶対値の大きいものから順に複数の群に分類してもよい(ステップS81)。この場合において、プロセッサ1は、複数の群のうち複数のDCT係数の絶対値が最も大きい群におけるDCT係数の絶対値の和を、最大代表値rmaxとして算出し、算出された最大代表値rmaxに基づいて、複数のDCTブロックのそれぞれにおけるDCT係数を正規化する(ステップS82)。プロセッサ1は、正規化されたDCT係数に対して、その絶対値に応じた倍率を乗じる(ステップS83,S84)。 Specifically, the processor 1 may classify the plurality of DCT coefficients into a plurality of groups in descending order of absolute value (step S81). In this case, the processor 1 calculates, as the maximum representative value rmax, the sum of the absolute values of the DCT coefficients in the group whose DCT coefficients have the largest absolute values among the plurality of groups, and normalizes the DCT coefficients in each of the plurality of DCT blocks based on the calculated maximum representative value rmax (step S82). The processor 1 multiplies the normalized DCT coefficients by magnifications according to their absolute values (steps S83 and S84).
 画像処理装置100は、画像11に映る対象物を鮮明化する画像鮮明化処理を実行するものであってもよい。このような画像処理装置100において、プロセッサ1は、画像11を複数のDCTブロックに分割する前に、対象物の幅が各DCTブロック内に包含される幅となるように、画像11のサイズ及び複数のDCTブロックのサイズのうちの少なくとも一方を調整する。 The image processing apparatus 100 may execute image sharpening processing that sharpens an object appearing in the image 11. In such an image processing apparatus 100, before dividing the image 11 into a plurality of DCT blocks, the processor 1 adjusts at least one of the size of the image 11 and the size of the plurality of DCT blocks so that the width of the object is contained within each DCT block.
 この構成により、対象物の幅がDCTブロック内に収まるため、DCTブロック内において対象物を鮮明化することができる。 With this configuration, the width of the object fits within the DCT block, so the object can be sharpened within the DCT block.
(他の実施形態)
 以上のように、本出願において開示する技術の例示として、上記実施形態を説明した。しかしながら、本開示における技術は、これに限定されず、適宜、変更、置き換え、付加、省略などを行った実施形態にも適用可能である。そこで、以下、他の実施形態を例示する。
(Other embodiments)
As described above, the above embodiments have been described as examples of the technology disclosed in the present application. However, the technology in the present disclosure is not limited to this, and can also be applied to embodiments in which modifications, replacements, additions, omissions, etc. are made as appropriate. Therefore, other embodiments will be exemplified below.
 上記実施形態では、DCT係数強調処理S8において、プロセッサ1が、群分け処理S81を実行した後、正規化処理S82を経て倍率mを決定する(ステップS83)例を説明した。しかしながら、本開示はこれに限定されず、プロセッサ1は、群分け処理S81を実行した後、DCTブロックの代表値rが所定の閾値より大きい場合に、1より大きい倍率を乗算するなどして当該DCT係数の絶対値を増加させてもよい。この構成によれば、画像処理装置100は、画像11において、DCTブロックの複数のDCT係数のうち、絶対値が比較的大きいもののみを強調することができる。 In the above embodiment, an example was described in which, in the DCT coefficient enhancement processing S8, the processor 1 executes the grouping process S81 and then determines the magnification m through the normalization process S82 (step S83). However, the present disclosure is not limited to this; after executing the grouping process S81, when the representative value r of a DCT block is greater than a predetermined threshold, the processor 1 may increase the absolute values of the DCT coefficients of that block, for example by multiplying them by a magnification greater than 1. With this configuration, the image processing apparatus 100 can emphasize, in the image 11, only those DCT coefficients of a DCT block whose absolute values are comparatively large.
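The threshold-based variant described here admits a very short sketch; the factor of 1.5 is an arbitrary illustrative value, not one given in the text.

```python
import numpy as np

def emphasize_if_large(coeffs, r, threshold, factor=1.5):
    """Alternative to S82/S83: scale up a block's coefficients only when
    its representative value r exceeds a predetermined threshold."""
    return coeffs * factor if r > threshold else np.asarray(coeffs)
```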
 上記実施形態では、DCT係数強調処理S8において、プロセッサ1が倍率mを決定する(ステップS83)例を説明した。しかしながら、本開示はこれに限定されず、プロセッサ1が群分け処理S81を実行した後、図4のステップS82,S83に代えて、ユーザが手動で倍率mを決定してもよい。ステップS84において、プロセッサ1は、ユーザによって決定された倍率mを対応するDCT係数に乗算する。例えば、ユーザは、グラフィカルユーザインタフェース(GUI)を用いて、倍率mを決定する。ユーザは、倍率mの乗算処理S84及びIDCT処理S9を経た画像11を、ディスプレイ12等を用いて確認しながら、GUIを用いて倍率mを調整してもよい。 In the above embodiment, an example was explained in which the processor 1 determines the magnification m (step S83) in the DCT coefficient enhancement processing S8. However, the present disclosure is not limited to this, and after the processor 1 executes the grouping process S81, instead of steps S82 and S83 in FIG. 4, the user may manually determine the magnification m. At step S84, processor 1 multiplies the corresponding DCT coefficients by the user-determined scaling factor m. For example, a user determines the scale factor m using a graphical user interface (GUI). The user may adjust the magnification m using the GUI while checking the image 11 that has undergone the multiplication processing S84 of the magnification m and the IDCT processing S9 using the display 12 or the like.
 上記実施形態では、遮蔽物に覆われた対象物を特定波長を有する光を用いて撮像して生成された画像データ11について説明した。しかしながら、画像処理装置100が処理可能な画像データ11は、光学的に撮像されたものに限定されず、超音波画像、磁気共鳴画像(Magnetic Resonance Imaging、MRI)等であってもよい。 In the above embodiment, image data 11 generated by imaging an object covered with a shield using light having a specific wavelength has been described. However, the image data 11 that can be processed by the image processing apparatus 100 is not limited to optically captured data, and may be ultrasound images, magnetic resonance imaging (MRI), or the like.
 上記実施形態では、直交変換の一例として、DCTについて説明したが、直交変換はこれに限定されず、例えば、離散フーリエ変換(Discrete Fourier Transform、DFT)であってもよい。DFTは、高速フーリエ変換(Fast Fourier Transform、FFT)を含む。 In the above embodiment, DCT was described as an example of orthogonal transform, but orthogonal transform is not limited to this, and may be discrete Fourier transform (DFT), for example. DFT includes Fast Fourier Transform (FFT).
 本開示は、画像処理技術に適用可能である。 The present disclosure is applicable to image processing technology.
 1 プロセッサ
 2 記憶装置
 3 入力インタフェース
 4 出力インタフェース
 11 画像データ
 12 ディスプレイ
 13 ネットワーク
 21 プログラム
 100 画像処理装置
1 Processor
2 Storage Device
3 Input Interface
4 Output Interface
11 Image Data
12 Display
13 Network
21 Program
100 Image Processing Apparatus

Claims (12)

  1.  プロセッサと、画像のデータを記憶した記憶装置とを備える画像処理装置であって、
     前記プロセッサは、
     前記画像を複数の領域に分割し、
     前記複数の領域のそれぞれに対して直交変換を実行し、
     前記複数の領域のそれぞれについて算出された複数の直交変換係数を用いて、各領域を代表する代表値を算出し、
     前記算出した少なくとも1つ以上の代表値に基づいて算出した値と、前記複数の直交変換係数とに基づいて前記画像の鮮明度を調整する、
    画像処理装置。
    An image processing device comprising a processor and a storage device storing image data,
    The processor
    dividing the image into a plurality of regions;
    performing an orthogonal transform on each of the plurality of regions;
    calculating a representative value representing each region using a plurality of orthogonal transform coefficients calculated for each of the plurality of regions;
    adjusting the sharpness of the image based on the values calculated based on the calculated at least one or more representative values and the plurality of orthogonal transform coefficients;
    Image processing device.
  2.  前記プロセッサは、前記複数の領域のそれぞれにおける前記代表値が所定の閾値より大きい場合に、前記複数の直交変換係数の絶対値を増加させる、請求項1に記載の画像処理装置。 The image processing device according to claim 1, wherein said processor increases absolute values of said plurality of orthogonal transform coefficients when said representative value in each of said plurality of regions is larger than a predetermined threshold.
  3.  前記プロセッサは、
     前記複数の直交変換係数を、絶対値に基づいて複数の群に分類し、
     前記分類された群毎に、前記算出した少なくとも1つ以上の代表値に基づいて算出した値と、前記複数の直交変換係数とに基づいて前記画像の鮮明度を調整する、
     請求項1に記載の画像処理装置。
    The processor
    classifying the plurality of orthogonal transform coefficients into a plurality of groups based on absolute values;
    adjusting the sharpness of the image based on the values calculated based on the calculated at least one or more representative values and the plurality of orthogonal transform coefficients for each of the classified groups;
    The image processing apparatus according to claim 1.
  4.  前記プロセッサは、
     前記代表値の中から所定の代表値を選択し、
     前記各領域を代表する代表値を、前記所定の代表値に基づいて正規化し、
     正規化された前記代表値に代表される前記領域に含まれる前記直交変換係数と、当該正規化された代表値に応じた値とに基づいて、前記画像の鮮明度を調整する、
     請求項1に記載の画像処理装置。
    The processor
    selecting a predetermined representative value from the representative values;
    normalizing the representative value representing each region based on the predetermined representative value;
    Adjusting the sharpness of the image based on the orthogonal transform coefficients included in the region represented by the normalized representative value and a value corresponding to the normalized representative value;
    The image processing apparatus according to claim 1.
  5.  前記プロセッサは、
     前記複数の直交変換係数を、絶対値の大きさに応じて順に複数の群に分類し、
     前記複数の群のうち前記複数の直交変換係数の絶対値が最も大きい群における前記複数の直交変換係数の絶対値の和を、前記代表値として算出し、
     算出された前記代表値に基づいて、前記複数の領域のそれぞれにおける前記直交変換係数を正規化し、
     正規化された前記複数の直交変換係数に対して、その絶対値に応じた倍率を乗じる、
     請求項1に記載の画像処理装置。
    The processor
    classifying the plurality of orthogonal transform coefficients into a plurality of groups in order according to the magnitude of the absolute value;
    calculating, as the representative value, the sum of the absolute values of the plurality of orthogonal transform coefficients in the group having the largest absolute value of the plurality of orthogonal transform coefficients among the plurality of groups;
    normalizing the orthogonal transform coefficients in each of the plurality of regions based on the calculated representative value;
    multiplying the plurality of normalized orthogonal transform coefficients by a scale factor according to their absolute values;
    The image processing apparatus according to claim 1.
  6.  前記画像処理装置は、前記画像に映る対象物を鮮明化する画像鮮明化処理を実行し、
     前記プロセッサは、前記画像を複数の領域に分割する前に、前記対象物の幅が各領域内に包含されるように、前記画像のサイズ及び前記複数の領域のサイズのうちの少なくとも一方を調整する、
     請求項1に記載の画像処理装置。
    The image processing device executes an image sharpening process for sharpening an object appearing in the image,
    The processor adjusts at least one of the size of the image and the sizes of the plurality of regions so that the width of the object is contained within each region, before dividing the image into the plurality of regions,
    The image processing apparatus according to claim 1.
  7.  プロセッサによって、記憶装置に記憶された画像のデータに対して画像処理を実行する画像処理方法であって、
     前記画像を複数の領域に分割し、
     前記複数の領域のそれぞれに対して直交変換を実行し、
     前記複数の領域のそれぞれについて算出された複数の直交変換係数を用いて、各領域を代表する代表値を算出し、
     前記算出した少なくとも1つ以上の代表値に基づいて算出した値と、前記複数の直交変換係数とに基づいて前記画像の鮮明度を調整することを含む、
    画像処理方法。
    An image processing method for performing image processing on image data stored in a storage device by a processor,
    dividing the image into a plurality of regions;
    performing an orthogonal transform on each of the plurality of regions;
    calculating a representative value representing each region using a plurality of orthogonal transform coefficients calculated for each of the plurality of regions;
    adjusting the sharpness of the image based on a value calculated based on the calculated at least one or more representative values and on the plurality of orthogonal transform coefficients.
    Image processing method.
  8.  前記複数の領域のそれぞれにおける前記代表値が所定の閾値より大きい場合に、前記複数の直交変換係数の絶対値を増加させることを含む、請求項7に記載の画像処理方法。 The image processing method according to claim 7, comprising increasing absolute values of the plurality of orthogonal transform coefficients when the representative value in each of the plurality of regions is greater than a predetermined threshold.
  9.  前記複数の直交変換係数を、絶対値に基づいて複数の群に分類し、
     前記分類された群毎に、前記算出した少なくとも1つ以上の代表値に基づいて算出した値と、前記複数の直交変換係数とに基づいて前記画像の鮮明度を調整することを含む、
     請求項7に記載の画像処理方法。
    classifying the plurality of orthogonal transform coefficients into a plurality of groups based on absolute values;
    Adjusting the sharpness of the image based on the values calculated based on the calculated at least one or more representative values and the plurality of orthogonal transform coefficients for each of the classified groups,
    The image processing method according to claim 7.
  10.  前記代表値の中から所定の代表値を選択し、
     前記各領域を代表する代表値を、前記所定の代表値に基づいて正規化し、
     正規化された前記代表値に代表される前記領域に含まれる前記直交変換係数と、当該正規化された代表値に応じた値とに基づいて、前記画像の鮮明度を調整することを含む、
     請求項7に記載の画像処理方法。
    selecting a predetermined representative value from the representative values;
    normalizing the representative value representing each region based on the predetermined representative value;
    Adjusting the sharpness of the image based on the orthogonal transform coefficients included in the region represented by the normalized representative value and the value corresponding to the normalized representative value,
    The image processing method according to claim 7.
  11.  前記複数の直交変換係数を、絶対値の大きさに応じて順に複数の群に分類し、
     前記複数の群のうち前記複数の直交変換係数の絶対値が最も大きい群における前記複数の直交変換係数の絶対値の和を、前記代表値として算出し、
     算出された前記代表値に基づいて、前記複数の領域のそれぞれにおける前記直交変換係数を正規化し、
     正規化された前記複数の直交変換係数に対して、その絶対値に応じた倍率を乗じることを含む、
     請求項7に記載の画像処理方法。
    classifying the plurality of orthogonal transform coefficients into a plurality of groups in order according to the magnitude of the absolute value;
    calculating, as the representative value, the sum of the absolute values of the plurality of orthogonal transform coefficients in the group having the largest absolute value of the plurality of orthogonal transform coefficients among the plurality of groups;
    normalizing the orthogonal transform coefficients in each of the plurality of regions based on the calculated representative value;
    Multiplying the plurality of normalized orthogonal transform coefficients by a scale factor according to their absolute values,
    The image processing method according to claim 7.
  12.  The image processing method is an image sharpening method for sharpening an object appearing in the image, and includes,
     before dividing the image into the plurality of regions, adjusting at least one of the size of the image and the size of the plurality of regions so that the width of the object is contained within each region,
     The image processing method according to claim 7.
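Claim 12 allows either resizing the image or resizing the regions so the object's width fits inside one region. Two small helper sketches, one per option (the candidate region sizes are assumed typical transform-block sizes; the claim does not name any):

```python
def choose_region_size(object_width_px, candidates=(8, 16, 32, 64)):
    """Option A: pick the smallest candidate region size that fully
    contains the object's width (candidate sizes are an assumption)."""
    for size in candidates:
        if size >= object_width_px:
            return size
    return candidates[-1]

def image_scale_for_region(object_width_px, region_size=8):
    """Option B: down-scale the image so the object's width fits into a
    fixed region size; never up-scale if it already fits."""
    if object_width_px <= region_size:
        return 1.0
    return region_size / object_width_px
```

For example, an object 10 px wide would get 16-px regions under option A, or the image would be scaled by 0.8 to fit 8-px regions under option B.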
PCT/JP2022/022826 2021-06-21 2022-06-06 Image processing device, and image processing method WO2022270291A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-102716 2021-06-21
JP2021102716A JP2023001783A (en) 2021-06-21 2021-06-21 Image processing device and image processing method

Publications (1)

Publication Number Publication Date
WO2022270291A1 true WO2022270291A1 (en) 2022-12-29

Family

ID=84544548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/022826 WO2022270291A1 (en) 2021-06-21 2022-06-06 Image processing device, and image processing method

Country Status (2)

Country Link
JP (1) JP2023001783A (en)
WO (1) WO2022270291A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004201290A (en) * 2002-12-18 2004-07-15 Sharp Corp Image processing method, image processing device, image processing program, and record medium
JP2007087190A (en) * 2005-09-22 2007-04-05 Fuji Xerox Co Ltd Image processing device and program
JP2018023602A (en) * 2016-08-10 2018-02-15 大日本印刷株式会社 Fundus image processing device

Also Published As

Publication number Publication date
JP2023001783A (en) 2023-01-06

Similar Documents

Publication Publication Date Title
CN108921800B (en) Non-local mean denoising method based on shape self-adaptive search window
Kim et al. Image contrast enhancement using entropy scaling in wavelet domain
Sakellaropoulos et al. A wavelet-based spatially adaptive method for mammographic contrast enhancement
Lai et al. Improved local histogram equalization with gradient-based weighting process for edge preservation
Kanwal et al. Region based adaptive contrast enhancement of medical X-ray images
Hossain et al. Medical image enhancement based on nonlinear technique and logarithmic transform coefficient histogram matching
Khan et al. Contrast enhancement of low-contrast medical images using modified contrast limited adaptive histogram equalization
CN115578284A (en) Multi-scene image enhancement method and system
Ittannavar et al. Comparative study of mammogram enhancement techniques for early detection of breast cancer
Rao et al. Retinex-centered contrast enhancement method for histopathology images with weighted CLAHE
Al-Ameen Contrast Enhancement of Digital Images Using an Improved Type-II Fuzzy Set-Based Algorithm.
CN108961179B (en) Medical image post-processing system and using method thereof
Kumar et al. Detection of microcalcification using the wavelet based adaptive sigmoid function and neural network
Saravanan et al. A fuzzy and spline based dynamic histogram equalization for contrast enhancement of brain images
WO2022270291A1 (en) Image processing device, and image processing method
JP2005530261A (en) Improvements in or related to image processing
CN114332255B (en) Medical image processing method and device
CN116563166A (en) Image enhancement method, device, storage medium and equipment
Omarova et al. Application of the Clahe method contrast enhancement of X-Ray Images
Rao et al. An effective CT medical image enhancement system based on DT-CWT and adaptable morphology
Kumar et al. Automatic tissue attenuation-based contrast enhancement of low-dynamic X-Ray images
Harine et al. Fundus Image Enhancement Using Hybrid Deep Learning Approaches
Jung et al. Sharpening dermatological color images in the wavelet domain
Teh et al. Contrast enhancement of CT brain images using gamma correction adaptive extreme-level eliminating with weighting distribution
Storozhilova et al. 2.5 D extension of neighborhood filters for noise reduction in 3D medical CT images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22828208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE