US20180309933A1 - Image processing device, image processing method, program, and endoscope device - Google Patents
- Publication number: US20180309933A1 (U.S. application Ser. No. 16/020,756)
- Authority: US (United States)
- Prior art keywords: frame, ordinary, unit, special, frames
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 5/00, 5/73 — Image enhancement or restoration; deblurring, sharpening
- G06T 7/00, 7/20 — Image analysis; analysis of motion
- G06V 10/141 — Image acquisition; optical characteristics of the device or the illumination; control of illumination
- G06V 10/143 — Image acquisition; sensing or illuminating at different wavelengths
- H04N 23/68, 23/682, 23/683 — Control of cameras or camera modules for stable pick-up of the scene; vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
- H04N 23/555 — Constructional details for picking-up images in sites inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes
- G06T 2207/10016 — Image acquisition modality: video; image sequence
- G06T 2207/20201 — Motion blur correction
- H04N 5/23267; G06K 9/2018; G06K 9/2027; G06T 5/003; H04N 2005/2255
Definitions
- When the reference imaging timing is the Tb illustrated in FIG. 2, that is, when a special frame is imaged, the frame memory 31 does not perform output. The ordinary frame from the frame memory 32 that is one frame prior to the reference and the ordinary frame from the switching unit 21 that is one frame after the reference are input to the frame selection unit 33, and the motion vectors are detected from these two ordinary frames. Since the two frames are two frame periods apart, the vector correction unit 35 multiplies each of the vertical and horizontal components of the detected motion vectors by ½.
- When the reference imaging timing is the Tc illustrated in FIG. 2, the reference ordinary frame from the frame memory 31 and the ordinary frame from the switching unit 21 that is one frame after the reference are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames. In this case, the vector correction unit 35 multiplies each of the vertical and horizontal components of the detected motion vectors by −1.
- When the reference imaging timing is the Td or the Te illustrated in FIG. 2, the ordinary frame from the frame memory 32 that is one frame prior to the reference, the reference ordinary frame from the frame memory 31, and the ordinary frame from the switching unit 21 that is one frame after the reference are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames from the frame memories 31 and 32. In these cases, the vector correction unit 35 does not perform the motion vector correction.
- The motion vectors that are corrected as described above are output from the vector correction unit 35 to the correction amount estimation unit 23 of the subsequent stage. A sketch of this timing-dependent correction is given below.
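- The following is a minimal sketch of the timing-dependent vector correction just described; the function and the timing labels passed as strings are illustrative, not part of the patent.

```python
def correct_motion_vector(vx, vy, timing):
    """Scale a block-matched motion vector according to the imaging timing.

    At Tb the two matched ordinary frames are two frame periods apart, so the
    vector is halved; at Tc the match runs from the reference to the following
    frame, so the sign is inverted; at Ta, Td, and Te no correction is applied.
    """
    if timing == "Tb":
        return vx / 2.0, vy / 2.0
    if timing == "Tc":
        return -vx, -vy
    return vx, vy  # Ta, Td, Te
```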
- FIG. 5 is a flowchart illustrating an image compositing process. The image compositing process is executed for each frame period.
- In step S1, the switching unit 21 determines whether or not the input from the developing unit 13 is a special frame based on the irradiation identification signal. When the input is a special frame, the special frame is output to the frame memory 24; conversely, when the input is not a special frame (is an ordinary frame), the switching unit 21 outputs the ordinary frame to the motion vector detection unit 22 and the compositing unit 27.
- In step S2, the frame memory 24 supplies the special frame that is held until that point to the differential filter processing unit 25 and the motion correction unit 26. Note that the frame memory 24 updates the held special frame when a new special frame is input thereto from the switching unit 21.
- In step S3, the differential filter processing unit 25 generates a differential filter frame, in which edge portions in the image are emphasized, by subjecting the special frame that is supplied thereto from the frame memory 24 to a differential filter process (for example, the Sobel filter process illustrated in the following equation (1)), and outputs the differential filter frame to the motion correction unit 26.
- Equation (1), applied at each pixel (x, y):
  SobelR(x, y) = SobelRh(x, y) + SobelRv(x, y)
  SobelG(x, y) = SobelGh(x, y) + SobelGv(x, y)
  SobelB(x, y) = SobelBh(x, y) + SobelBv(x, y)   (1)
- Here, SobelRh and SobelRv (and likewise for G and B) denote the outputs of the horizontal and vertical Sobel filters, and R, G, and B in equation (1) respectively correspond to levels in the R, G, and B planes of the special frame.
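- For concreteness, below is a minimal sketch of the per-channel process of equation (1). The standard 3×3 Sobel kernels are an assumption (the text only names "the Sobel filter process"), and the function names are illustrative.

```python
import numpy as np

KH = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)  # horizontal Sobel kernel
KV = KH.T                                                              # vertical Sobel kernel

def filter2d(plane, kernel):
    """Naive 3x3 filtering with zero padding, kept dependency-free."""
    h, w = plane.shape
    padded = np.pad(plane.astype(np.float32), 1)
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def sobel_frame(special_rgb):
    """Equation (1): SobelX = SobelXh + SobelXv for each plane X in {R, G, B}."""
    return np.stack(
        [filter2d(special_rgb[..., c], KH) + filter2d(special_rgb[..., c], KV)
         for c in range(3)],
        axis=-1,
    )
```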
- In step S4, the motion vector detection unit 22 detects the motion vectors using two ordinary frames with different imaging timing, and outputs the motion vectors to the correction amount estimation unit 23.
- In step S5, the correction amount estimation unit 23 determines whether or not the detected motion vectors are less than or equal to a predetermined threshold. When they are less than or equal to the threshold, the process proceeds to step S6 in order to use the motion vectors in the motion correction; when they are greater than the threshold, the motion vectors are not used in the motion correction, and the image compositing process that corresponds to the present imaging timing ends.
- In step S6, the correction amount estimation unit 23 estimates the motion correction amounts of the special frame and the differential filter frame based on the motion vectors that are detected by the motion vector detection unit 22, and outputs the estimated motion correction amounts to the motion correction unit 26.
- In equation (2), the motion correction amounts Hx and Hy are computed from Vx and Vy, the motion vectors that are detected and corrected: in effect, the motion detected for each frame period is accumulated from the imaging timing of the special frame held in the frame memory 24 up to the present imaging timing.
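- Since the body of equation (2) is not reproduced here, the following sketch assumes the accumulation reading described above; the function name is illustrative.

```python
def correction_amounts(vectors_since_special):
    """vectors_since_special: (Vx, Vy) pairs detected (and corrected) for each
    frame period since the held special frame was imaged."""
    hx = sum(vx for vx, _ in vectors_since_special)
    hy = sum(vy for _, vy in vectors_since_special)
    return hx, hy
```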
- As another motion correction amount estimation method, the correction amount estimation unit 23 can also correct the dispersion in the motion vectors and subsequently estimate the motion correction amounts based on the series of motion vectors, as described hereinafter.
- FIG. 6 is a diagram illustrating a process flow in which the dispersion in the motion vectors is corrected and the motion correction amounts are subsequently estimated based on the series of motion vectors, and FIG. 7 illustrates an impression of correcting the dispersion in the motion vectors based on the series of motion vectors.
- The motion vectors (V′x,t, V′y,t) in relation to the imaging timing t are estimated as illustrated in the following equation (3):
  V′x,t = ax·t³ + bx·t² + cx·t + dx
  V′y,t = ay·t³ + by·t² + cy·t + dy   (3)
- The coefficients (ax, bx, cx, dx) and (ay, by, cy, dy) in equation (3) can be calculated by the least squares method using the detected motion vectors (Vx,1, Vy,1), …, (Vx,t, Vy,t). The motion correction amounts Hx and Hy are then computed using equation (4), which is obtained by substituting the motion vectors of equation (2) with the estimated motion vectors (V′x,t, V′y,t). A sketch of the fit follows.
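- Below is a minimal sketch of the equation (3) fit, assuming at least four (t, V) samples are available for the cubic least squares fit; the function name is illustrative.

```python
import numpy as np

def estimate_smoothed_vector(times, vx_series, vy_series, t_now):
    """Fit V'x,t and V'y,t as cubic polynomials of t by least squares
    (equation (3)) and evaluate them at the present imaging timing t_now."""
    coeff_x = np.polyfit(times, vx_series, 3)  # [ax, bx, cx, dx]
    coeff_y = np.polyfit(times, vy_series, 3)  # [ay, by, cy, dy]
    return np.polyval(coeff_x, t_now), np.polyval(coeff_y, t_now)
```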
- After the motion correction amounts are estimated as described above, the process proceeds to step S7.
- In step S7, the motion correction unit 26 subjects the special frame from the frame memory 24 to the motion correction based on the motion correction amounts that are input from the correction amount estimation unit 23, subjects the differential filter frame from the differential filter processing unit 25 to the same motion correction, and outputs the post-motion correction special frame and differential filter frame to the compositing unit 27.
- As a result, the ordinary frame, the special frame, and the differential filter frame that are input to the compositing unit 27 become frames in which the object is accurately aligned in relation to each other.
- In step S8, the compositing unit 27 generates a composite frame by subjecting the ordinary frame and the post-motion correction special frame and differential filter frame to the superposing compositing process or the marking compositing process, according to the selection from the user, and outputs the composite frame to the display unit 15 of the subsequent stage.
- In the superposing compositing process, a pixel value O(x,y) of the composite frame is computed from a pixel value N(x,y) of the ordinary frame, a pixel value Sobel(x,y) of the post-motion correction differential filter frame, and a pixel value I(x,y) of the post-motion correction special frame: the motion-corrected special frame is added to the ordinary frame according to the differential filter frame. C0 and C1 are coefficients that control the degree of superposition and may be arbitrarily set by the user.
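- The exact blending equation is not reproduced here, so the following sketch assumes one plausible form: the motion-corrected special frame is added to the ordinary frame, gated by the edge strength of the differential filter frame and scaled by the user coefficients C0 and C1. All names and default values are illustrative.

```python
import numpy as np

def superpose(normal, special_mc, sobel_mc, c0=1.0, c1=0.5):
    """Add the motion-corrected special frame I to the ordinary frame N,
    weighted by the edge response of the differential filter frame (assumed
    form). All frames are float arrays in [0, 1] with matching shapes."""
    gate = np.clip(c1 * sobel_mc, 0.0, 1.0)           # edge-dependent weight
    return np.clip(normal + c0 * gate * special_mc, 0.0, 1.0)
```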
- FIG. 8 illustrates an impression of the superposing compositing process described above.
- With the superposing compositing process, it is possible to obtain a composite frame in which the special frame and the differential filter frame are accurately aligned with the ordinary frame, and in which the edges of a portion to be focused on (a blood vessel, a lesion, or the like) are emphasized and superposed.
- In the marking compositing process, the ordinary frame is subjected to pseudo color conversion using a color matrix process according to color conversion coefficients C that are multiplied by the pixel values of the differential filter frame. As before, O(x,y) is a pixel value of the composite frame, N(x,y) is a pixel value of the ordinary frame, and Sobel(x,y) is a pixel value of the post-motion correction differential filter frame; C is a color conversion coefficient. Note that the post-motion correction special frame is not used in the marking compositing process.
- In the marking compositing process, the edges of the blood vessel or the like are more strongly subjected to the color conversion, while the other regions are not significantly converted. Accordingly, it is possible to obtain a composite frame in which only the edges of the blood vessel or the like stand out.
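- Below is a minimal sketch of the marking composite under the same caveat: the color conversion is assumed to be blended in proportion to the edge response, and the 3×3 color matrix C is an illustrative value, not taken from the patent.

```python
import numpy as np

# Illustrative color conversion matrix: pushes strongly-edged pixels toward blue.
C = np.array([[0.2, 0.0, 0.0],
              [0.0, 0.2, 0.0],
              [0.8, 0.0, 0.2]], dtype=np.float32)

def marking(normal, sobel_mc):
    """Pseudo color conversion whose strength follows the edge response, so only
    the edges of the blood vessel or the like stand out. `normal` is (H, W, 3)
    and `sobel_mc` is an edge-strength frame broadcastable to it, both in [0, 1]."""
    weight = np.clip(sobel_mc, 0.0, 1.0)
    converted = normal @ C.T                 # per-pixel color matrix product
    return (1.0 - weight) * normal + weight * converted
```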
- In the endoscope device 10, since the motion vectors are detected using only the ordinary frames and the motion correction amounts are estimated after correcting the detected motion vectors, it is possible to accurately execute the motion correction of the special frame and the differential filter frame. Accordingly, since the information of the special frame (the blood vessel, the tumor, or the like) can be accurately aligned with the ordinary frame, the user (a medical practitioner performing an operation, or the like) can accurately and clearly recognize the tumor portion to be removed and the blood vessel portion not to be removed.
- Furthermore, since the composite frame that is presented to the user is created based on the ordinary frame, a composite frame that is bright and has little noise in comparison to the special frame can be presented to the user.
- The series of processes described above can be executed using hardware, or can be executed using software. When the series of processes is executed using software, the program configuring the software is installed on a computer. Examples of the computer include a computer embedded within dedicated hardware, and an ordinary personal computer or the like which is capable of executing various functions by means of various programs that are installed thereon.
- FIG. 9 is a block diagram illustrating a configuration example of the hardware of the computer which executes the series of processes described above using a program.
- In the computer 100, a central processing unit (CPU) 101, a read only memory (ROM) 102, and a random access memory (RAM) 103 are connected to each other by a bus 104.
- an input-output interface 105 is connected to the bus 104 .
- the input-output interface 105 is connected to an input unit 106 , an output unit 107 , a storage unit 108 , a communication unit 109 , and a drive 110 .
- the input unit 106 is formed of a keyboard, a mouse, a microphone, and the like.
- the output unit 107 is formed of a display, a speaker, and the like.
- the storage unit 108 is formed of a hard disk, non-volatile memory, or the like.
- the communication unit 109 is formed of a network interface or the like.
- the drive 110 drives a removable medium 111 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
- The series of processes described above is performed by, for example, the CPU 101 loading the program stored in the storage unit 108 into the RAM 103 via the input-output interface 105 and the bus 104, and executing the loaded program.
- the computer 100 may be a so-called cloud computer that is connected via the Internet, for example.
- the program which is executed by the computer 100 may be a program in which the processes are performed in time series in the order described in the present specification.
- the program may be a program in which the processes are performed in parallel or at the necessary timing such as when the process is called.
- the present disclosure may adopt the following configurations.
- An image processing device including an input unit which inputs ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- The image processing device further including a feature extraction process unit which generates a feature extraction frame by subjecting the special frame to a feature extraction process, in which the motion correction unit further subjects the feature extraction frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors, and in which the compositing unit subjects the ordinary frames to the image compositing process based on the feature extraction frame that is subjected to motion correction.
- the image processing device according to any one of (1) to (7), further including a motion vector correction unit which corrects the detected motion vectors based on the plurality of motion vectors that are consecutively detected.
- An image processing method performed by an image processing device including inputting ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; detecting motion vectors of the object from a plurality of the ordinary frames with different imaging timing; subjecting the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and subjecting the ordinary frames to an image compositing process based on the special frame.
- An endoscope device including a light source unit which irradiates an object with ordinary light or special light; an imaging unit which consecutively images, at a predetermined ratio according to a predetermined frame period, ordinary frames in a state in which the object is irradiated with the ordinary light, and a special frame in a state in which the object is irradiated with the special light; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 14/618,240, filed Feb. 10, 2015, which claims the benefit of Japanese Priority Patent Application No. JP 2014-048336 filed Mar. 12, 2014, the entire contents of which are incorporated herein by reference.
- The present disclosure relates to an image processing device, an image processing method, a program, and an endoscope device. In particular, the present disclosure relates to an image processing device, an image processing method, a program, and an endoscope device, each of which is capable of combining and displaying an ordinary image, which is imaged by irradiating a human body with ordinary light such as white light, and a special image, which is obtained by irradiating the human body with special light and which illustrates the position of blood vessels.
- In the related art, various technologies intended for use in a medical setting have been proposed in which an ordinary image of an organ or the like that is imaged by an endoscope device is combined with a special image that represents the position of blood vessels or a lesion such as a tumor, which are difficult to discern in the ordinary image, and the result is displayed.
- For example, imaging an ordinary image and a special image using time division is described in Japanese Unexamined Patent Application Publication No. 2007-313171. As another example, performing composite display of the ordinary image and the special image is described in Japanese Unexamined Patent Application Publication No. 2012-24283.
- Here, the term “ordinary image” indicates an image which is imaged by irradiating an organ or the like that serves as the object with ordinary light such as white light. Hereinafter, the ordinary image will also be referred to as an ordinary frame. The term “special image” indicates an image which is imaged by irradiating the object with special light of a predetermined wavelength different from that of the ordinary light. Hereinafter, the special image will also be referred to as the special frame. Note that, when imaging the special image, there is a case in which a fluorescent agent or the like which reacts to the irradiation of the special light is mixed into or applied to the blood vessel (the blood) or the lesion that serves as the object.
- Because the ordinary frame and the special frame that are imaged using time division are captured at shifted imaging timings, when there is hand shake or the object moves, there is a likelihood that the ordinary frame and the special frame may not be aligned with each other accurately.
- Note that technology also exists which detects motion vectors between the ordinary frame and the special frame that are imaged using time division, performs motion correction based on the motion vectors, and then carries out the compositing. However, since the imaging conditions differ between the ordinary frame and the special frame, errors occur easily in the block matching used to detect the motion vectors, and it is difficult to detect the motion vectors accurately.
- It is desirable to enable the accurate alignment and combination of an ordinary frame and a special frame that are imaged using time division.
- According to a first embodiment of the present disclosure, there is provided an image processing device which includes an input unit which inputs ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- In the image processing device, the image processing device may further include a feature extraction process unit which generates a feature extraction frame by subjecting the special frame to a feature extraction process, the motion correction unit may further subject the feature extraction frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors, and the compositing unit may subject the ordinary frames to the image compositing process based on the feature extraction frame that is subjected to motion correction.
- In the image processing device, the feature extraction process unit may generate a differential filter frame as the feature extraction frame by subjecting the special frame to a differential filter process.
- In the image processing device, the compositing unit may subject the ordinary frames to a superposing compositing process or a marking compositing process as the image compositing process.
- In the image processing device, as the superposing compositing process, the compositing unit may add the motion-corrected special frame to the ordinary frames according to the motion-corrected feature extraction frame.
- In the image processing device, as the marking compositing process, the compositing unit may subject the ordinary frames to a color conversion process according to the motion-corrected feature extraction frame.
- In the image processing device, the compositing unit may subject the ordinary frames to a superposing compositing process or a marking compositing process as the image compositing process according to a selection by a user.
- In the image processing device, the image processing device may further include a motion vector correction unit which corrects the detected motion vectors based on the plurality of motion vectors that are consecutively detected.
- According to a first embodiment of the present disclosure, there is provided an image processing method performed by an image processing device. The method includes inputting ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; detecting motion vectors of the object from a plurality of the ordinary frames with different imaging timing; subjecting the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and subjecting the ordinary frames to an image compositing process based on the special frame.
- According to a first embodiment of the present disclosure, there is provided a program for causing a computer to function as an input unit which inputs ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- In the first embodiments of the present disclosure, ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period are input; motion vectors of the object from a plurality of the ordinary frames with different imaging timing are detected; the special frame is subjected to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and the ordinary frames are subjected to an image compositing process based on the special frame.
- According to a second embodiment of the present disclosure, there is provided an endoscope device which includes a light source unit which irradiates an object with ordinary light or special light; an imaging unit which consecutively images, at a predetermined ratio according to a predetermined frame period, ordinary frames in a state in which the object is irradiated with the ordinary light, and a special frame in a state in which the object is irradiated with the special light; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- In the second embodiment of the present disclosure, an object is irradiated with ordinary light or special light; ordinary frames in a state in which the object is irradiated with the ordinary light, and a special frame in a state in which the object is irradiated with the special light are consecutively imaged at a predetermined ratio according to a predetermined frame period; motion vectors of the object are detected from a plurality of the ordinary frames with different imaging timing; the special frame is subjected to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and the ordinary frames are subjected to an image compositing process based on the special frame.
- According to the first embodiments of the present disclosure, it is possible to accurately align and combine ordinary frames and a special frame that are imaged using time division.
- According to the second embodiment of the present disclosure, it is possible to image ordinary frames and a special frame using time division, and to accurately align and combine the frames.
- FIG. 1 is a block diagram illustrating a configuration example of an endoscope device to which an embodiment of the present disclosure is applied;
- FIG. 2 is a diagram illustrating imaging timing between ordinary frames and special frames;
- FIG. 3 is a block diagram illustrating a detailed configuration example of an image processing unit of FIG. 1;
- FIG. 4 is a block diagram illustrating a detailed configuration example of a motion vector detection unit of FIG. 3;
- FIG. 5 is a flowchart illustrating an image compositing process;
- FIG. 6 is a diagram illustrating an example of motion correction amount estimation;
- FIG. 7 is a diagram illustrating an impression of correcting dispersion in motion vectors based on a series of motion vectors;
- FIG. 8 is a diagram illustrating an impression of a superposing compositing process; and
- FIG. 9 is a block diagram illustrating a configuration example of a computer.
- Hereafter, detailed description will be given of a favorable embodiment for realizing the present disclosure (referred to below as the "embodiment") with reference to the drawings.
- FIG. 1 illustrates a configuration example of an endoscope device, which is an embodiment of the present disclosure, that images an ordinary frame and a special frame using time division, accurately aligns and combines the frames, and displays a composite frame that is obtained as a result.
- An endoscope device 10 is configured to include a light source unit 11, an imaging unit 12, a developing unit 13, an image processing unit 14, and a display unit 15.
- The light source unit 11 switches between ordinary light such as white light and special light that has a predetermined wavelength for each frame that is imaged, and irradiates the object (an organ or the like in the body) therewith. The light source unit 11 outputs an irradiation identification signal, indicating which of the ordinary light and the special light the object is irradiated with, to the image processing unit 14 for each frame that is imaged. Note that, when irradiating the object with the special light, an optical filter which transmits only a predetermined wavelength may be provided in the light path of the ordinary light.
- The imaging unit 12 images the object in a state in which the ordinary light or the special light is radiated from the light source unit 11, and outputs an image signal that is obtained as a result to the developing unit 13. The developing unit 13 subjects the image signal that is input thereto from the imaging unit 12 to a developing process such as a de-mosaic process, and outputs the resulting frame (the ordinary frame when the ordinary light is radiated, and the special frame when the special light is radiated) to the image processing unit 14.
- Here, in the special frame, the blood vessel or a lesion such as a tumor appears clearer than in an ordinary frame; in contrast, however, the entire frame is dark and there is much noise.
- Meanwhile, the ordinary frame is bright in comparison to the special frame and there is little noise; in contrast, however, it is difficult to distinguish the blood vessel or the lesion such as a tumor.
- The image processing unit 14 detects motion vectors using two ordinary frames with different imaging timing. By subjecting the special frame to a differential filter process, the image processing unit 14 generates a frame (hereinafter referred to as a differential filter frame) in which edge portions (specifically, the contours or the like of the blood vessel or the lesion, for example) within the image are emphasized. Furthermore, the image processing unit 14 performs motion correction on each of the special frame and the differential filter frame based on the motion vectors that are detected from the ordinary frames, combines the ordinary frame with the motion-corrected special frame and differential filter frame, and outputs the composite frame that is obtained as a result to the display unit 15.
- The display unit 15 displays the composite frame.
- Next, an example of the imaging timing of the ordinary frames and the special frames is illustrated in FIG. 2.
- In the endoscope device 10, ordinary frames are imaged for several continuous frames, and a special frame is imaged periodically between the ordinary frames. For example, as illustrated in FIG. 2, the imaging ratio of ordinary frames to special frames is set to 4:1. However, the ratio is not limited to 4:1 and may be variable.
- In FIG. 2, Ta illustrates a timing at which an ordinary frame is imaged one frame before a special frame is imaged, Tb illustrates a timing at which a special frame is imaged, and Tc, Td, and Te illustrate timings at which ordinary frames are imaged 1, 2, and 3 frames, respectively, after the special frame is imaged. Ta to Te will be used in the description of the detection of motion vectors described later. A small sketch of this schedule is given below.
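- The following is a small sketch of the 4:1 time-division schedule of FIG. 2; the generator and its labels are illustrative, not part of the patent.

```python
def irradiation_schedule(num_frames, ratio=4):
    """Yield (frame_index, light) with an ordinary:special imaging ratio of
    ratio:1, so every (ratio + 1)-th frame is imaged under the special light."""
    for i in range(num_frames):
        yield i, "special" if i % (ratio + 1) == 0 else "ordinary"

# list(irradiation_schedule(6)) ->
# [(0, 'special'), (1, 'ordinary'), (2, 'ordinary'),
#  (3, 'ordinary'), (4, 'ordinary'), (5, 'special')]
```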
- Next, a configuration example of the image processing unit 14 is illustrated in FIG. 3.
- The image processing unit 14 is configured to include a switching unit 21, a motion vector detection unit 22, a correction amount estimation unit 23, a frame memory 24, a differential filter processing unit 25, a motion correction unit 26, and a compositing unit 27.
- In the image processing unit 14, the ordinary frames and the special frames that are input thereto from the developing unit 13 of the previous stage are input to the switching unit 21, and the irradiation identification signal from the light source unit 11 is input to the switching unit 21, the motion vector detection unit 22, and the correction amount estimation unit 23.
- The switching unit 21 determines whether or not the input from the developing unit 13 is a special frame based on the irradiation identification signal; when the input is not a special frame (is an ordinary frame), the switching unit 21 outputs the ordinary frame to the motion vector detection unit 22 and the compositing unit 27, and when the input is a special frame, the special frame is output to the frame memory 24. A sketch of this routing is shown below.
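- A minimal sketch of this routing, with illustrative names; frames are demultiplexed by the irradiation identification signal.

```python
class SwitchingUnit:
    """Route each developed frame according to the irradiation identification
    signal: ordinary frames to the motion vector detector and the compositor,
    special frames to the special-frame memory."""

    def __init__(self, on_ordinary, on_special):
        self.on_ordinary = on_ordinary   # callable(frame)
        self.on_special = on_special     # callable(frame)

    def push(self, frame, is_special):
        (self.on_special if is_special else self.on_ordinary)(frame)
```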
- For each frame period, the motion vector detection unit 22 detects the motion vectors using two ordinary frames with different imaging timing, and outputs the detected motion vectors to the correction amount estimation unit 23.
- The correction amount estimation unit 23 estimates the motion correction amounts of the special frame and the differential filter frame based on the motion vectors that are detected by the motion vector detection unit 22, and outputs the estimated motion correction amounts to the motion correction unit 26. Note that the correction amount estimation unit 23 is capable of correcting motion vectors which may be erroneously detected, based on the motion vectors that are detected in succession, and of estimating the motion correction amounts based on the corrected motion vectors.
- The frame memory 24 holds the special frame that is input thereto from the switching unit 21, and supplies the held special frame to the differential filter processing unit 25 and the motion correction unit 26 for each frame period. The frame memory 24 updates the held special frame when the next special frame is input thereto from the switching unit 21.
- The differential filter processing unit 25 generates a feature extraction frame, in which features in the image are emphasized, by subjecting the special frame that is supplied thereto from the frame memory 24 to a differential filter process (for example, the Sobel filter process), and outputs the feature extraction frame to the motion correction unit 26. Note that, in the case of the differential filter process, a differential filter frame in which the edge portions are emphasized is generated as the feature extraction frame. As described above, since the frame memory 24 supplies the held special frame for each frame period, the same special frame is supplied consecutively; in this case, the differential filter processing unit 25 may omit the differential filter process and output the result of the previous differential filter process to the motion correction unit 26.
- Note that, instead of generating the differential filter frame using the differential filter process, a process may be executed in which a region in which the variance or the dynamic range in a micro-block (of 3×3 pixels, for example) is greater than or equal to a threshold is extracted, and a feature extraction frame indicating the extraction results is generated. As another example, a process may be executed in which a region in which the signal levels of pixels are within a specific threshold, that is, a region with specific RGB levels, is extracted, and a feature extraction frame indicating the extraction results is generated. As still another example, a closed region (corresponding to a tumor or the like) may be subjected to a contour detection process such as snakes, and a feature extraction frame indicating the results may be generated. A minimal sketch of the micro-block variance alternative follows.
motion correction unit 26 subjects the special frame from the frame memory 24 to the motion correction based on the motion correction amounts that are input from the correction amount estimation unit 23, likewise subjects the differential filter frame from the differential filter processing unit 25 to the motion correction, and outputs the post-motion correction special frame and differential filter frame to the compositing unit 27. - The
compositing unit 27 includes a superposing unit 28 and a marking unit 29. Using the ordinary frame and the post-motion correction special frame and differential filter frame as input, it generates a composite frame by performing a superposing compositing process in the superposing unit 28 or a marking compositing process in the marking unit 29, and outputs the composite frame to the display unit 15 of the subsequent stage. - Next, a configuration example of the motion
vector detection unit 22 is illustrated in FIG. 4. The motion vector detection unit 22 is configured to include frame memories 31 and 32, a frame selection unit 33, a block matching unit 34, and a vector correction unit 35. - In the motion
vector detection unit 22, the ordinary frame that is input thereto from the switching unit 21 of the previous stage is input to the frame memory 31 and the frame selection unit 33. - For each frame period, the
frame memory 31 outputs the ordinary frame that is held therein until that point to the frame memory 32 and the frame selection unit 33, and updates the data held therein with the ordinary frame that is input from the switching unit 21 of the previous stage. In the same manner, for each frame period, the frame memory 32 outputs the ordinary frame that is held therein to the frame selection unit 33, and updates the data held therein with the ordinary frame that is input from the frame memory 31 of the previous stage. - However, among frame periods, at a timing at which the ordinary frame is not input to the motion
vector detection unit 22, the frame memory 31 outputs the ordinary frame that is held until that point to the subsequent stage, and clears the data that is held until that point. - At the next timing, since the
frame memory 31 is not holding any data, the output to the subsequent stage is not performed. The frame memory 32 outputs the ordinary frame that is held until that point to the subsequent stage, and clears the data that is held until that point.
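A minimal sketch of the two-stage memory cascade (the flush behavior at timings with no ordinary input is omitted; the class and attribute names are ours, not the patent's):

```python
class FrameCascade:
    # Two chained one-frame memories: each frame period, memory 31 hands
    # its held frame to memory 32 and takes the new input, so up to three
    # ordinary frames with different imaging timings are visible at once.
    def __init__(self):
        self.mem31 = None
        self.mem32 = None

    def push(self, frame):
        out31, out32 = self.mem31, self.mem32
        self.mem32 = out31           # memory 32 takes what memory 31 held
        self.mem31 = frame           # memory 31 takes the new ordinary frame
        return frame, out31, out32   # all frames seen by the selection unit
```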
- Therefore, two or three ordinary frames with different imaging timing are input to the frame selection unit 33 at the same time. - When two ordinary frames are input to the
frame selection unit 33 at the same time, the two ordinary frames are output to the block matching unit 34. When three ordinary frames are input to the frame selection unit 33 at the same time, the two ordinary frames that are input from the frame memories 31 and 32 are output to the block matching unit 34. The block matching unit 34 detects the motion vectors between the two ordinary frames using a block matching process.
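A sketch of a plain sum-of-absolute-differences block matching process (the block size and search radius are illustrative; the patent does not fix them):

```python
import numpy as np

def block_matching(prev: np.ndarray, curr: np.ndarray,
                   block: int = 16, search: int = 8) -> np.ndarray:
    # For each block of `prev`, exhaustively search a window of `curr`
    # and keep the displacement with the smallest sum of absolute
    # differences; returns one (dy, dx) vector per block.
    h, w = prev.shape[:2]
    vectors = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = curr[y:y + block, x:x + block].astype(np.int32)
                        sad = int(np.abs(ref - cand).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[by // block, bx // block] = best
    return vectors
```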
- The vector correction unit 35 determines the relationship between the two ordinary frames that are used for the motion vectors based on the irradiation identification signal, corrects the detected motion vectors based on the relationship, and outputs the corrected motion vectors to the correction amount estimation unit 23. - Detailed description will be given of the correction of the motion vectors by the
vector correction unit 35. If the output from the frame memory 31 is used as a reference, when the reference imaging timing is the Ta illustrated in FIG. 2, the ordinary frame from the frame memory 32 that is one frame prior to the reference, and the reference ordinary frame from the frame memory 31, are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames. In this case, the vector correction unit 35 does not perform the motion vector correction. - When the reference imaging timing is the Tb illustrated in
FIG. 2, since the Tb is the imaging timing of the special frame, the frame memory 31 does not perform output. The ordinary frame from the frame memory 32 that is one frame prior to the reference, and the ordinary frame from the switching unit 21 that is one frame after the reference, are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames. In that case, since the detected motion vectors are from between ordinary frames that are two frames separated from each other, the vector correction unit 35 multiplies each of the vertical and horizontal components of the detected motion vectors by ½. - When the reference imaging timing is the Tc illustrated in
FIG. 2, the reference ordinary frame from the frame memory 31, and the ordinary frame from the switching unit 21 that is one frame after the reference, are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames. In that case, since the directions of the detected motion vectors oppose each other, the vector correction unit 35 multiplies each of the vertical and horizontal components of the detected motion vectors by −1. - When the reference imaging timing is the Td illustrated in
FIG. 2, the ordinary frame from the frame memory 32 that is one frame prior to the reference, the reference ordinary frame from the frame memory 31, and the ordinary frame from the switching unit 21 that is one frame after the reference are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames from the frame memories 31 and 32. In this case, the vector correction unit 35 does not perform the motion vector correction. - When the reference imaging timing is the Te illustrated in
FIG. 2, the ordinary frame from the frame memory 32 that is one frame prior to the reference, the reference ordinary frame from the frame memory 31, and the ordinary frame from the switching unit 21 that is one frame after the reference are input to the frame selection unit 33, and the motion vectors are detected from the two ordinary frames from the frame memories 31 and 32. In this case, the vector correction unit 35 does not perform the motion vector correction.
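The five timing cases above reduce to two simple corrections, sketched here (the function signature is our own framing, not the patent's):

```python
def correct_vector(vx: float, vy: float, frame_gap: int, reversed_: bool):
    # Timing Tb: the two matched frames are two periods apart, so the
    # per-frame motion is half the detected vector.
    if frame_gap == 2:
        vx, vy = vx / 2.0, vy / 2.0
    # Timing Tc: the detected vector points from the later frame back to
    # the reference, so its direction must be flipped.
    if reversed_:
        vx, vy = -vx, -vy
    # Timings Ta, Td, Te: neither condition holds and the vector passes
    # through unchanged.
    return vx, vy
```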
- The motion vectors that are corrected as described above are output from the vector correction unit 35 to the correction amount estimation unit 23 of the subsequent stage. - Next, description will be given of the image compositing process by the
image processing unit 14 with reference to FIG. 5. -
FIG. 5 is a flowchart illustrating an image compositing process. The image compositing process is executed for each frame period. - In step S1, the switching
unit 21 determines, based on the irradiation identification signal, whether or not the input from the developing unit 13 is a special frame, and when the input is a special frame, outputs the special frame to the frame memory 24. Conversely, when it is determined that the input is not a special frame (that is, it is an ordinary frame), the switching unit 21 outputs the ordinary frame to the motion vector detection unit 22 and the compositing unit 27. - In step S2, the
frame memory 24 supplies the special frame that is held until that point to the differential filter processing unit 25 and the motion correction unit 26. Note that the frame memory 24 updates the held special frame when a special frame is input thereto from the switching unit 21. - In step S3, the differential
filter processing unit 25 generates a differential filter frame, in which edge portions in the image are emphasized, by subjecting the special frame that is supplied thereto from the frame memory 24 to a differential filter process (for example, the Sobel filter process illustrated in the following equation (1)), and outputs the differential filter frame to the motion correction unit 26. -
SobelRh(x,y)=|−R(x−1,y−1)−2R(x−1,y)−R(x−1,y+1)+R(x+1,y−1)+2R(x+1,y)+R(x+1,y+1)|
SobelRv(x,y)=|−R(x−1,y−1)−2R(x,y−1)−R(x+1,y−1)+R(x−1,y+1)+2R(x,y+1)+R(x+1,y+1)|
SobelR(x,y)=SobelRh(x,y)+SobelRv(x,y)
SobelGh(x,y)=|−G(x−1,y−1)−2G(x−1,y)−G(x−1,y+1)+G(x+1,y−1)+2G(x+1,y)+G(x+1,y+1)|
SobelGv(x,y)=|−G(x−1,y−1)−2G(x,y−1)−G(x+1,y−1)+G(x−1,y+1)+2G(x,y+1)+G(x+1,y+1)|
SobelG(x,y)=SobelGh(x,y)+SobelGv(x,y)
SobelBh(x,y)=|−B(x−1,y−1)−2B(x−1,y)−B(x−1,y+1)+B(x+1,y−1)+2B(x+1,y)+B(x+1,y+1)|
SobelBv(x,y)=|−B(x−1,y−1)−2B(x,y−1)−B(x+1,y−1)+B(x−1,y+1)+2B(x,y+1)+B(x+1,y+1)|
SobelB(x,y)=SobelBh(x,y)+SobelBv(x,y)   (1) - Note that, R, G, and B in the equation (1) respectively correspond to levels in the R, G, and B planes of the special frame.
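A sketch of equation (1) in vectorized NumPy for an (H, W, 3) special frame (border pixels are simply left at zero here):

```python
import numpy as np

def sobel_feature_frame(special: np.ndarray) -> np.ndarray:
    # Per color plane: absolute horizontal Sobel response plus absolute
    # vertical Sobel response, exactly as in equation (1).
    p = special.astype(np.float64)
    out = np.zeros_like(p)
    for c in range(p.shape[2]):                       # R, G, B planes
        a = p[:, :, c]
        hor = np.zeros_like(a)
        ver = np.zeros_like(a)
        hor[1:-1, 1:-1] = np.abs(
            -a[:-2, :-2] - 2 * a[1:-1, :-2] - a[2:, :-2]
            + a[:-2, 2:] + 2 * a[1:-1, 2:] + a[2:, 2:])
        ver[1:-1, 1:-1] = np.abs(
            -a[:-2, :-2] - 2 * a[:-2, 1:-1] - a[:-2, 2:]
            + a[2:, :-2] + 2 * a[2:, 1:-1] + a[2:, 2:])
        out[:, :, c] = hor + ver
    return out
```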
- In step S4, the motion
vector detection unit 22 detects the motion vectors using two ordinary frames with different imaging timing, and outputs the motion vectors to the correction amount estimation unit 23. In step S5, the correction amount estimation unit 23 determines whether or not the detected motion vectors are less than or equal to a predetermined threshold, and when they are less than or equal to the threshold, the process proceeds to step S6 in order to use the motion vectors in the motion correction. Conversely, when the detected motion vectors are greater than the predetermined threshold, the motion vectors are not used in the motion correction, and the image compositing process that corresponds to the present imaging timing ends. - In step S6, the correction
amount estimation unit 23 estimates the motion correction amounts of the special frame and the differential filter frame based on the motion vectors that are detected by the motion vector detection unit 22, and outputs the estimated motion correction amounts to the motion correction unit 26. Specifically, for example, the motion correction amounts Hx and Hy are computed as illustrated in the following equation (2). -
Hx = Σ(t=1..N) Vx,t,  Hy = Σ(t=1..N) Vy,t   (2)
- In the equation (2), Vx and Vy are motion vectors that are detected and corrected, N represents the imaging timing t=N of the ordinary frame for which the motion vectors are detected in relation to the imaging timing t=0 of the special frame for which correction is performed.
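A minimal sketch of this accumulation (the list-of-pairs layout is ours):

```python
def correction_amounts(vectors):
    # `vectors` holds the corrected per-frame motion vectors (Vx,t, Vy,t)
    # for t = 1 .. N, counted from the special frame's timing t = 0.
    hx = sum(vx for vx, _ in vectors)
    hy = sum(vy for _, vy in vectors)
    return hx, hy
```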
- Note that, in the correction
amount estimation unit 23, as another motion correction amount estimation method, it is also possible to correct the dispersion in the motion vectors and subsequently estimate the motion correction amounts based on the series of motion vectors, as described hereinafter. -
FIG. 6 is a diagram illustrating a process flow in which the dispersion in the motion vectors is corrected and the motion correction amounts are subsequently estimated based on the series of motion vectors. FIG. 7 illustrates an impression of correcting the dispersion in the motion vectors based on the series of motion vectors. - Specifically, the motion vectors (V′x,t and V′y,t) in relation to the imaging timing t are estimated as illustrated in the following equation (3).
-
V′x,t = ax·t³ + bx·t² + cx·t + dx
V′y,t = ay·t³ + by·t² + cy·t + dy   (3) - The motion correction amounts Hx and Hy are computed using the following equation (4) by substituting the motion vectors of the equation (2) with the estimated motion vectors (V′x,t and V′y,t).
Hx = Σ(t=1..N) V′x,t,  Hy = Σ(t=1..N) V′y,t   (4)
- Note that, the coefficients (ax, bx, cx, and dx) and (ay, by, cy, and dy) in the equation (3) can be calculated using the least squares method using the detected motion vectors (Vx,1 and Vy,1), . . . , (Vx,t and Vy,t).
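A sketch of that fit with NumPy's least squares polynomial fit; it assumes at least four detected vectors so the cubic coefficients are determined:

```python
import numpy as np

def estimate_correction(vx_hist, vy_hist):
    # Fit V'x,t and V'y,t of equation (3) by least squares, then
    # accumulate the smoothed vectors as in equation (4).
    t = np.arange(1, len(vx_hist) + 1)
    coef_x = np.polyfit(t, vx_hist, 3)   # (ax, bx, cx, dx)
    coef_y = np.polyfit(t, vy_hist, 3)   # (ay, by, cy, dy)
    vx_s = np.polyval(coef_x, t)         # V'x,t over t = 1 .. N
    vy_s = np.polyval(coef_y, t)         # V'y,t over t = 1 .. N
    return vx_s.sum(), vy_s.sum()        # Hx, Hy
```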
- After the motion correction amounts are estimated as described above, the process proceeds to step S7.
- In step S7, the
motion correction unit 26 subjects the special frame from the frame memory 24 to the motion correction based on the motion correction amounts that are input from the correction amount estimation unit 23, likewise subjects the differential filter frame from the differential filter processing unit 25 to the motion correction, and outputs the post-motion correction special frame and differential filter frame to the compositing unit 27. The ordinary frame, the special frame, and the differential filter frame that are input to the compositing unit 27 thereby become frames in which the object is accurately aligned across all three. - In step S8, the
compositing unit 27 generates a composite frame by subjecting the ordinary frame and the post-motion correction special frame and differential filter frame to the superposing compositing process or the marking compositing process according to the selection from the user, and outputs the composite frame to the display unit 15 of the subsequent stage. - Description will be given of the superposing compositing process. As illustrated in the following equation (5), in the superposing compositing process, the result of multiplying the post-motion correction differential filter frame and special frame with each other is added to the ordinary frame.
-
OR(x,y) = C0 × NR(x,y) + C1 × SobelR(x,y) × IR(x,y)
OG(x,y) = C0 × NG(x,y) + C1 × SobelG(x,y) × IG(x,y)
OB(x,y) = C0 × NB(x,y) + C1 × SobelB(x,y) × IB(x,y)   (5) - In the equation (5), O(x,y) is a pixel value of the composite frame, N(x,y) is a pixel value of the ordinary frame, Sobel(x,y) is a pixel value of the post-motion correction differential filter frame, and I(x,y) is a pixel value of the post-motion correction special frame. C0 and C1 are coefficients that control the degree of superposition and may be set arbitrarily by the user.
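A sketch of equation (5) with NumPy on uint8 frames (the coefficient values are placeholders; the patent leaves C0 and C1 to the user):

```python
import numpy as np

def superpose(ordinary, special_mc, sobel_mc, c0=1.0, c1=0.01):
    # Add the product of the motion-corrected differential filter frame
    # and special frame onto the ordinary frame, channel by channel.
    out = (c0 * ordinary.astype(np.float64)
           + c1 * sobel_mc.astype(np.float64) * special_mc.astype(np.float64))
    return np.clip(out, 0, 255).astype(np.uint8)
```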
-
FIG. 8 illustrates an impression of the superposing compositing process described above. With the superposing compositing process, it is possible to obtain a composite frame in which the special frame and the differential filter frame are accurately aligned to the ordinary frame, and in which the edges of a portion to be focused on (a blood vessel, a lesion, or the like) are emphasized and superposed. - Next, description will be given of the marking compositing process. As illustrated in the following equation (6), in the marking compositing process, the ordinary frame is subjected to pseudo color conversion using a color matrix process whose color conversion coefficients C are scaled by the pixel values of the differential filter frame.
-
- In the equation (6), O(x,y) is a pixel value of the composite frame, N(x,y) is a pixel value of the ordinary frame, Sobel(x,y) is a pixel value of the post-motion correction differential filter frame, and C is a color conversion coefficient.
- As is clear from the equation (6), the post-motion correction special frame is not used in the marking compositing process.
- According to the marking compositing process, since the degree of color conversion is controlled according to the pixel values of the differential filter frame, the edges of the blood vessel or the like are subjected to the color conversion strongly, while the other regions are left largely unconverted. Accordingly, it is possible to obtain a composite frame in which only the edges of the blood vessel or the like stand out.
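Because equation (6) survives only as an image here, the following is a sketch under an assumed form of the marking process: each ordinary pixel is pulled toward a marker color in proportion to the differential filter response, so that only edge regions are recolored. The marker color and the scale factor are our assumptions, not the patent's coefficients C:

```python
import numpy as np

def marking_composite(ordinary, sobel_mc, strength=0.005):
    # Per-pixel weight from the differential filter frame: strong edges
    # approach 1, flat regions stay near 0.
    w = np.clip(strength * sobel_mc.astype(np.float64).mean(axis=2, keepdims=True),
                0.0, 1.0)
    marker = np.array([0.0, 255.0, 0.0])     # assumed marker color (green)
    out = (1.0 - w) * ordinary.astype(np.float64) + w * marker
    return np.clip(out, 0, 255).astype(np.uint8)
```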
- This concludes the description of the image compositing process.
- As described above, according to the
endoscope device 10 of the present embodiment, since the motion vectors are detected using only the ordinary frames and the motion correction amounts are estimated after correcting the detected motion vectors, it is possible to execute the motion correction of the special frame and the differential filter frame accurately. Accordingly, since the information of the special frame, such as the blood vessel or the tumor, can be accurately aligned with the ordinary frame, the user (a medical practitioner performing an operation, or the like) can accurately and clearly visually recognize the tumor portion to be removed and the blood vessel portion not to be removed. - Since the composite frame that is presented to the user is created based on the ordinary frame, a composite frame that is brighter and has less noise than the special frame can be presented to the user.
- Incidentally, the series of processes described above can be executed using hardware, or can be executed using software. When the series of processes is executed using software, the program configuring the software is installed on a computer. Here, examples of the computer include a computer embedded in dedicated hardware, and an ordinary personal computer or the like that is capable of executing various functions by means of the various programs installed thereon.
-
FIG. 9 is a block diagram illustrating a configuration example of the hardware of the computer which executes the series of processes described above using a program. - In a
computer 100, a central processing unit (CPU) 101, a read only memory (ROM) 102, and a random access memory (RAM) 103 are connected to each other by a bus 104. - Furthermore, an input-
output interface 105 is connected to the bus 104. The input-output interface 105 is connected to an input unit 106, an output unit 107, a storage unit 108, a communication unit 109, and a drive 110. - The
input unit 106 is formed of a keyboard, a mouse, a microphone, and the like. The output unit 107 is formed of a display, a speaker, and the like. The storage unit 108 is formed of a hard disk, non-volatile memory, or the like. The communication unit 109 is formed of a network interface or the like. The drive 110 drives a removable medium 111 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory. - In the
computer 100 configured as described above, the series of processes described above is performed by the CPU 101, for example, loading the program stored in the storage unit 108 into the RAM 103 via the input-output interface 105 and the bus 104, and executing the loaded program. - The
computer 100 may be a so-called cloud computer that is connected via the Internet, for example. - Note that, the program which is executed by the
computer 100 may be a program in which the processes are performed in time series in the order described in the present specification, or a program in which the processes are performed in parallel or at necessary timings, such as when a process is called. - The embodiments of the present disclosure are not limited to the embodiment described above, and various modifications may be made within a scope not departing from the main concept of the present disclosure.
- Furthermore, the present disclosure may adopt the following configurations.
- (1) An image processing device, including an input unit which inputs ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- (2) The image processing device according to (1), further including a feature extraction process unit which generates a feature extraction frame by subjecting the special frame to a feature extraction process, in which the motion correction unit further subjects the feature extraction frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors, and in which the compositing unit subjects the ordinary frames to the image compositing process based on the feature extraction frame that is subjected to motion correction.
- (3) The image processing device according to (2), in which the feature extraction process unit generates a differential filter frame as the feature extraction frame by subjecting the special frame to a differential filter process.
- (4) The image processing device according to any one of (1) to (3), in which the compositing unit subjects the ordinary frames to a superposing compositing process or a marking compositing process as the image compositing process.
- (5) The image processing device according to (4), in which as the superposing compositing process, the compositing unit adds the motion-corrected special frame to the ordinary frames according to the motion-corrected feature extraction frame.
- (6) The image processing device according to (4), in which as the marking compositing process, the compositing unit subjects the ordinary frames to a color conversion process according to the motion-corrected feature extraction frame.
- (7) The image processing device according to any one of (4) to (6), in which the compositing unit subjects the ordinary frames to a superposing compositing process or a marking compositing process as the image compositing process according to a selection by a user.
- (8) The image processing device according to any one of (1) to (7), further including a motion vector correction unit which corrects the detected motion vectors based on the plurality of motion vectors that are consecutively detected.
- (9) An image processing method performed by an image processing device, the method including inputting ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; detecting motion vectors of the object from a plurality of the ordinary frames with different imaging timing; subjecting the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and subjecting the ordinary frames to an image compositing process based on the special frame.
- (10) A program for causing a computer to function as an input unit which inputs ordinary frames in a state in which an object is irradiated with ordinary light, and a special frame in a state in which the object is irradiated with special light, which are imaged consecutively at a predetermined ratio according to a predetermined frame period; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- (11) An endoscope device, including a light source unit which irradiates an object with ordinary light or special light; an imaging unit which consecutively images, at a predetermined ratio according to a predetermined frame period, ordinary frames in a state in which the object is irradiated with the ordinary light, and a special frame in a state in which the object is irradiated with the special light; a detection unit which detects motion vectors of the object from a plurality of the ordinary frames with different imaging timing; a motion correction unit which subjects the special frame to motion correction corresponding to the imaging timing of the ordinary frames based on the detected motion vectors; and a compositing unit which subjects the ordinary frames to an image compositing process based on the special frame.
- It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/020,756 US10306147B2 (en) | 2014-03-12 | 2018-06-27 | Image processing device, image processing method, program, and endoscope device |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014-048336 | 2014-03-12 | ||
JP2014048336A JP2015171450A (en) | 2014-03-12 | 2014-03-12 | Image processing device, image processing method, program, and endoscope apparatus |
US14/618,240 US10021306B2 (en) | 2014-03-12 | 2015-02-10 | Image processing device, image processing method, program, and endoscope device |
US16/020,756 US10306147B2 (en) | 2014-03-12 | 2018-06-27 | Image processing device, image processing method, program, and endoscope device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/618,240 Continuation US10021306B2 (en) | 2014-03-12 | 2015-02-10 | Image processing device, image processing method, program, and endoscope device |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180309933A1 true US20180309933A1 (en) | 2018-10-25 |
US10306147B2 US10306147B2 (en) | 2019-05-28 |
Family
ID=54070382
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/618,240 Active 2036-03-01 US10021306B2 (en) | 2014-03-12 | 2015-02-10 | Image processing device, image processing method, program, and endoscope device |
US16/020,756 Active US10306147B2 (en) | 2014-03-12 | 2018-06-27 | Image processing device, image processing method, program, and endoscope device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/618,240 Active 2036-03-01 US10021306B2 (en) | 2014-03-12 | 2015-02-10 | Image processing device, image processing method, program, and endoscope device |
Country Status (3)
Country | Link |
---|---|
US (2) | US10021306B2 (en) |
JP (1) | JP2015171450A (en) |
CN (1) | CN104915920A (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4171169A1 (en) | 2015-08-31 | 2023-04-26 | Ntt Docomo, Inc. | User terminal, radio base station and radio communication method |
JP2019502419A (en) * | 2015-11-05 | 2019-01-31 | コヴィディエン リミテッド パートナーシップ | System and method for detection of subsurface blood |
KR102192488B1 (en) * | 2015-11-25 | 2020-12-17 | 삼성전자주식회사 | Apparatus and method for frame rate conversion |
WO2017212653A1 (en) * | 2016-06-10 | 2017-12-14 | オリンパス株式会社 | Image processing device, image processing method, and image processing program |
WO2020050187A1 (en) * | 2018-09-06 | 2020-03-12 | ソニー株式会社 | Medical system, information processing device, and information processing method |
EP4024115A4 (en) * | 2019-10-17 | 2022-11-02 | Sony Group Corporation | Surgical information processing device, surgical information processing method, and surgical information processing program |
KR102504321B1 (en) * | 2020-08-25 | 2023-02-28 | 한국전자통신연구원 | Apparatus and method for online action detection |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6975898B2 (en) * | 2000-06-19 | 2005-12-13 | University Of Washington | Medical imaging, diagnosis, and therapy using a scanning single optical fiber system |
FR2811791B1 (en) * | 2000-07-13 | 2002-11-22 | France Telecom | MOTION ESTIMATOR FOR CODING AND DECODING IMAGE SEQUENCES |
JP2007313171A (en) | 2006-05-29 | 2007-12-06 | Olympus Corp | Endoscopic system |
US8896701B2 (en) * | 2010-02-23 | 2014-11-25 | Ratheon Company | Infrared concealed object detection enhanced with closed-loop control of illumination by.mmw energy |
JP2011200367A (en) * | 2010-03-25 | 2011-10-13 | Fujifilm Corp | Image pickup method and device |
JP5603676B2 (en) * | 2010-06-29 | 2014-10-08 | オリンパス株式会社 | Image processing apparatus and program |
JP2012024283A (en) | 2010-07-22 | 2012-02-09 | Fujifilm Corp | Endoscope diagnostic apparatus |
US9999355B2 (en) * | 2014-02-12 | 2018-06-19 | Koninklijke Philips N.V. | Device, system and method for determining vital signs of a subject based on reflected and transmitted light |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110023789A1 (en) * | 2007-12-10 | 2011-02-03 | The Cooper Union For The Advancement Of Science And Art | Gas delivery system for an animal storage container |
US20110020536A1 (en) * | 2008-03-26 | 2011-01-27 | Taisuke Yamamoto | Electrode for lithium secondary battery and method of manufacturing same |
US20150022370A1 (en) * | 2013-07-22 | 2015-01-22 | Google Inc. | Methods, systems, and media for projecting light to indicate a device status |
Also Published As
Publication number | Publication date |
---|---|
JP2015171450A (en) | 2015-10-01 |
US10021306B2 (en) | 2018-07-10 |
CN104915920A (en) | 2015-09-16 |
US20150264264A1 (en) | 2015-09-17 |
US10306147B2 (en) | 2019-05-28 |
Legal Events
Code | Title | Description
---|---|---
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | Free format text: PATENTED CASE
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4