WO2009145157A1 - Signal Processing System and Signal Processing Program - Google Patents
Signal Processing System and Signal Processing Program
- Publication number
- WO2009145157A1 (application PCT/JP2009/059551, JP2009059551W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- subject
- dedicated
- coefficient
- identified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/05—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0075—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0084—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for introduction into the body, e.g. by catheters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/1032—Determining colour of tissue for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
- A61B5/1455—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
- A61B5/14551—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
- A61B5/1455—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
- A61B5/1459—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters invasive, e.g. introduced into the body by a catheter
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7246—Details of waveform analysis using correlation, e.g. template matching or determination of similarity
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/50—Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
- G01J3/51—Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors using colour filters
- G01J3/513—Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors using colour filters having fixed filter-detector pairs
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/52—Measurement of colour; Colour measuring devices, e.g. colorimeters using colour charts
- G01J3/524—Calibration of colorimeters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/135—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
- H04N25/136—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/24—Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
- G02B23/2476—Non-optical details, e.g. housings, mountings, supports
- G02B23/2484—Arrangements in relation to a camera or imaging device
Definitions
- the present invention relates to a signal processing system for identifying a subject and a signal processing program for causing a computer to execute a procedure of such a signal processing system.
- Patent Document 1 discloses an example in which a specific narrowband video signal is calculated by signal processing using broadband light as observation light. As a result, a high-definition narrowband video signal with little noise can be obtained, so that a subject having specific spectral characteristics, such as a blood vessel, can easily be identified and displayed on a display monitor.
- the present invention has been made in view of the above points, and an object of the present invention is to provide a signal processing system and a signal processing program that can identify a subject to be identified with high reliability.
- a base vector acquisition unit that acquires a dedicated base vector based on the known spectral characteristics of a subject to be identified whose spectral characteristics are known;
- a system spectral characteristic acquisition unit that acquires spectral characteristics of an imaging system, including spectral characteristics relating to a color imaging system used for imaging a subject including the subject to be identified and spectral characteristics relating to illumination light used when the subject is imaged by the color imaging system;
- a calculation unit that calculates a weighting coefficient related to the dedicated base vector based on a video signal obtained by imaging the subject with the color imaging system, the dedicated base vector, and the spectral characteristics of the imaging system; and
- an output signal calculation unit that calculates, based on the weighting coefficient related to the dedicated base vector calculated by the calculation unit, an output signal as an identification result of the subject to be identified whose spectral characteristics are known;
- a signal processing system comprising these units is provided.
- here, "acquisition" means reading from a recording medium or reading via a network.
- further, a signal processing system is provided that comprises:
- a derivation coefficient acquisition unit that acquires a derivation coefficient indicating the correlation between the known spectral characteristics of the subject to be identified, whose spectral characteristics are known, and the video signal, the derivation coefficient being calculated based on those known spectral characteristics, the spectral characteristics related to the color imaging system used for imaging a subject including the subject to be identified, and the spectral characteristics of the illumination light used when the subject is imaged by the color imaging system;
- a correlation coefficient calculation unit that calculates a correlation coefficient between the spectral characteristics of the subject to be identified and the video signal based on the video signal and the derivation coefficient; and
- an output signal calculation unit that calculates, based on the correlation coefficient calculated by the correlation coefficient calculation unit, an output signal as an identification result of the subject to be identified whose spectral characteristics are known.
- a signal processing program for causing a computer to perform the above procedure is also provided.
- similarly, for the above derivation coefficient, which indicates the correlation between the known spectral characteristics of the subject to be identified and the video signal and is calculated based on the known spectral characteristics of the subject to be identified, the spectral characteristics of the color imaging system, and the spectral characteristics of the illumination light used when imaging the subject by the color imaging system,
- a signal processing program for causing a computer to perform the above procedure is provided.
- FIG. 1 is a diagram showing a configuration of an endoscope to which a signal processing system according to a first embodiment of the present invention is applied.
- FIG. 2 is a diagram illustrating a configuration of a Bayer-type primary color filter.
- FIG. 3 is a diagram illustrating an example of the configuration of the calculation unit in FIG. 1.
- FIG. 4 is a diagram illustrating an example of three types of dedicated basis vectors.
- FIG. 5 is a diagram illustrating an example of a spectral luminance characteristic of a light source as a spectral characteristic related to illumination light used at the time of imaging.
- FIG. 6 is a diagram illustrating an example of spectral sensitivity characteristics of a color imaging system including R, G, and B color filters as spectral characteristics related to the color imaging system.
- FIG. 7 is a diagram showing a dedicated basis vector of oxyhemoglobin and two general-purpose basis vectors as examples of the three types of basis vectors in the first modification of the first embodiment.
- FIG. 8 is a diagram showing deoxyhemoglobin dedicated basis vectors and two general-purpose basis vectors as examples of the three types of basis vectors in Modification 1 of the first embodiment.
- FIG. 9 is a diagram illustrating the configuration of the R, Gr, Gb, and B primary color filters in the second modification of the first embodiment.
- FIG. 10 is a diagram illustrating a configuration of a color difference line sequential complementary color filter according to Modification 2 of the first embodiment.
- FIG. 11 is a diagram illustrating spectral sensitivity characteristics of a color imaging system (four complementary colors) in Modification 2 of the first embodiment.
- FIG. 12 is a diagram illustrating oxyhemoglobin dedicated basis vectors and three general-purpose basis vectors as examples of the four types of basis vectors in Modification 2 of the first embodiment.
- FIG. 13 is a diagram showing dedicated base vectors of oxyhemoglobin and deoxyhemoglobin and two general-purpose base vectors as examples of the four types of base vectors in Modification 2 of the first embodiment.
- FIG. 14 is a diagram illustrating a configuration of an endoscope to which the signal processing system according to the third modification of the first embodiment is applied.
- FIG. 15 is a diagram illustrating a configuration of an endoscope to which the signal processing system according to the fourth modification of the first embodiment is applied.
- FIG. 16 is a diagram illustrating a configuration of an endoscope to which the signal processing system according to the fifth modification of the first embodiment is applied.
- FIG. 17 is a diagram illustrating a configuration of an endoscope to which a signal processing system according to Modification 6 of the first embodiment is applied.
- FIG. 18 is a diagram illustrating a flowchart regarding the software processing of the signal processing in the modified example 7 of the first embodiment.
- FIG. 19 is a diagram showing a flowchart regarding the calculation processing in FIG. 18.
- FIG. 20 is a diagram showing a configuration of an endoscope to which the signal processing system according to the second embodiment of the present invention is applied.
- FIG. 21 is a diagram illustrating an example of the configuration of the second calculation unit in FIG. 20.
- FIG. 22 is a diagram illustrating a configuration of an endoscope to which the signal processing system according to the first modification of the second embodiment is applied.
- FIG. 23 is a diagram illustrating an emphasis gain generation function.
- FIG. 24 is a diagram illustrating a flowchart regarding software processing of signal processing in the second modification of the second embodiment.
- FIG. 25 is a diagram illustrating a flowchart regarding the second calculation process in FIG. 24.
- FIG. 26 is a diagram showing a configuration of a microscope to which the signal processing system according to the third embodiment of the present invention is applied.
- FIG. 27 is a diagram illustrating an example of the configuration of the correlation coefficient calculation unit in FIG. 26.
- FIG. 28 is a diagram illustrating a configuration of a microscope to which the signal processing system according to the first modification of the third embodiment is applied.
- FIG. 29 is a diagram illustrating a configuration of a microscope to which the signal processing system according to the second modification of the third embodiment is applied.
- FIG. 30 is a diagram illustrating a configuration of a microscope to which the signal processing system according to the third modification of the third embodiment is applied.
- FIG. 31 is a flowchart illustrating signal processing software processing in Modification 4 of the third embodiment.
- FIG. 32 is a diagram showing a flowchart regarding the correlation coefficient calculation processing in FIG. 31.
- an endoscope to which the signal processing system according to the first embodiment of the present invention is applied includes an imaging lens system 100, a CCD 101, an illumination lens system 102, an illumination light source 103, an optical fiber 104, an amplification unit (indicated in the figure as Gain) 105, an A/D converter 106, a buffer 107, an interpolation unit 108, a WB unit 109, a photometric evaluation unit 110, a signal processing unit 111, a calculation unit 112, a switching unit 113, a basis vector ROM 114, a system spectral characteristic ROM 115, a normalization unit 116, an output unit 117, a control unit 118, and an external I/F unit 119.
- a thick solid line arrow indicates the direction of the video signal
- a thin solid line arrow indicates the direction of the control signal
- a broken line arrow indicates the direction of other signals (the same applies to the other figures).
- An imaging lens system 100, a CCD 101, and an illumination lens system 102 are arranged at the distal end portion of the endoscope that is inserted into the body of the subject.
- the illumination light source 103 is disposed, for example, on the rear end side of the endoscope, and the illumination light from the illumination light source 103 is guided to the distal end portion of the endoscope via the optical fiber 104 and irradiates a subject (not shown) through the illumination lens system 102.
- the CCD 101 images the illuminated subject, and the video signal obtained by the imaging is amplified by the amplification unit 105 and then converted into a digital signal by the A / D converter 106.
- the digital video signal from the A / D converter 106 is transferred to the interpolation unit 108 via the buffer 107.
- the buffer 107 is also connected to the WB unit 109 and the photometric evaluation unit 110.
- the WB unit 109 is connected to the amplification unit 105, and the photometric evaluation unit 110 is connected to the illumination light source 103 and the amplification unit 105.
- the interpolation unit 108 is connected to the signal processing unit 111 and the calculation unit 112.
- the signal processing unit 111 is connected to the switching unit 113.
- the basis vector ROM 114 and the system spectral characteristic ROM 115 are connected to the calculation unit 112.
- the calculation unit 112 is connected to the switching unit 113 via the normalization unit 116.
- the switching unit 113 is connected to an output unit 117 such as a liquid crystal display.
- the control unit 118, such as a microcomputer, is bidirectionally connected to the amplification unit 105, the A/D converter 106, the interpolation unit 108, the WB unit 109, the photometric evaluation unit 110, the signal processing unit 111, the calculation unit 112, the switching unit 113, the normalization unit 116, and the output unit 117.
- An external I / F unit 119 having an interface for performing settings such as a power switch, a shutter button, and switching of various modes at the time of imaging is also connected to the control unit 118 bidirectionally.
- the imaging mode is entered by pressing the shutter button.
- the external I/F unit 119 also functions, for example, as an identification target selection unit that selects one subject from a plurality of subjects to be identified, and as a color imaging system selection unit that selects one color imaging system and one illumination light from a plurality of color imaging systems and a plurality of illumination lights.
- a video signal obtained by imaging with the CCD 101 is continuously output from the CCD 101 as an analog signal at predetermined time intervals.
- a plurality of continuously output video signals are simply referred to as video signals, and a video signal for one image is referred to as a frame signal.
- 1/30 second (hereinafter referred to as one frame time) is assumed as the predetermined time interval.
- the CCD 101 is assumed to be a single-plate CCD having a Bayer-type primary color filter 120, as shown in FIG. 2, arranged on its front surface.
- the Bayer type has 2 × 2 pixels as a basic unit, in which one red (R) color filter 121R and one blue (B) color filter 121B are arranged and the green (G) color filter 121G occupies the remaining two pixels (a sketch of this layout is given below).
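- As an illustration only (not part of the patent text), the 2 × 2 basic unit described above can be written out as a small array; the exact phase of the pattern (which corner holds R) follows FIG. 2, which is not reproduced here, so the RGGB placement below is an assumption.

```python
import numpy as np

# 2 x 2 basic unit of the Bayer primary color filter 120:
# one R, one B, and two G filter elements (assumed RGGB phase).
BAYER_UNIT = np.array([["R", "G"],
                       ["G", "B"]])

# Tiling the basic unit gives the color filter array (CFA) mask
# for a sensor of height H and width W (here 4 x 6 as an example).
H, W = 4, 6
cfa = np.tile(BAYER_UNIT, (H // 2, W // 2))
print(cfa)
```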
- the analog signal from the CCD 101 is amplified by a predetermined amount by the amplifying unit 105, converted into a digital signal by the A / D converter 106, and transferred to the buffer 107.
- the buffer 107 can record a signal of one frame, and is sequentially overwritten from an old frame signal as imaging progresses.
- the frame signal in the buffer 107 is intermittently transferred to the WB unit 109 and the photometric evaluation unit 110 at the predetermined time interval based on the control of the control unit 118.
- the WB unit 109 calculates a white balance coefficient by integrating signals of a predetermined level such as an intermediate level for each color signal corresponding to the color filters 121R, 121G, and 121B. Then, the calculated white balance coefficient is transferred to the amplifying unit 105.
- the amplification unit 105 performs white balance adjustment by multiplying the amplification factor by a white balance coefficient that is different for each color signal. Further, the photometric evaluation unit 110 controls the light amount of the illumination light source 103, the amplification factor of the amplification unit 105, and the like so as to achieve proper exposure.
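- A minimal sketch of the white-balance coefficient calculation described above, assuming the per-channel signals are available as numpy arrays in [0, 1] and that the "intermediate level" is taken as a mid-tone range; the patent does not specify the level selection beyond this, so the bounds and function name below are illustrative.

```python
import numpy as np

def white_balance_coefficients(r, g, b, lo=0.2, hi=0.8):
    """Estimate per-channel white-balance gains from mid-level pixels.

    r, g, b : float arrays in [0, 1] for the three color signals.
    lo, hi  : assumed bounds of the 'intermediate level' range that is
              integrated for each color signal.
    """
    def mid_level_sum(ch):
        mask = (ch >= lo) & (ch <= hi)
        return ch[mask].sum() if mask.any() else ch.sum()

    sums = np.array([mid_level_sum(ch) for ch in (r, g, b)])
    # Gains that equalize the integrated signals, normalized to G;
    # the amplification unit 105 multiplies each channel by its gain.
    return sums[1] / sums
```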
- the interpolation unit 108 reads a single-plate frame signal from the buffer 107 based on the control of the control unit 118, and generates a three-plate frame signal by a known interpolation process.
- the generated three-plate frame signals are sequentially transferred to the signal processing unit 111 and the calculation unit 112 in units of frame signals.
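- The patent only refers to a "known interpolation process"; as one common choice, a bilinear demosaic of the Bayer single-plate signal into a three-plate (R, G, B) frame can be sketched as follows. The convolution kernels are the textbook bilinear ones, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw, cfa):
    """Turn a single-plate Bayer frame into a three-plate (R, G, B) frame.

    raw : 2-D array of sensor values.
    cfa : 2-D array of 'R'/'G'/'B' labels with the same shape
          (see the Bayer sketch above).
    """
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    planes = {}
    for ch, k in (("R", k_rb), ("G", k_g), ("B", k_rb)):
        mask = (cfa == ch).astype(float)
        # Missing samples are filled by averaging the nearest same-color pixels.
        planes[ch] = convolve(raw * mask, k, mode="mirror")
    return planes["R"], planes["G"], planes["B"]
```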
- the subsequent signal processing unit 111, calculation unit 112, and normalization unit 116 perform their processing synchronously in units of one frame signal based on the control of the control unit 118.
- the signal processing unit 111 performs known gradation processing and enhancement processing on the frame signal transferred from the interpolation unit 108 based on the control of the control unit 118, and transfers the processed frame signal to the switching unit 113.
- the basis vector ROM 114 stores a dedicated basis vector based on the known spectral characteristics of each of the subjects to be identified.
- the system spectral characteristic ROM 115 stores spectral characteristics related to each of a plurality of color imaging systems and spectral characteristics related to a plurality of illumination lights used at the time of imaging.
- the spectral characteristics related to the color imaging system mean the spectral sensitivity characteristics of the CCD 101 taking into account the spectral transmittance characteristics of the imaging lens system 100.
- the spectral characteristic related to the illumination light means a spectral luminance characteristic of the illumination light source 103 in consideration of the spectral transmittance characteristics of the transfer optical fiber 104 and the illumination lens system 102.
- based on the control of the control unit 118 according to the imaging conditions set via the external I/F unit 119, the calculation unit 112 reads basis vectors from the basis vector ROM 114 and spectral characteristics from the system spectral characteristic ROM 115. That is, from the basis vector ROM 114 it reads a dedicated basis vector based on the known spectral characteristics of one subject to be identified and dedicated basis vectors based on the known spectral characteristics of non-identification subjects, which are subjects other than the one subject to be identified.
- using the read dedicated basis vectors, the spectral characteristics related to the color imaging system, and the spectral characteristics related to the illumination light, the calculation unit 112 calculates, for the frame signal transferred from the interpolation unit 108, a weighting coefficient related to the dedicated basis vector of the subject to be identified.
- the calculated weight coefficient of the dedicated basis vector takes a value proportional to the presence of the subject to be identified as will be described later, and is transferred to the normalization unit 116.
- the normalization unit 116 performs normalization processing based on the control of the control unit 118 so that the weighting coefficient transferred from the calculation unit 112 matches the signal level of the video signal. That is, since the weighting coefficient calculated by the calculation unit 112 takes a value of "0" to "1", it is normalized, for example for an 8-bit signal level, to a value of "0" to "255" (a sketch of this step is given below). Then, the weighting coefficient after the normalization processing is transferred to the switching unit 113 as a frame signal.
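- A minimal sketch of the normalization just described, assuming the weighting coefficients arrive as a floating-point array in [0, 1] and an 8-bit output signal level; the function name is ours, not the patent's.

```python
import numpy as np

def normalize_weights(w, bits=8):
    """Map weighting coefficients in [0, 1] to the video signal level.

    For an 8-bit signal level this yields values in 0..255, matching
    the normalization performed by the normalization unit 116.
    """
    max_level = (1 << bits) - 1
    return np.clip(np.rint(w * max_level), 0, max_level).astype(np.uint16)
```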
- the switching unit 113 selects either the normal frame signal transferred from the signal processing unit 111 or the frame signal related to the presence of the identification target transferred from the normalization unit 116.
- the selected frame signal is transferred to the output unit 117 and output.
- the switching unit 113, the normalization unit 116, and the output unit 117 function as an output signal calculation unit that calculates an output signal as an identification result of a subject to be identified based on, for example, a weighting factor related to a dedicated basis vector.
- the frame signal from the normalization unit 116 is output as a monochrome signal. For example, if the output unit 117 is a display monitor, the transferred frame signal is displayed.
- the output unit 117 is not limited to a display monitor, and a form in which frame signals are sequentially recorded and stored in a recording medium such as a hard disk or a memory card is also possible.
- the calculation unit 112 includes a data selection unit 200, an integration unit 201, a buffer 202, an inverse matrix calculation unit 203, a buffer 204, a coefficient selection unit 205, and a multiplication / addition unit 206.
- the basis vector ROM 114 and the system spectral characteristic ROM 115 are connected to the data selection unit 200.
- the data selection unit 200 is connected to the coefficient selection unit 205 via the integration unit 201, the buffer 202, the inverse matrix calculation unit 203, and the buffer 204.
- the coefficient selection unit 205 and the interpolation unit 108 are connected to the multiplication / addition unit 206.
- the multiplication / addition unit 206 is connected to the normalization unit 116.
- the control unit 118 is bi-directionally connected to the data selection unit 200, the integration unit 201, the inverse matrix calculation unit 203, the coefficient selection unit 205, and the multiplication / addition unit 206.
- the data selection unit 200 receives, from the control unit 118, information on the subject to be identified in the imaging condition set via the external I / F unit 119. Based on the information, a plurality of dedicated base vectors including the dedicated base vectors based on the known spectral characteristics of the subject to be identified are read from the base vector ROM 114. As described above, the basis vector ROM 114 and the data selection unit 200 function as a basis vector acquisition unit that acquires a dedicated basis vector based on the known spectral characteristics of the subject to be identified, for example. In this embodiment, since the CCD 101 is assumed to be a Bayer type including three color filters 121R, 121G, and 121B, the total number of dedicated base vectors is 3. As one of the three types of dedicated basis vectors, a dedicated basis vector based on the known spectral characteristics of the subject to be set as the identification target is used.
- as an example of the dedicated basis vectors, three types (O1(λ), O2(λ), and O3(λ)) are shown in FIG. 4. Note that λ denotes the visible region, for example a wavelength range of 380 to 780 nm.
- the dedicated basis vector (O1(λ)) is based on the spectral reflectance characteristic of oxyhemoglobin,
- and the dedicated basis vector (O2(λ)) is based on the spectral reflectance characteristic of deoxyhemoglobin. Since oxyhemoglobin and deoxyhemoglobin are contained in large amounts at blood vessel sites, they are important for diagnosis with an endoscope.
- the dedicated basis vector (O3(λ)) is based on the spectral luminance characteristic of the autofluorescence of collagen, which is a main subject in fluorescence observation.
- Oxyhemoglobin is abundant in arteries and deoxyhemoglobin is abundant in veins. Thus, for example, when observing an artery, oxyhemoglobin is designated via the external I / F unit 119 as a subject to be identified. As a result, the data selection unit 200 reads three dedicated basis vectors including at least the dedicated basis vector (O1 ( ⁇ )) of oxyhemoglobin.
- the other two dedicated basis vectors are the dedicated basis vectors of the non-identification subject, for example, the deoxyhemoglobin dedicated basis vector (O2 ( ⁇ )) and the collagen dedicated basis vector (O3 ( ⁇ )).
- other dedicated basis vectors stored in the basis vector ROM 114 may be used.
- the data selection unit 200 further receives, from the control unit 118, information on the color imaging system and illumination light under the imaging conditions set via the external I/F unit 119. Based on this information, it reads from the system spectral characteristic ROM 115 the spectral characteristics of the imaging system, including the spectral characteristics relating to the color imaging system used for imaging the subject and the spectral characteristics relating to the illumination light used when imaging the subject by the color imaging system. As described above, the system spectral characteristic ROM 115 and the data selection unit 200 function, for example, as a system spectral characteristic acquisition unit that acquires the spectral characteristics of the imaging system.
- FIG. 5 shows a spectral luminance characteristic (I ( ⁇ )) of a xenon light source as an example of a spectral characteristic related to illumination light used for imaging the subject.
- FIG. 6 shows the spectral sensitivity characteristics (SR(λ), SG(λ), SB(λ)) of a color imaging system including the three color filters 121R, 121G, and 121B.
- the data selection unit 200 transfers the read dedicated basis vectors (O1(λ), O2(λ), O3(λ)), the spectral luminance characteristic of the light source (I(λ)), and the spectral sensitivity characteristics (SR(λ), SG(λ), SB(λ)) to the integration unit 201.
- the integration unit 201 calculates a system matrix M of 3 × 3 size related to the imaging system, represented by the following equation (1).
- the data selection unit 200 and the integration unit 201 function as a matrix calculation unit that calculates a system matrix related to the imaging system, for example.
- the system matrix M calculated by the integration unit 201 is transferred to the buffer 202 and stored therein.
- the inverse matrix calculation unit 203 reads the system matrix M from the buffer 202 under the control of the control unit 118 and calculates an inverse matrix M ⁇ 1 of the system matrix M.
- the calculated inverse matrix M ⁇ 1 is transferred to the buffer 204 and stored therein.
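- Equation (1) itself is not reproduced in this excerpt; under the usual linear-model assumption that each color signal is the wavelength integral of illumination × sensor sensitivity × basis vector, the work of the integration unit 201 and the inverse matrix calculation unit 203 might be sketched as follows, with all spectra sampled on a common wavelength grid. The assumed form of M is ours, not quoted from the patent.

```python
import numpy as np

def system_matrix(illum, sens, bases, dlam=1.0):
    """Assemble the 3 x 3 system matrix M (cf. equation (1), assumed form).

    illum : (L,) spectral luminance of the light source, I(lambda).
    sens  : (3, L) spectral sensitivities SR, SG, SB of the imaging system.
    bases : (3, L) basis vectors O1, O2, O3 (dedicated and/or general).
    dlam  : wavelength step of the sampling grid in nm.

    M[k, l] ~ integral of I(lambda) * S_k(lambda) * O_l(lambda) d lambda.
    """
    M = np.empty((3, 3))
    for k in range(3):
        for l in range(3):
            M[k, l] = np.sum(illum * sens[k] * bases[l]) * dlam
    return M

# The inverse matrix calculation unit 203 then computes M^-1:
# M_inv = np.linalg.inv(system_matrix(illum, sens, bases))
```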
- the weighting coefficients (w1, w2, w3) related to the dedicated basis vectors (O1(λ), O2(λ), O3(λ)) can then be obtained from the inverse matrix M⁻¹ and the video signal.
- i and j are the coordinates in the x and y directions of the frame signal
- m denotes each element of the inverse matrix M⁻¹ of the system matrix M, that is, a coefficient of the inverse matrix M⁻¹.
- the coefficient selection unit 205 selects the elements m11, m12, and m13 of the inverse matrix M⁻¹ of the system matrix M from the buffer 204 based on the control of the control unit 118, and transfers them to the multiplication/addition unit 206.
- after receiving the elements m11, m12, m13 of the inverse matrix M⁻¹ of the system matrix M from the coefficient selection unit 205, the multiplication/addition unit 206 reads the frame signal from the interpolation unit 108 in units of R, G, and B pixels, and obtains the weighting coefficient (w1_ij) related to the dedicated basis vector (O1(λ)) of the subject to be identified based on the following equation (3).
- the weighting coefficient (w1_ij) represents the contribution of the dedicated basis vector (O1(λ)) of the subject to be identified, and takes a value proportional to the presence of oxyhemoglobin: the value is high where oxyhemoglobin is present and low where it is not. Therefore, oxyhemoglobin can be identified by converting the weighting coefficient (w1_ij) into a video signal (a per-pixel sketch of this step is given below).
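- Equation (3) is not reproduced in this excerpt; from the surrounding description (the first row of M⁻¹ applied to the R, G, B values at each pixel), a per-pixel sketch of the multiplication/addition unit 206 might look like this, with the subsequent normalization reusing the helper shown earlier. The explicit form of equation (3) below is an assumption.

```python
import numpy as np

def weight_map(r, g, b, m_inv):
    """Per-pixel weighting coefficient for the first (dedicated) basis vector.

    r, g, b : 2-D arrays of the three-plate frame signal.
    m_inv   : 3 x 3 inverse system matrix M^-1.

    Assumed form of equation (3):
        w1_ij = m11 * R_ij + m12 * G_ij + m13 * B_ij
    i.e. the first row of M^-1 applied to each pixel's (R, G, B) values.
    """
    m11, m12, m13 = m_inv[0]
    return m11 * r + m12 * g + m13 * b

# Example flow (coefficient selection unit 205 -> multiplication/addition
# unit 206 -> normalization unit 116):
# w1 = weight_map(r, g, b, M_inv)
# frame_for_display = normalize_weights(np.clip(w1, 0.0, 1.0))
```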
- the weighting coefficient (w1_ij) calculated by the multiplication/addition unit 206 is sequentially transferred to the normalization unit 116 and subjected to normalization processing as described above. Then, the weighting coefficient after the normalization processing is transferred, via the switching unit 113, to the output unit 117, for example a display monitor, and is displayed as an output signal that is the identification result of the subject whose spectral characteristics are known.
- the identification target is oxyhemoglobin, but it is needless to say that the present invention is not limited to this.
- the subject to be identified may be switched to deoxyhemoglobin as necessary via the external I / F unit 119.
- it is also possible to obtain weighting coefficients (w1_ij, w2_ij) for both oxyhemoglobin and deoxyhemoglobin.
- a plurality of multiplication / addition units 206 may be provided, or a combination of one multiplication / addition unit 206 and a buffer for storing the calculated weighting coefficient may be used.
- the display can then be set freely: one of the two can be selected and displayed via the external I/F unit 119, both can be combined and displayed, or both can be pseudo-colored and displayed independently.
- as described above, in the first embodiment a weighting coefficient that takes a value proportional to the presence of the subject to be identified is calculated for the dedicated basis vector from the dedicated basis vector based on the known spectral characteristics of the subject to be identified, the spectral characteristics related to the color imaging system used for imaging a subject including the subject to be identified, and the spectral characteristics related to the illumination light used when the subject is imaged by the color imaging system; based on the weighting coefficient, an output signal is calculated as the result of identifying the subject to be identified whose spectral characteristics are known.
- in this way, a weighting coefficient proportional to the presence of the subject to be identified can be calculated, so it is not necessary to perform signal processing that includes an error, unlike approximation based on the conventional least squares method. Therefore, the subject to be identified can be identified with high reliability, with few errors caused by the signal processing.
- in addition, the processing time and cost can be reduced.
- a dedicated basis vector based on the known spectral characteristics of the subject to be identified, and a dedicated basis vector based on the known spectral characteristics of the non-identified subject whose spectral characteristics are known but outside the identification target And are used. Therefore, it is possible to apply signal processing using a dedicated basis vector to a region other than the identification target, and it is possible to improve the degree of freedom in processing for calculating the output signal at the subsequent stage.
- a dedicated base vector of the subject to be identified is selected from a plurality of dedicated base vectors. Therefore, since the subject to be identified can be selected, the applicability as a system is improved, and it can be used for various purposes.
- a spectral characteristic to be used is selected from spectral characteristics related to a plurality of color imaging systems or spectral characteristics related to a plurality of illumination lights according to the selection. Therefore, since a color imaging system or illumination light can be selected, the applicability as a system is improved, and it can be used for various purposes.
- the inverse matrix of the system matrix based on the dedicated basis vectors of the subjects to be identified and the spectral characteristics of the imaging system is calculated, a coefficient related to the dedicated basis vector of the subject to be identified is selected from the inverse matrix, and a weighting coefficient related to the dedicated basis vector of the subject to be identified is calculated from the selected coefficient and the video signal. Therefore, since the weighting coefficient related to the dedicated basis vector, that is, to the identification target, is calculated by signal processing based on the known spectral characteristics of the subject to be identified and the spectral characteristics of the imaging system, errors due to the signal processing are few and highly reliable identification is possible.
- since the system matrix also includes dedicated basis vectors for the non-identification subjects, it becomes possible, if necessary, to apply signal processing using a dedicated basis vector to regions where the subject to be identified does not exist, that is, regions where subjects other than the identification target exist, and the degree of freedom in the subsequent processing for calculating the output signal is improved.
- since the output signal is obtained by normalizing the weighting coefficient related to the dedicated basis vector based on the known spectral characteristics of the subject to be identified, a highly accurate output signal can be obtained regarding the presence of the identification target. Furthermore, since the output signal is obtained only by normalization processing, the processing speed can be increased and the cost can be reduced.
- the dedicated basis vectors are indicated by thick lines, and the general basis vectors are indicated by thin lines.
- as the general-purpose basis vectors (O2(λ), O3(λ)), the spectral reflectance characteristics of, for example, a Munsell color chart are subjected to principal component analysis, and the top two basis vectors having a high contribution rate are selected and stored in the basis vector ROM 114 (a sketch of this selection is given below).
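- A minimal sketch of how such general-purpose basis vectors could be derived by principal component analysis, assuming a matrix of sampled spectral reflectances (e.g. Munsell chips) is available; the variable and function names are ours.

```python
import numpy as np

def general_purpose_bases(reflectances, n_bases=2):
    """Top principal components of a set of spectral reflectances.

    reflectances : (N, L) array, one sampled reflectance spectrum per row
                   (for example, Munsell color chips on the same
                   wavelength grid as the dedicated basis vectors).
    n_bases      : number of general-purpose basis vectors to keep
                   (two in Modification 1, three in Modification 2).
    """
    centered = reflectances - reflectances.mean(axis=0)
    # Rows of vt are the principal directions, ordered by contribution rate.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_bases]
```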
- the dedicated basis vector (O1 ( ⁇ )) can be set from a plurality of dedicated basis vectors stored in the basis vector ROM 114 via the external I / F unit 119.
- as the subject to be identified, for example, oxyhemoglobin, which is contained in large amounts at blood vessel sites that are important for diagnosis with an endoscope, is selected.
- the general-purpose basis vectors (O2(λ), O3(λ)) make it possible, as necessary, to apply signal processing using basis vectors to areas where there is no subject to be identified, that is, areas where subjects other than the identification target are present, and can be used in the subsequent processing for calculating the output signal.
- FIG. 8 is a diagram showing three types of basis vectors when deoxyhemoglobin is used as the dedicated basis vector (O1 ( ⁇ )).
- the dedicated basis vectors are indicated by thick lines and the general basis vectors are indicated by thin lines.
- the inverse matrix of the system matrix is calculated based on the general-purpose basis vectors, the dedicated basis vector, and the spectral characteristics of the imaging system, a coefficient related to the dedicated basis vector is selected from the inverse matrix, and a weighting coefficient for the dedicated basis vector is calculated based on the selected coefficient and the video signal. Therefore, since the weighting coefficient related to the dedicated basis vector, that is, to the identification target, is calculated by signal processing based on the known spectral characteristics of the subject to be identified and the spectral characteristics of the imaging system, errors due to the signal processing are few and highly reliable identification is possible.
- since the system matrix includes general-purpose basis vectors, signal processing using the general-purpose basis vectors can be applied, as needed, to areas where there is no subject to be identified, that is, areas where subjects other than the identification target exist; therefore, the degree of freedom in the subsequent process of calculating the output signal is improved.
- a single CCD having a Bayer-type primary color filter 120 composed of R, G, and B color filters 121R, 121G, and 121B arranged on the front surface is assumed in the imaging system.
- the present invention can be applied to a single-plate CCD, a two-plate CCD, or a three-plate CCD in which a primary color filter composed of four color filters or a color difference line sequential complementary color filter is disposed on the front surface.
- the primary color filter 122 including four color filters has 2 × 2 pixels as a basic unit, in which the four color filters 121R, 121Gr, 121Gb, and 121B of R, Gr, Gb, and B are arranged one pixel each.
- the color difference line sequential complementary color filter 123 has 2 × 2 pixels as a basic unit, in which a cyan (Cy) color filter 121Cy, a magenta (Mg) color filter 121Mg, a yellow (Ye) color filter 121Ye, and a green (G) color filter 121G are arranged one pixel each. However, the positions of the Mg color filter 121Mg and the G color filter 121G are swapped on alternate lines.
- the spectral sensitivity characteristics (SCy(λ), SMg(λ), SYe(λ), SG(λ)) of the color imaging system when this color difference line sequential complementary color filter 123 is used are, for example, as shown in FIG. 11.
- the configuration using the single-plate image sensor in which the Bayer type primary color filter 120 or the color difference line sequential complementary color filter 123 is disposed on the front surface has high affinity with the conventional imaging system, and can be applied to many imaging systems.
- a configuration is also possible in which a plurality of spectral characteristics relating to color imaging systems and to the illumination light used for imaging the subject are recorded in the system spectral characteristic ROM 115, and the characteristics to be used in actual imaging are selected via the external I/F unit 119.
- the total number of dedicated base vectors and general-purpose base vectors is four.
- the system matrix M shown in the above equation (1) and the inverse matrix M ⁇ 1 of the system matrix M shown in the above equation (2) have a size of 4 ⁇ 4.
- the right side of the above equation (3) also has a form in which the fourth term is added.
- FIGS. 12 and 13 show examples of dedicated basis vectors and general-purpose basis vectors, which are a total of four.
- FIG. 12 includes a kind of dedicated basis vector (O1 ( ⁇ )) and three kinds of general-purpose basis vectors (O2 ( ⁇ ), O3 ( ⁇ ), and O4 ( ⁇ )).
- the dedicated basis vectors are indicated by thick lines, and the general basis vectors are indicated by thin lines.
- the general-purpose basis vectors are stored in the basis vector ROM 114 by performing principal component analysis on spectral reflectance characteristics such as Munsell color chart and selecting the top three basis vectors having a high contribution rate.
- the dedicated basis vector indicates the case of oxyhemoglobin shown in FIG.
- FIG. 13 includes two types of dedicated basis vectors (O1 ( ⁇ ), O2 ( ⁇ )) and two types of general-purpose basis vectors (O3 ( ⁇ ), O4 ( ⁇ )).
- the dedicated basis vectors are indicated by thick lines
- the general basis vectors are indicated by thin lines.
- the two types of dedicated basis vectors are the cases of oxyhemoglobin and deoxyhemoglobin shown in FIG.
- a dedicated basis vector based on all known spectral characteristics of the subject can be used as the basis vector.
- the present invention can be applied to both moving images and still images.
- the normalization unit 116 may be omitted from the configuration illustrated in FIG. 1, and the switching unit 113 may be replaced with the enhancement unit 124.
- the basic configuration is the same as in FIG. 1, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the signal processing unit 111 and the calculation unit 112 are connected to the enhancement unit 124.
- the emphasis unit 124 is connected to the output unit 117.
- the control unit 118 is bidirectionally connected to the emphasizing unit 124.
- a normal frame signal is transferred from the signal processing unit 111 to the enhancement unit 124, and a weighting coefficient related to a dedicated base vector based on the known spectral characteristics of the subject to be identified is transferred from the calculation unit 112.
- the enhancement unit 124 performs enhancement processing on the frame signal transferred from the signal processing unit 111 based on the weighting coefficient transferred from the calculation unit 112 based on the control of the control unit 118.
- as the enhancement processing, known edge enhancement processing and saturation enhancement processing are assumed, and the enhancement amounts are set in proportion to the weighting coefficient (a sketch of this weighting-dependent enhancement is given below).
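- A minimal sketch of enhancement whose strength is proportional to the weighting coefficient, assuming an RGB frame in [0, 1] and the weight map from the earlier sketch; simple unsharp masking and saturation scaling stand in for the "known" edge and saturation enhancement, which the patent does not specify.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(frame_rgb, w, edge_gain=1.0, sat_gain=0.5):
    """Edge and saturation enhancement proportional to the weight map w.

    frame_rgb : (H, W, 3) array in [0, 1] from the signal processing unit.
    w         : (H, W) weighting coefficients in [0, 1] from the
                calculation unit (presence of the identification target).
    """
    out = frame_rgb.copy()
    # Edge enhancement: unsharp mask scaled by the local weight.
    for c in range(3):
        blur = gaussian_filter(frame_rgb[..., c], sigma=1.0)
        out[..., c] += edge_gain * w * (frame_rgb[..., c] - blur)
    # Saturation enhancement: push channels away from the local mean.
    mean = out.mean(axis=2, keepdims=True)
    out = mean + (1.0 + sat_gain * w[..., None]) * (out - mean)
    return np.clip(out, 0.0, 1.0)
```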
- the frame signal after the enhancement process is transferred to the output unit 117.
- the enhancement unit 124 and the output unit 117 function as an output signal calculation unit that calculates an output signal as a result of identification of a subject to be identified based on, for example, a weight coefficient related to a dedicated basis vector.
- by performing the enhancement processing based on the weighting coefficient related to the dedicated basis vector, which is based on the spectral characteristics of the subject to be identified, only the existence region of the subject to be identified, such as oxyhemoglobin, is emphasized, and the recognizability can be improved.
- since the normally processed video signal is also output for regions where the subject to be identified does not exist, that is, regions where subjects other than the identification target exist, the entire video signal is easy to recognize and the operability for the user is improved.
- the total number of basis vectors is not limited to three as described in the second modification, and may be a number that matches the number of types of filters that pass video signals from the color imaging system.
- the present invention can be applied to both moving images and still images.
- the signal processing unit 111 and the normalization unit 116 are connected to the synthesis unit 125.
- the combining unit 125 is connected to the output unit 117.
- the control unit 118 is bidirectionally connected to the synthesis unit 125.
- a normal frame signal transferred from the signal processing unit 111 is transferred to the combining unit 125, and a frame signal related to the presence of the identification target is transferred from the normalizing unit 116.
- the synthesizing unit 125 synthesizes the frame signal related to the presence of the identification target transferred from the normalization unit 116 with respect to the frame signal transferred from the signal processing unit 111 based on the control of the control unit 118.
- as the synthesis processing, a process such as known superimposition is assumed (a sketch is given below).
- the frame signal after the synthesis process is transferred to the output unit 117.
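- A minimal sketch of such a superimposition, assuming the identification-presence frame from the normalization unit is an 8-bit monochrome map and the normal frame is RGB; the pseudo-color and blending weights are illustrative choices, not from the patent.

```python
import numpy as np

def superimpose(frame_rgb, presence_u8, color=(0.0, 1.0, 0.0), alpha=0.5):
    """Blend the identification-presence map onto the normal frame.

    frame_rgb   : (H, W, 3) normally processed frame in [0, 1].
    presence_u8 : (H, W) normalized weighting coefficients in 0..255.
    color       : pseudo-color used to mark the identification target.
    alpha       : maximum blending strength.
    """
    presence = presence_u8.astype(float) / 255.0
    mix = alpha * presence[..., None]            # per-pixel blend weight
    tint = np.asarray(color, float)[None, None, :]
    return np.clip((1.0 - mix) * frame_rgb + mix * tint, 0.0, 1.0)
```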
- the normalization unit 116, the synthesis unit 125, and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of a subject to be identified based on a weighting coefficient related to a dedicated basis vector.
- a highly accurate output signal is obtained regarding the existence region of the subject to be identified.
- since the synthesis is performed with the normally processed video signal, a normally processed video signal is output even in regions where there is no subject to be identified, that is, regions where subjects other than the identification target exist; as a result, the entire video signal can be easily recognized, and the operability for the user is improved.
- the synthesizing unit 125 may also perform a synthesis process in which a window is displayed on a part of the screen and the frame signal from the signal processing unit 111 or the frame signal from the normalization unit 116 is displayed there as a sub-screen.
- all of the dedicated base vectors may be used, or at least the dedicated base vectors of the subject to be identified may be used, and the general-purpose base vectors may be used for others.
- the total number of basis vectors is not limited to three.
- the present invention can be applied to both moving images and still images.
- the switching unit 113 may be omitted from the configuration shown in FIG. 1, and a second output unit 126 different from the output unit 117 may be added.
- the basic configuration is the same as in FIG. 1, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the signal processing unit 111 is connected to the output unit 117.
- the normalization unit 116 is connected to the second output unit 126.
- the control unit 118 is bidirectionally connected to the second output unit 126.
- the signal processing unit 111 transfers a normal frame signal to the output unit 117 based on the control of the control unit 118.
- the output unit 117 displays a normal frame signal.
- the normalization unit 116 transfers a frame signal related to the presence of the identification target to the second output unit 126. In the second output unit 126, a frame signal relating to the presence of the identification target is displayed.
- the normalization unit 116, the output unit 117, and the second output unit 126 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of a subject to be identified based on a weighting coefficient related to a dedicated basis vector.
- by normalizing the weighting coefficient related to the dedicated basis vector based on the known spectral characteristics of the subject to be identified, a highly accurate output signal regarding the presence of the identification target is obtained as the video signal related to the identification target.
- since the normally processed video signal is also output independently, the entire video signal is easy to recognize, and the operability for the user is improved.
- all of the dedicated base vectors may be used, or at least the dedicated base vectors of the subject to be identified may be used, and the general-purpose base vectors may be used for others.
- the total number of basis vectors is not limited to three.
- the present invention can be applied to both moving images and still images.
- the signal processing system is configured integrally with an imaging unit including the imaging lens system 100, the CCD 101, the illumination lens system 102, the illumination light source 103, the optical fiber 104, the amplification unit 105, the A/D converter 106, the WB unit 109, and the photometric evaluation unit 110. However, it is not necessary to be limited to such a configuration.
- a video signal captured by a separate imaging unit may be recorded in unprocessed Raw data form, with additional information regarding imaging conditions such as the subject to be identified, the color imaging system, and the illumination light recorded in a header part, and the data stored in a recording medium such as a hard disk or a memory card can be read and processed.
- in this case, the photometric evaluation unit 110 is omitted, and an input unit 127 and a header information analysis unit 128 are added.
- the basic configuration is the same as in FIG. 1, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the input unit 127 is connected to the buffer 107 and the header information analysis unit 128.
- the control unit 118 is bidirectionally connected to the input unit 127 and the header information analysis unit 128.
- via the external I/F unit 119, such as a mouse or keyboard, signals and header information stored in a recording medium such as a hard disk or memory card, or signals and header information received via a network, are input through the input unit 127. The video signals are read sequentially, one at a time, at predetermined time intervals, in this modification at intervals of one frame time.
- the signal from the input unit 127 is transferred to the buffer 107 and the header information is transferred to the header information analysis unit 128.
- the header information analysis unit 128 extracts information at the time of imaging from the header information and transfers it to the control unit 118.
- the subsequent processing is the same as in FIG.
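- As an illustration of the kind of input handling described above, the following is a minimal sketch; the container layout (a length-prefixed JSON header followed by raw frame bytes) and all field names are hypothetical, since the patent does not specify a file format.

```python
import json
import struct
from dataclasses import dataclass

@dataclass
class HeaderInfo:
    # Hypothetical header fields standing in for the additional information
    # regarding imaging conditions described in this modification.
    subject_to_identify: str      # e.g. "oxyhemoglobin"
    color_imaging_system: str     # identifies the CCD / color filter layout
    illumination: str             # identifies the light source

def read_recording(path):
    """Read a recorded video signal and its header information.

    The container layout used here (a little-endian length prefix, a JSON
    header, then raw frame bytes) is a made-up illustration; the patent does
    not specify a file format."""
    with open(path, "rb") as f:
        (hdr_len,) = struct.unpack("<I", f.read(4))
        header = HeaderInfo(**json.loads(f.read(hdr_len)))
        raw_frames = f.read()      # remaining bytes: unprocessed raw frame data
    return header, raw_frames
```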
- All of the basis vectors used may be dedicated basis vectors; as long as at least the dedicated basis vector of the subject to be identified is used, general-purpose basis vectors may be used for the others.
- the total number of basis vectors is not limited to three, and may be adjusted to the number of types of filters that pass the acquired video signal.
- the present invention can be applied to both moving images and still images.
- (Modification 7) Furthermore, in the first embodiment, processing based on hardware is assumed, but the configuration is not limited to this.
- For example, the video signal from the CCD 101 may be output as unprocessed raw data, with the control unit 118 outputting imaging conditions such as the subject to be identified, the color imaging system, and the illumination light as header information, and a configuration in which the video signal and header information are input to a computer (not shown) and processed by software is also possible.
- In the software processing, the computer first inputs the video signal and the header information regarding imaging conditions such as the subject to be identified, the color imaging system, and the illumination light (step S101).
- Here, a case will be described as an example in which a video signal from a Bayer-type single-plate CCD having three color filters 121R, 121G, and 121B is processed, and in which a dedicated basis vector (O1(λ)) and general-purpose basis vectors (O2(λ), O3(λ)) are used, as in the first modification.
- Next, a plurality of dedicated basis vectors and a plurality of general-purpose basis vectors are input (step S102), and spectral luminance characteristics of a plurality of light sources and spectral sensitivity characteristics of a plurality of color imaging systems are input (step S103).
- The plurality of basis vectors and the plurality of spectral characteristics are input by reading them from a recording medium provided in the computer or from a removable recording medium, or by receiving them via a network.
- Then, by a calculation process that will be described in detail later, the predetermined coefficients of the inverse matrix M⁻¹ of the system matrix M used for calculating the weighting coefficient related to the dedicated basis vector (O1(λ)), that is, the elements m11, m12, and m13, are calculated (step S104).
- frame signals are sequentially extracted from the input video signals (step S105), and three-plate frame signals are generated by a known interpolation process (step S106).
- signal processing such as known gradation processing and enhancement processing is performed on the frame signal (step S107).
- Then, the weighting coefficient (w1ij) related to the dedicated basis vector (O1(λ)) is calculated as shown in the above equation (3) (step S108).
- Next, a frame signal relating to the presence of the identification target is generated from the calculated weighting coefficient (step S109).
- Then, either the normal frame signal from step S107 or the frame signal related to the presence of the subject to be identified from step S109 is selected (step S110), and the selected frame signal is output to a display monitor (not shown) connected to the computer (step S111). Thereafter, it is determined whether or not all frame signals have been processed (step S112); if not, the process returns to step S105, and if so, the process ends.
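- The per-frame flow of steps S105 through S112 can be summarized in a short sketch; the helper functions are simplified stand-ins for the known interpolation, gradation, and enhancement processing, and the coefficients m11, m12, m13 are assumed to come from the calculation process of step S104 described next.

```python
import numpy as np

def demosaic(raw):
    # Stand-in for step S106: assume the input is already a (H, W, 3) array.
    return raw.astype(float)

def tone_and_enhance(rgb):
    # Stand-in for step S107 (known gradation / enhancement processing).
    return np.clip(rgb, 0, 255).astype(np.uint8)

def normalize_to_8bit(w):
    # Step S109: scale the weighting coefficient to the video signal level.
    w = w - w.min()
    return (255.0 * w / (w.max() + 1e-12)).astype(np.uint8)

def process_video(frames, m11, m12, m13, show_target=True):
    out = []
    for raw in frames:                                   # step S105: next frame signal
        rgb = demosaic(raw)                              # step S106: three-plate signal
        normal = tone_and_enhance(rgb)                   # step S107: normal processing
        # Step S108: weighting coefficient for the dedicated basis vector (equation (3)).
        w1 = m11 * rgb[..., 0] + m12 * rgb[..., 1] + m13 * rgb[..., 2]
        target = normalize_to_8bit(w1)                   # step S109: presence of target
        out.append(target if show_target else normal)    # steps S110-S111: switch / display
    return out                                           # step S112: all frames processed
```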
- The calculation process in step S104 is performed as shown in FIG. First, the data to be used are selected from among the plurality of dedicated basis vectors and the plurality of general-purpose basis vectors input in step S102 and the spectral luminance characteristics of the plurality of light sources and the spectral sensitivity characteristics of the plurality of color imaging systems input in step S103 (step S201). The selection is based on the subject to be identified, the color imaging system, the illumination light, and the other imaging conditions in the header information input in step S101. For example, a dedicated basis vector (O1(λ)) and general-purpose basis vectors (O2(λ), O3(λ)) as shown in FIG. 7, the spectral luminance characteristic of a light source as shown in FIG. 5, and the spectral sensitivity characteristics of a color imaging system as shown in FIG. are selected.
- Next, the system matrix M shown in the above equation (1) is calculated (step S202), and the inverse matrix M⁻¹ of the system matrix M is calculated (step S203). Then, the elements m11, m12, and m13 of the inverse matrix M⁻¹ required for calculating the weighting coefficient (w1ij) of the dedicated basis vector (O1(λ)) are selected as the predetermined coefficients (step S204), and the selected elements m11, m12, and m13 of the inverse matrix M⁻¹ are output (step S205).
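- The following is a minimal sketch of this calculation process (steps S202 through S205), assuming that equation (1) forms each element of the 3×3 system matrix M by integrating, over wavelength, the product of a color channel's spectral sensitivity, the light source's spectral luminance, and one basis vector; the synthetic spectra below are placeholders for the data selected in step S201.

```python
import numpy as np

# Synthetic stand-ins for the sampled spectra; the real curves come from the
# data selected in step S201 (basis vectors, light source, sensor sensitivities).
lam = np.linspace(380.0, 780.0, 81)                  # wavelength grid in nm (assumed)
def gauss(mu, sig):
    return np.exp(-0.5 * ((lam - mu) / sig) ** 2)

O = np.stack([gauss(560, 40), gauss(450, 60), gauss(650, 60)])  # O1 (dedicated), O2, O3
S = np.stack([gauss(600, 50), gauss(540, 50), gauss(460, 50)])  # SR, SG, SB sensitivities
I = np.ones_like(lam)                                           # light source luminance I(lambda)

# Step S202: 3x3 system matrix M, assuming equation (1) integrates S_k * I * O_l
# over wavelength for each sensor channel k and basis vector l.
M = np.einsum('kw,w,lw->kl', S, I, O)

# Step S203: inverse matrix of M (equation (2)).
M_inv = np.linalg.inv(M)

# Steps S204-S205: select and output the coefficients needed for the weighting
# coefficient of the dedicated basis vector, i.e. the elements m11, m12, m13.
m11, m12, m13 = M_inv[0]
print(m11, m12, m13)
```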
- When a still image is processed, the determination in step S112 is omitted and the process ends.
- In an endoscope to which the signal processing system according to the second embodiment of the present invention is applied, as shown in FIG. 20, an imaging system ROM 129, an illumination system ROM 130, a second calculation unit 131, a second normalization unit 132, and a pseudo color unit 133 are added to the configuration of the first embodiment shown in FIG. 1.
- the basic configuration is the same as that of the first embodiment, and the same name and reference number are assigned to the same configuration. Only the different parts will be described below.
- the imaging system ROM 129 incorporated in the endoscope body and the illumination system ROM 130 incorporated in the illumination body are connected to the control unit 118.
- the interpolation unit 108 is connected to the signal processing unit 111, the calculation unit 112, and the second calculation unit 131.
- the basis vector ROM 114 and the system spectral characteristic ROM 115 are connected to the calculation unit 112 and the second calculation unit 131.
- the second calculation unit 131 is connected to the second normalization unit 132.
- the normalization unit 116 and the second normalization unit 132 are connected to the pseudo color unit 133.
- the pseudo color unit 133 is connected to the switching unit 113.
- the control unit 118 is bidirectionally connected to the second calculation unit 131, the second normalization unit 132, and the pseudo color unit 133.
- The control unit 118 sets the subject to be identified via the external I/F unit 119, sets the color imaging system based on information from the imaging system ROM 129, and sets imaging conditions such as the illumination light based on information from the illumination system ROM 130.
- That is, the imaging system and the illumination system are each specified by their respective ROMs, the imaging system ROM 129 and the illumination system ROM 130.
- If the imaging system ROM 129 stores the spectral characteristics of the color imaging system used for imaging the subject including the subject to be identified, and the illumination system ROM 130 stores the spectral characteristics of the illumination light used when imaging the subject with that color imaging system, the system spectral characteristic ROM 115 may be omitted, and a new imaging system or illumination system that is not registered in the system spectral characteristic ROM 115 can also be handled.
- the interpolation unit 108 reads a single-plate frame signal from the buffer 107 based on the control of the control unit 118, and generates a three-plate frame signal by a known interpolation process.
- the generated three-plate frame signals are sequentially transferred to the signal processing unit 111, the calculation unit 112, and the second calculation unit 131 in units of frame signals.
- The subsequent signal processing unit 111, calculation unit 112, normalization unit 116, second calculation unit 131, second normalization unit 132, and pseudo color unit 133 perform their processing synchronously, in units of one frame signal, based on the control of the control unit 118.
- the signal processing unit 111 performs known gradation processing and enhancement processing on the frame signal transferred from the interpolation unit 108 based on the control of the control unit 118, and transfers the processed frame signal to the switching unit 113.
- the calculation unit 112 calculates a weighting factor for a dedicated base vector based on an object to be identified, for example, oxyhemoglobin shown in FIG. 7, and transfers the weighting factor to the normalization unit 116.
- Based on the control of the control unit 118, the normalization unit 116 performs normalization processing so that the weighting coefficient transferred from the calculation unit 112 matches the signal level of the video signal, and transfers the processed weighting coefficient to the pseudo color unit 133 as a frame signal relating to the presence of the identification target.
- Based on the control of the control unit 118, the second calculation unit 131 reads, from the basis vector ROM 114, the dedicated basis vector based on the known spectral characteristics of the subject to be identified shown in FIG. 7 and the general-purpose basis vectors used for estimating the spectral characteristics of an arbitrary subject. It further reads, from the system spectral characteristic ROM 115, the spectral characteristic related to the illumination light used when the subject is imaged by the color imaging system, shown in FIG. 5, and the spectral characteristic related to the color imaging system, shown in FIG.
- a weighting coefficient for the general-purpose base vector is calculated for the frame signal transferred from the interpolation unit 108 using the dedicated base vector, the general-purpose base vector, the spectral characteristics for the color imaging system, and the spectral characteristics for the illumination light.
- the calculated weight coefficient of the general-purpose base vector is transferred to the second normalization unit 132.
- the second normalization unit 132 performs normalization processing based on the control of the control unit 118 so that the weighting coefficient transferred from the second calculation unit 131 matches the signal level of the video signal.
- the processed weight coefficient is transferred to the pseudo color unit 133 as a frame signal related to an area where no identification target exists, that is, an area where a subject other than the identification target exists.
- the pseudo color unit 133 performs pseudo color processing from the frame signal transferred from the normalization unit 116 and the frame signal transferred from the second normalization unit 132 based on the control of the control unit 118.
- The pseudo-colorization processing is implemented by assigning the frame signal related to the presence of the identification target transferred from the normalization unit 116 to the R signal, and assigning the frame signal, transferred from the second normalization unit 132, related to the region where the identification target does not exist, that is, the region where subjects other than the identification target exist, to the G and B signals.
- the frame signal subjected to the pseudo color process is transferred to the switching unit 113.
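- The channel assignment performed by the pseudo color unit 133 can be sketched as follows; the function name and the assumption that both inputs are 8-bit single-channel frames of the same size are illustrative, not taken from the patent.

```python
import numpy as np

def pseudo_color(target_frame, non_target_frame):
    """Assemble a pseudo-color frame: the frame signal on the presence of the
    identification target drives R, and the frame signal on the region of
    non-target subjects drives G and B (a sketch; both inputs are assumed to
    be 8-bit single-channel images of the same size)."""
    out = np.empty(target_frame.shape + (3,), dtype=np.uint8)
    out[..., 0] = target_frame       # R  <- normalized weight of the dedicated basis vector
    out[..., 1] = non_target_frame   # G  <- normalized weight(s) of the other basis vectors
    out[..., 2] = non_target_frame   # B
    return out
```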
- the switching unit 113 selects one of a normal frame signal transferred from the signal processing unit 111 and a frame signal subjected to pseudo color processing transferred from the pseudo color unit 133, For example, the image is transferred to the output unit 117 which is a display monitor and displayed.
- the output unit 117 is not limited to a display monitor, and a form in which frame signals are sequentially recorded and stored in a recording medium such as a hard disk or a memory card is also possible.
- In this way, the normalization unit 116, the second normalization unit 132, the pseudo colorization unit 133, the switching unit 113, and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the weighting coefficient related to the dedicated basis vector.
- the second calculation unit 131 includes a data selection unit 210, an integration unit 211, a buffer 212, an inverse matrix calculation unit 213, a buffer 214, a coefficient selection unit 215, and a multiplication / addition unit 216.
- the basis vector ROM 114 and the system spectral characteristic ROM 115 are connected to the data selection unit 210.
- the data selection unit 210 is connected to the coefficient selection unit 215 via the integration unit 211, the buffer 212, the inverse matrix calculation unit 213, and the buffer 214.
- the coefficient selection unit 215 and the interpolation unit 108 are connected to the multiplication / addition unit 216.
- the multiplication / addition unit 216 is connected to the second normalization unit 132.
- the control unit 118 is bi-directionally connected to the data selection unit 210, the integration unit 211, the inverse matrix calculation unit 213, the coefficient selection unit 215, and the multiplication / addition unit 216.
- The data selection unit 210 receives, from the control unit 118, information on the subject to be identified set via the external I/F unit 119. Based on this information, it reads from the basis vector ROM 114 the dedicated basis vector (O1(λ)) based on the spectral characteristics of the subject to be identified shown in FIG. 7 and the general-purpose basis vectors (O2(λ), O3(λ)) used for estimating the spectral characteristics of an arbitrary subject. In this way, the basis vector ROM 114 and the data selection unit 210 function, for example, as a basis vector acquisition unit that acquires a dedicated basis vector based on the known spectral characteristics of the subject to be identified.
- The data selection unit 210 also receives, from the control unit 118, information on the color imaging system set based on information from the imaging system ROM 129 and information on the illumination light set based on information from the illumination system ROM 130. Based on this information, it reads from the system spectral characteristic ROM 115 the spectral luminance characteristic (I(λ)) of the light source shown in FIG. 5 and the spectral sensitivity characteristics (SR(λ), SG(λ), SB(λ)) relating to the color imaging system shown in FIG. In this way, the imaging system ROM 129, the illumination system ROM 130, and the data selection unit 210 function, for example, as a system spectral characteristic acquisition unit that acquires the spectral characteristics of the imaging system.
- The dedicated basis vector (O1(λ)), the general-purpose basis vectors (O2(λ), O3(λ)), the spectral luminance characteristic (I(λ)) of the light source, and the spectral sensitivity characteristics (SR(λ), SG(λ), SB(λ)) of the color imaging system are transferred to the integration unit 211.
- The integration unit 211 calculates the 3×3 system matrix M related to the imaging system, represented by the above equation (1).
- the data selection unit 210 and the integration unit 211 function as a matrix calculation unit that calculates a system matrix related to the imaging system, for example.
- the calculated system matrix M is transferred to the buffer 212 and stored.
- The inverse matrix calculation unit 213 reads the system matrix M from the buffer 212 under the control of the control unit 118 and calculates the inverse matrix M⁻¹.
- The calculated inverse matrix M⁻¹ is transferred to the buffer 214 and stored.
- Based on the control of the control unit 118, the coefficient selection unit 215 selects, from the buffer 214, the elements m21, m22, m23, m31, m32, and m33 of the inverse matrix M⁻¹ of the system matrix M and transfers them to the multiplication/addition unit 216. Based on the control of the control unit 118, after the elements m21, m22, m23, m31, m32, and m33 of the inverse matrix M⁻¹ are transferred from the coefficient selection unit 215, the multiplication/addition unit 216 reads the frame signal from the interpolation unit 108 in units of R, G, and B pixels. Then, the weighting coefficients (w2ij, w3ij) related to the general-purpose basis vectors (O2(λ), O3(λ)) are obtained based on the following equations (4) and (5).
- The weighting coefficients (w2ij, w3ij) are transferred to the second normalization unit 132.
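- The per-pixel multiplication/addition of equations (4) and (5) can be sketched as follows, assuming that, analogously to equation (3), each weighting coefficient is the inner product of one row of the inverse system matrix with the (R, G, B) values of a pixel.

```python
import numpy as np

def general_weights(rgb, M_inv):
    """Per-pixel weighting coefficients w2, w3 for the general-purpose basis
    vectors. Assumes equations (4) and (5) apply the second and third rows of
    the inverse system matrix to the (R, G, B) values, mirroring equation (3).
    rgb: float array of shape (H, W, 3); M_inv: 3x3 inverse system matrix."""
    w2 = np.tensordot(rgb, M_inv[1], axes=([-1], [0]))   # m21*R + m22*G + m23*B
    w3 = np.tensordot(rgb, M_inv[2], axes=([-1], [0]))   # m31*R + m32*G + m33*B
    return w2, w3
```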
- the configuration of the second calculation unit 131 is basically the same as that of the calculation unit 112 shown in the first embodiment. For this reason, it is possible to integrate both of them and to calculate all the weight coefficients related to the dedicated base vector and the general-purpose base vector by one calculation unit.
- Note that, instead of the general-purpose basis vectors, the second calculation unit 131 may use dedicated basis vectors of subjects whose spectral characteristics are known, together with the dedicated basis vector of the subject to be identified.
- In this way, a weighting coefficient related to the dedicated basis vector of the subject to be identified, which takes a value proportional to the presence of that subject, and a weighting coefficient related to the dedicated or general-purpose basis vector of the non-identification subject are calculated from: the dedicated basis vector based on the known spectral characteristics of the subject to be identified; a dedicated basis vector based on the known spectral characteristics of the non-identification subject, or a general-purpose basis vector used for estimating the spectral characteristics of an arbitrary subject; the spectral characteristics related to the color imaging system used for imaging the subject including the subject to be identified; and the spectral characteristics related to the illumination light used when that color imaging system images the subject. An output signal is then calculated based on both weighting coefficients.
- a weighting factor that takes a value proportional to the presence of the subject to be identified is calculated. Therefore, it is not necessary to perform signal processing including an error unlike the approximation based on the conventional least square method. Therefore, it is possible to identify a subject to be identified with high reliability with less occurrence of errors due to signal processing. In addition, since wide-band normal illumination light is used, the influence of noise can be suppressed, and stable identification becomes possible. Furthermore, since the output signal is directly calculated from the weighting coefficient related to the dedicated basis vector, the processing speed can be increased and the cost can be reduced.
- the video signal related to the identification target is obtained by normalizing the weighting coefficient related to the dedicated basis vector based on the known spectral characteristics of the subject to be identified, a high-accuracy output signal can be obtained regarding the presence of the identification target.
- the present invention can be applied to an imaging system including four primary color filters 122 or a color difference line sequential complementary color filter 123, and a two-plate or three-plate CCD.
- In this case, the pseudo colorization unit 133, the switching unit 113, and the output unit 117 function as a pseudo color signal calculation unit that calculates a pseudo color signal as the output signal from the weighting coefficient related to the dedicated basis vector of the subject to be identified and the weighting coefficient related to the dedicated basis vector of the non-identification subject, each normalized by the normalization unit 116.
- the present invention can be applied to both moving images and still images.
- Since pseudo color display is performed using a signal obtained by normalizing the weighting coefficient related to the dedicated basis vector based on the known spectral characteristics of a subject other than the identification target, or the weighting coefficient related to the general-purpose basis vector, the entire video signal is easy to recognize and operability for the user is improved.
- Alternatively, a weighting coefficient related to the dedicated basis vector of the subject to be identified and a weighting coefficient related to the dedicated basis vector of the non-identification subject may be calculated based on the dedicated basis vector based on the known spectral characteristics of the subject to be identified, the dedicated basis vector based on the known spectral characteristics of the non-identification subject, the spectral characteristics related to the color imaging system used for imaging the subject, and the spectral characteristics related to the illumination light used, and the output signal may be calculated based on both weighting coefficients. Since a dedicated basis vector based on the known spectral characteristics of the subject to be identified is used, highly reliable identification with little error due to signal processing is possible. In addition, signal processing using the dedicated basis vector of the non-identification subject can be applied to regions other than the identification target, which improves the degree of freedom in the processing for calculating the output signal.
- FIG. 22 is a configuration in which the normalization unit 116 and the second normalization unit 132 are omitted from the configuration shown in FIG. 20 and a spectral characteristic estimation unit 134, a correction unit 135, and a conversion unit 136 are added.
- the basic configuration is the same as in FIG. 20, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the second calculation unit 131 is connected to the spectral characteristic estimation unit 134.
- the calculation unit 112 and the spectral characteristic estimation unit 134 are connected to the correction unit 135.
- the correction unit 135 is connected to the switching unit 113 via the conversion unit 136.
- the system spectral characteristic ROM 115 is connected to the calculation unit 112, the second calculation unit 131, and the conversion unit 136.
- the control unit 118 is bidirectionally connected to the spectral characteristic estimation unit 134, the correction unit 135, and the conversion unit 136.
- Based on the control of the control unit 118, the spectral characteristic estimation unit 134 estimates the spectral characteristic (Oij(λ)) of the subject on a pixel-by-pixel basis according to the following equation (6).
- The spectral characteristic Oij(λ) of the subject, calculated over the entire visible range based on equation (6), is transferred to the correction unit 135.
- The calculation unit 112 calculates the weighting coefficient (w1ij) related to the dedicated basis vector (O1(λ)) and transfers it to the correction unit 135 together with the dedicated basis vector (O1(λ)).
- Based on the weighting coefficient (w1ij) transferred from the calculation unit 112, the correction unit 135 corrects the spectral characteristic (Oij(λ)) of the subject transferred from the spectral characteristic estimation unit 134.
- The weighting coefficient (w1ij) takes a value proportional to the presence of the subject to be identified, for example oxyhemoglobin. For this reason, in a region such as a blood vessel where the weighting coefficient (w1ij) is large, replacing the spectral characteristic calculated from the general-purpose basis vectors in equation (6) with the spectral characteristic of the identification target itself improves the accuracy of the spectral characteristic.
- As shown in the following equation (7), the correction unit 135 obtains the corrected spectral characteristic (O′ij(λ)) by mixing the spectral characteristic (Oij(λ)) of the subject and the spectral characteristic of the identification target itself based on the weighting coefficient (w1ij).
- Since the dedicated basis vector (O1(λ)) is based on the spectral characteristics of the identification target itself, as shown in FIG. 4, it can be used as the spectral characteristic of the identification target.
- Here, the weighting coefficient (w1ij) can take values from "0" to "1".
- The correction unit 135 transfers the spectral characteristic (O′ij(λ)) corrected based on equation (7), together with the weighting coefficient (w1ij) transferred from the calculation unit 112, to the conversion unit 136.
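- A minimal sketch of this estimation-and-correction step follows; it assumes that equation (6) expands the per-pixel spectrum over the general-purpose basis vectors using their weighting coefficients and that equation (7) mixes that estimate with the dedicated basis vector O1(λ) in proportion to w1ij. These forms are inferred from the description above, not quoted from the patent.

```python
import numpy as np

def estimate_and_correct(w1, w2, w3, O1, O2, O3):
    """Sketch of equations (6) and (7) under the assumptions stated above.
    w1, w2, w3: per-pixel weighting coefficients, shape (H, W).
    O1, O2, O3: sampled basis vectors over a wavelength grid, shape (L,)."""
    # Assumed form of equation (6): per-pixel spectrum expanded over the
    # general-purpose basis vectors.
    O_est = w2[..., None] * O2 + w3[..., None] * O3
    # Assumed form of equation (7): mix the estimate with the spectral
    # characteristic of the identification target itself (the dedicated basis
    # vector O1), in proportion to w1, which lies in the range 0..1.
    a = np.clip(w1, 0.0, 1.0)[..., None]
    return a * O1 + (1.0 - a) * O_est
```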
- The conversion unit 136 reads, from the system spectral characteristic ROM 115, the spectral characteristics (SR(λ), SG(λ), SB(λ)) regarding the color imaging system as shown in FIG. Thereafter, based on the corrected spectral characteristic (O′ij(λ)) transferred from the correction unit 135, three signals Rij, Gij, and Bij are calculated by the following equation (8).
- In equation (8), K is a correction coefficient for matching the three signals Rij, Gij, and Bij to the signal level of the video signal (for example, "0" to "255" if the signal level is 8 bits), and gain() is a function that generates a gain for enhancement processing and has, for example, the characteristic shown in FIG. 23.
- The conversion unit 136 transfers the three signals Rij, Gij, and Bij calculated based on equation (8) to the switching unit 113 as a frame signal.
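- The conversion of equation (8) can be sketched as follows, assuming that the corrected spectrum is projected through the sensor sensitivities SR, SG, SB, scaled by K to the video signal level, and that the enhancement gain gain(w1ij) is applied to the R channel (consistent with the statement below that R is emphasized in proportion to the presence of the identification target); the gain curve itself is a made-up placeholder for the characteristic of FIG. 23.

```python
import numpy as np

def convert_to_rgb(O_corr, w1, S, K=255.0):
    """Project the corrected spectra back to three signals (sketch of eq. (8)).
    O_corr: corrected spectra, shape (H, W, L); w1: per-pixel weight, (H, W);
    S: sensor sensitivities SR, SG, SB stacked as shape (3, L)."""
    def gain(w):
        # Hypothetical enhancement-gain curve standing in for FIG. 23.
        return 1.0 + np.clip(w, 0.0, 1.0)

    rgb = K * np.einsum('hwl,cl->hwc', O_corr, S)   # project through SR, SG, SB and scale by K
    rgb[..., 0] *= gain(w1)                         # assumed: emphasize R in proportion to
                                                    # the presence of the identification target
    return np.clip(rgb, 0.0, 255.0)
```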
- the frame signal generated by the conversion unit 136 is a signal in which R is emphasized in proportion to the presence of the identification target, and the discrimination is improved.
- Since the area where the subject to be identified does not exist, that is, the area where subjects other than the identification target exist, is generated by an equivalent process, a video signal having excellent uniformity and continuity with the identification target region and high visibility can be obtained.
- In this way, the spectral characteristic estimation unit 134, the correction unit 135, the conversion unit 136, the switching unit 113, and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the weighting coefficient related to the dedicated basis vector.
- In this configuration, the spectral characteristics of the subject over the entire screen are estimated from the general-purpose basis vectors used for estimating the spectral characteristics of an arbitrary subject, only the subject to be identified is corrected using the weighting coefficient related to the dedicated basis vector, and the output signal is calculated from the corrected spectral characteristics. Because the spectral characteristics of the subject over the entire screen are obtained from the general-purpose basis vectors and the dedicated basis vector before the output signal is calculated, continuity between the identification target and the other regions is maintained, and an output signal with high visibility can be obtained. In addition, since the identification target is corrected using the dedicated basis vector, estimation errors in the spectral characteristics can be suppressed, and high-precision identification is possible.
- The calculation unit 112 and the second calculation unit 131 may use only dedicated basis vectors, or, as long as at least the dedicated basis vector of the subject to be identified is used, general-purpose basis vectors may be used for the others. Furthermore, as described above, the total number of basis vectors is not limited to three. The calculation unit 112 and the second calculation unit 131 may also be integrated into a single calculation unit. Of course, the present invention can be applied to both moving images and still images.
- In that case, the spectral characteristics of the subject over the entire screen are calculated from the dedicated basis vectors of the subject to be identified and of the non-identification subject, and the output signal is calculated from them, so that continuity between the identification target and the other regions is maintained and an output signal with good visibility can be obtained.
- In addition, since the identification target is corrected using the dedicated basis vector of the subject to be identified, estimation errors in the spectral characteristics can be suppressed and high-precision identification is possible.
- Also in the second embodiment, the video signal from the CCD 101 may be output as unprocessed raw data, with the control unit 118 outputting imaging conditions such as the subject to be identified, the color imaging system, and the illumination light as header information, and a configuration in which the video signal and header information are input to a computer (not shown) and processed by software is also possible.
- FIG. 24 is a diagram showing a flowchart relating to software processing of signal processing by a computer (not shown). The same reference numerals are assigned to the same processing steps as those in the signal processing flowchart in the first embodiment shown in FIG.
- In the software processing, the computer first inputs the video signal and the header information regarding imaging conditions such as the subject to be identified, the color imaging system, and the illumination light (step S101).
- Here, a case will be described as an example in which a video signal from a Bayer-type single-plate CCD having three color filters 121R, 121G, and 121B is processed.
- a plurality of dedicated base vectors and a plurality of general-purpose base vectors are input (step S102), and spectral luminance characteristics of a plurality of light sources and spectral sensitivity characteristics of a plurality of color imaging systems are input (step S103).
- the plurality of basis vectors and the plurality of spectral characteristics are input by reading from a recording medium included in the computer or a detachable recording medium or by reading through a network.
- Then, the predetermined coefficients of the inverse matrix M⁻¹ of the system matrix M used for calculating the weighting coefficient related to the dedicated basis vector (O1(λ)), that is, the elements m11, m12, and m13, are calculated (step S104). Further, by a second calculation process described in detail later, the coefficients of the inverse matrix M⁻¹ of the system matrix M used for calculating the weighting coefficients related to the general-purpose basis vectors (O2(λ), O3(λ)), that is, the elements m21, m22, m23, m31, m32, and m33, are calculated (step S113).
- Next, frame signals are sequentially extracted from the input video signal (step S105), a three-plate frame signal is generated by a known interpolation process (step S106), and signal processing such as known gradation processing and enhancement processing is performed on the frame signal (step S107).
- Then, the weighting coefficient (w1ij) related to the dedicated basis vector (O1(λ)) is calculated as shown in the above equation (3) (step S108).
- Next, a frame signal relating to the presence of the identification target is generated from the calculated weighting coefficient (step S109).
- Further, the weighting coefficients (w2ij, w3ij) related to the general-purpose basis vectors (O2(λ), O3(λ)) are calculated as shown in the above equations (4) and (5) (step S114), and by normalizing the calculated weighting coefficients (w2ij, w3ij), a frame signal relating to the region where the subject to be identified does not exist, that is, the region of subjects other than the identification target, is generated (step S115). Then, pseudo colorization processing is performed from the frame signal relating to the presence of the identification target generated in step S109 and the frame signal relating to the existence region of subjects other than the identification target generated in step S115 (step S116).
- Next, either the normal frame signal from step S107 or the pseudo-colored frame signal from step S116 is selected (step S110), and the selected frame signal is output to a display monitor or the like (not shown) connected to the computer (step S111). Thereafter, it is determined whether or not all frame signals have been processed (step S112); if not, the process returns to step S105, and if so, the process ends.
- The second calculation process in step S113 is performed as shown in FIG. First, the data to be used are selected from among the plurality of dedicated basis vectors and the plurality of general-purpose basis vectors input in step S102 and the spectral luminance characteristics of the plurality of light sources and the spectral sensitivity characteristics of the plurality of color imaging systems input in step S103 (step S211). The selection is based on the subject to be identified, the color imaging system, the illumination light, and the other imaging conditions in the header information input in step S101. For example, a dedicated basis vector (O1(λ)) and general-purpose basis vectors (O2(λ), O3(λ)) as shown in FIG. 7, the spectral luminance characteristic of a light source as shown in FIG. 5, and the spectral sensitivity characteristics of a color imaging system as shown in FIG. are selected.
- Next, the system matrix M shown in the above equation (1) is calculated (step S212), and the inverse matrix M⁻¹ of the system matrix M is calculated (step S213).
- Then, the elements m21, m22, m23, m31, m32, and m33 of the inverse matrix M⁻¹ required for calculating the weighting coefficients (w2ij, w3ij) of the general-purpose basis vectors (O2(λ), O3(λ)) are selected as the predetermined coefficients (step S214), and the selected elements m21, m22, m23, m31, m32, and m33 of the inverse matrix M⁻¹ are output (step S215).
- In the above, the case where one dedicated basis vector and two general-purpose basis vectors are used has been described as an example. However, all three may be dedicated basis vectors, or two dedicated basis vectors and one general-purpose basis vector may be used. Further, as described above, the total number of basis vectors is not limited to three. Also in the calculation process, as long as at least the dedicated basis vector of the subject to be identified is used, the others may be either dedicated or general-purpose basis vectors.
- Since the calculation process and the second calculation process are basically the same, they may of course be integrated so that all the weighting coefficients related to the dedicated basis vector and the general-purpose basis vectors are calculated by a single calculation process.
- the present invention can be applied to both moving images and still images.
- A microscope to which the signal processing system according to the third embodiment of the present invention is applied has a configuration in which the calculation unit 112, the basis vector ROM 114, and the system spectral characteristic ROM 115 are omitted from the configuration of the first embodiment shown in FIG. 1, and a correlation coefficient calculation unit 137 and a derivation coefficient ROM 138 are added.
- the basic configuration is the same as that of the first embodiment, and the same name and reference number are assigned to the same configuration. Only the different parts will be described below.
- the video signal from the CCD 101 of the microscope is amplified by the amplification unit 105 and converted into a digital signal by the A / D converter 106.
- Illumination light from the illumination light source 103 is guided to the objective stage of the microscope via the illumination lens system 102.
- the interpolation unit 108 is connected to the signal processing unit 111 and the correlation coefficient calculation unit 137.
- the derived coefficient ROM 138 is connected to the correlation coefficient calculation unit 137.
- the correlation coefficient calculation unit 137 is connected to the normalization unit 116.
- the control unit 118 is bidirectionally connected to the correlation coefficient calculation unit 137.
- the signal flow will be described.
- a shutter button (not shown) in the external I / F unit 119 is half-pressed to enter the pre-imaging mode.
- the CCD 101 captures a subject image formed on the CCD 101 via the imaging lens system 100 and outputs a video signal as an analog signal.
- the analog signal is amplified by a predetermined amount by the amplifying unit 105, converted into a digital signal by the A / D converter 106, and transferred to the buffer 107.
- the video signal in the buffer 107 is transferred to the WB unit 109 and the photometric evaluation unit 110 under the control of the control unit 118. Similar to the first embodiment, the WB unit 109 performs white balance processing, and the photometric evaluation unit 110 performs exposure control.
- the main imaging is performed by fully pressing the shutter button in the external I / F unit 119, and the video signal is transferred to the buffer 107 as in the pre-imaging.
- the video signal in the buffer 107 is transferred to the interpolation unit 108.
- the interpolation unit 108 reads a single-plate video signal from the buffer 107, and generates a three-plate video signal by a known interpolation process.
- the generated three-plate video signal is transferred to the signal processing unit 111 and the correlation coefficient calculation unit 137.
- the signal processing unit 111 performs known gradation processing and enhancement processing on the video signal transferred from the interpolation unit 108 based on the control of the control unit 118, and transfers the processed video signal to the switching unit 113.
- the derivation coefficient ROM 138 stores a derivation coefficient for deriving the correlation between the spectral characteristics of the subject and the video signal for each subject to be identified. This derivation coefficient is calculated and stored in advance based on the known spectral characteristics of the subject, the spectral characteristics related to the color imaging system used for imaging the subject, and the spectral characteristics related to the illumination light used when imaging the subject. In this way, the derivation coefficient ROM 138 functions as, for example, a derivation coefficient acquisition unit that acquires a derivation coefficient indicating the correlation between the known spectral characteristics of the subject and the video signal.
- Based on the control of the control unit 118, the correlation coefficient calculation unit 137 selectively reads the derivation coefficient from the derivation coefficient ROM 138 in accordance with the subject to be identified selected in the imaging conditions set via the external I/F unit 119. Thereafter, for the video signal transferred from the interpolation unit 108, the correlation coefficient between the spectral characteristics of the subject to be identified and the video signal is calculated using the read derivation coefficient.
- The correlation coefficient calculated by the correlation coefficient calculation unit 137 takes a value proportional to the presence of the subject to be identified. It is transferred to the normalization unit 116 and normalized so as to match the signal level of the video signal (for example, "0" to "255" if the signal level is 8 bits). The correlation coefficient after the normalization processing is transferred to the switching unit 113 as a video signal.
- The switching unit 113 selects either the normal video signal transferred from the signal processing unit 111 or the video signal related to the presence of the identification target transferred from the normalization unit 116, and transfers it to the output unit 117, which is, for example, a display monitor, where it is displayed.
- the video signal from the normalization unit 116 is output as a black and white signal.
- the output unit 117 is not limited to a display monitor, and a form in which a video signal is recorded and stored in a recording medium such as a hard disk or a memory card is also possible.
- In this way, the switching unit 113, the normalization unit 116, and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the correlation coefficient calculated by the correlation coefficient calculation unit 137.
- the correlation coefficient calculation unit 137 includes a coefficient selection unit 227 and a multiplication / addition unit 226 as shown in FIG.
- the derived coefficient ROM 138 is connected to the multiplier / adder 226 via the coefficient selector 227.
- the interpolation unit 108 is connected to the multiplication / addition unit 226.
- the multiplication / addition unit 226 is connected to the normalization unit 116.
- the control unit 118 is bidirectionally connected to the coefficient selection unit 227 and the multiplication / addition unit 226.
- The coefficient selection unit 227 receives, from the control unit 118, information on the subject to be identified in the imaging conditions set via the external I/F unit 119, and based on this information reads, from the derivation coefficient ROM 138, the derivation coefficient for deriving the correlation between the spectral characteristics of the subject to be identified and the video signal.
- In this embodiment, the elements of the inverse matrix M⁻¹ of the system matrix M shown in the above equation (2) are recorded as the derivation coefficients. This is based on the premise that the spectral characteristics relating to the color imaging system used for imaging the subject and the spectral characteristics relating to the illumination light used when the color imaging system images the subject are fixed. In this case, the calculation processes shown in the above equations (1) and (2) can be omitted, and only the finally obtained inverse matrix M⁻¹ of the system matrix M needs to be recorded.
- the multiplication / addition unit 226 reads the derived coefficient from the coefficient selection unit 227 and the video signal from the interpolation unit 108 in units of pixels based on the control of the control unit 118. Thereafter, a weighting coefficient is obtained based on the above equation (3).
- the weight coefficient is a correlation coefficient that represents the correlation between the known spectral characteristics of the subject to be identified and the video signal. The correlation coefficients are sequentially transferred to the normalization unit 116.
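- A minimal sketch of the coefficient selection and multiplication/addition described above follows; the derivation-coefficient table and its numeric values are hypothetical stand-ins for the contents of the derivation coefficient ROM 138, and the final scaling mirrors the normalization to an 8-bit signal level performed by the normalization unit 116.

```python
import numpy as np

# Hypothetical derivation-coefficient table standing in for the derivation
# coefficient ROM 138: one row of coefficients per subject to be identified.
DERIVATION_COEFFS = {
    "oxyhemoglobin": np.array([0.8, -0.3, 0.1]),   # made-up values for illustration
}

def correlation_map(rgb, subject):
    """Per-pixel correlation coefficient between the known spectral
    characteristic of the selected subject and the video signal, followed by
    normalization to the 8-bit signal level (sketch of units 227, 226 and 116)."""
    coeffs = DERIVATION_COEFFS[subject]                      # coefficient selection unit 227
    corr = np.tensordot(rgb.astype(float), coeffs, axes=([-1], [0]))  # per-pixel multiply-add
    corr -= corr.min()                                       # normalization to "0"-"255"
    return (255.0 * corr / (corr.max() + 1e-12)).astype(np.uint8)
```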
- In this way, from the derivation coefficient based on the known spectral characteristics of the subject to be identified, a correlation coefficient between the known spectral characteristics of the subject to be identified and the video signal, which takes a value proportional to the presence of that subject, is obtained, and an output signal as the identification result of the subject to be identified is calculated based on the correlation coefficient.
- Since a correlation coefficient that takes a value proportional to the presence of the subject to be identified is calculated, it is not necessary to perform signal processing that includes an error, as in approximation based on the conventional least squares method. Therefore, errors due to signal processing are small, and highly reliable identification is possible.
- In addition, since the output signal is obtained by normalizing the correlation coefficient related to the derivation coefficient, a highly accurate output signal can be obtained regarding the presence of the identification target. Furthermore, since the output signal is obtained by normalization processing alone, the processing speed can be increased and the cost can be reduced.
- In the above description, still image processing is performed by a microscope, but the configuration is not limited to this. If the spectral characteristics relating to the color imaging system and the spectral characteristics relating to the illumination light used for imaging the subject are fixed, the present invention can also be applied to moving image processing, for example in an endoscope, as in the first and second embodiments described above.
- It is also possible to obtain and process the video signal and its accompanying information from a recording medium on which a video signal captured by a separate imaging unit is recorded in unprocessed raw data form, with accompanying information regarding imaging conditions such as the subject to be identified recorded in the header portion. Further, as described in the second modification of the first embodiment, the present invention can also be applied to an imaging system including four-primary-color filters 122 or color-difference line-sequential complementary color filters 123, and to two-plate or three-plate CCDs.
- In the above description, as the output of the video signal, either the normal video signal transferred from the signal processing unit 111 or the video signal related to the presence of the identification target transferred from the normalization unit 116 is selected by the switching unit 113 and output. However, the configuration is not limited to this.
- the normalization unit 116 may be omitted from the configuration shown in FIG. 26, and the switching unit 113 may be replaced with the emphasizing unit 124.
- the basic configuration is the same as in FIG. 26, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the signal processing unit 111 and the correlation coefficient calculation unit 137 are connected to the enhancement unit 124.
- the emphasis unit 124 is connected to the output unit 117.
- the control unit 118 is bidirectionally connected to the emphasizing unit 124.
- a normal video signal is transferred from the signal processing unit 111 to the enhancement unit 124, and a correlation coefficient between a known spectral characteristic of the subject to be identified and the video signal is transferred from the correlation coefficient calculation unit 137.
- Based on the control of the control unit 118, the enhancement unit 124 performs, on the video signal transferred from the signal processing unit 111, enhancement processing based on the correlation coefficient transferred from the correlation coefficient calculation unit 137.
- As the enhancement processing, known edge enhancement processing or saturation enhancement processing is assumed, and the enhancement amount is applied in proportion to the correlation coefficient.
- the video signal after the enhancement process is transferred to the output unit 117.
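- The idea of scaling the enhancement amount by the correlation coefficient can be sketched as follows; the patent only specifies known edge or saturation enhancement processing, so the simple neighbor-average sharpening used here is an illustrative stand-in.

```python
import numpy as np

def enhance(rgb, corr, strength=1.0):
    """Edge enhancement whose amount is proportional to the correlation
    coefficient. The neighbor-average sharpening used here is an illustrative
    stand-in for the 'known' enhancement processing.
    rgb: (H, W, 3) uint8 video signal; corr: (H, W) uint8 correlation map."""
    img = rgb.astype(float)
    blur = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) +
            np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 4.0
    detail = img - blur                                  # high-frequency component
    amount = strength * (corr.astype(float) / 255.0)     # proportional to the correlation
    return np.clip(img + amount[..., None] * detail, 0, 255).astype(np.uint8)
```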
- In this way, the enhancement unit 124 and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the correlation coefficient calculated by the correlation coefficient calculation unit 137.
- the configuration shown in FIG. 26 may be configured such that the switching unit 113 is replaced with a combining unit 125.
- the basic configuration is the same as in FIG. 26, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the signal processing unit 111 and the normalization unit 116 are connected to the synthesis unit 125.
- the combining unit 125 is connected to the output unit 117.
- the control unit 118 is bidirectionally connected to the synthesis unit 125.
- a normal video signal transferred from the signal processing unit 111 is transferred to the combining unit 125, and a video signal related to the presence of the identification target is transferred from the normalizing unit 116.
- Based on the control of the control unit 118, the synthesizing unit 125 synthesizes the video signal related to the presence of the identification target transferred from the normalization unit 116 with the video signal transferred from the signal processing unit 111. As the synthesis processing, processing such as known superimposition is assumed. The video signal after the synthesis processing is transferred to the output unit 117. In this way, the normalization unit 116, the synthesis unit 125, and the output unit 117 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the correlation coefficient calculated by the correlation coefficient calculation unit 137.
- a highly accurate output signal can be obtained regarding the existence area of the subject to be identified.
- Since the synthesis processing is performed on the video signal that has undergone normal processing, a video signal that has undergone normal processing is output even in regions where the subject to be identified does not exist, that is, regions where subjects other than the identification target exist. As a result, the entire video signal is easy to recognize, and operability for the user is improved.
- Alternatively, the synthesizing unit 125 may perform a composition process such as picture-in-picture, in which a window is displayed on part of the screen and the video signal from the signal processing unit 111 or the video signal from the normalization unit 116 is displayed there as a child screen. In this case, it is preferable that which signal becomes the parent screen and which the child screen can be selected by an instruction via the external I/F unit 119.
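- The two composition styles mentioned above (superimposition and picture-in-picture) can be sketched as follows; the blending weight, the red overlay color, and the child-screen size and position are arbitrary illustrative choices.

```python
import numpy as np

def synthesize(normal, target, alpha=0.5, picture_in_picture=False):
    """Combine the normal video signal with the target-presence signal, either
    by superimposition (alpha blend of a red overlay) or as a child screen.
    normal: (H, W, 3) uint8; target: (H, W) uint8 grayscale."""
    out = normal.astype(float).copy()
    if picture_in_picture:
        h, w = target.shape
        ph, pw = h // 4, w // 4                           # child screen at quarter size
        inset = target[:4 * ph:4, :4 * pw:4]
        out[:ph, -pw:, :] = inset[..., None]              # place in the top-right corner
    else:
        overlay = np.zeros_like(out)
        overlay[..., 0] = target                          # show the target signal in red
        out = (1 - alpha) * out + alpha * overlay
    return np.clip(out, 0, 255).astype(np.uint8)
```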
- Alternatively, the switching unit 113 may be omitted from the configuration shown in FIG. 26 and a second output unit 126 may be added.
- the basic configuration is the same as in FIG. 26, and the same name and reference number are assigned to the same configuration. Only different parts will be described below.
- the signal processing unit 111 is connected to the output unit 117.
- the normalization unit 116 is connected to the second output unit 126.
- the control unit 118 is bidirectionally connected to the second output unit 126.
- the signal processing unit 111 transfers a normal video signal to the output unit 117 based on the control of the control unit 118.
- the output unit 117 displays a normal video signal.
- the normalization unit 116 transfers a video signal related to the presence of the identification target to the second output unit 126. In the second output unit 126, a video signal relating to the presence of the identification target is displayed.
- In this way, the normalization unit 116, the output unit 117, and the second output unit 126 function, for example, as an output signal calculation unit that calculates an output signal as the identification result of the subject to be identified based on the correlation coefficient calculated by the correlation coefficient calculation unit 137.
- Since the video signal related to the identification target is obtained by normalizing the correlation coefficient related to the derivation coefficient, a highly accurate output signal can be obtained regarding the presence of the identification target.
- In addition, since the video signal that has undergone normal processing is also output independently, the entire video signal is easy to recognize, and operability for the user is improved.
- In this embodiment too, the video signal from the CCD 101 may be output as unprocessed raw data, with imaging conditions such as the subject to be identified output as header information from the control unit 118, and a configuration in which the video signal and header information are input to a computer (not shown) and processed by software is also possible.
- FIG. 31 is a diagram showing a flowchart relating to software processing of signal processing by a computer (not shown). The same reference numerals are assigned to the same processing steps as those in the signal processing flowchart in the first embodiment shown in FIG.
- the computer first inputs a video signal and header information related to imaging conditions such as a subject to be identified (step S101) and inputs a plurality of derivation coefficients (step S117).
- the derivation coefficient is input by reading from a recording medium provided in the computer or a detachable recording medium or by reading through a network.
- Next, a three-plate video signal is generated from the input video signal by a known interpolation process (step S106), and signal processing such as known gradation processing and enhancement processing is performed on the video signal (step S107).
- Then, by a correlation coefficient calculation process that will be described in detail later, the correlation coefficient between the known spectral characteristics of the subject to be identified and the video signal is calculated based on the input derivation coefficients (step S118).
- Next, a video signal relating to the presence of the identification target is generated from the correlation coefficient (step S109).
- Then, either the normal video signal from step S107 or the video signal relating to the presence of the subject to be identified from step S109 is selected (step S110), the selected video signal is output to a display monitor (not shown) connected to the computer (step S111), and the process ends.
- The correlation coefficient calculation process in step S118 is performed as shown in FIG. First, a derivation coefficient is selected from the plurality of derivation coefficients input in step S117, based on the information on the subject to be identified in the header information input in step S101 (step S226). Then, as shown in the above equation (3), a correlation coefficient is calculated based on the selected derivation coefficient (step S227), and the calculated correlation coefficient is output (step S228).
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Optics & Photonics (AREA)
- Physiology (AREA)
- Psychiatry (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Dentistry (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Endoscopes (AREA)
- Instruments For Viewing The Inside Of Hollow Bodies (AREA)
- Color Television Image Signal Generators (AREA)
- Processing Of Color Television Signals (AREA)
- Microscopes, Condensers (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN200980119299.XA CN102046062B (zh) | 2008-05-28 | 2009-05-25 | 信号处理系统和信号处理方法 |
| EP09754665.9A EP2286713B1 (en) | 2008-05-28 | 2009-05-25 | Signal processing system and signal processing program |
| US12/953,740 US8805061B2 (en) | 2008-05-28 | 2010-11-24 | Signal processing system and signal processing program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2008139930A JP5389380B2 (ja) | 2008-05-28 | 2008-05-28 | 信号処理システム及び信号処理プログラム |
| JP2008-139930 | 2008-05-28 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/953,740 Continuation US8805061B2 (en) | 2008-05-28 | 2010-11-24 | Signal processing system and signal processing program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2009145157A1 (ja) | 2009-12-03 |
Family
ID=41377030
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2009/059551 Ceased WO2009145157A1 (ja) | 2008-05-28 | 2009-05-25 | 信号処理システム及び信号処理プログラム |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US8805061B2 (en) |
| EP (2) | EP2286713B1 (en) |
| JP (1) | JP5389380B2 (ja) |
| CN (1) | CN102046062B (zh) |
| WO (1) | WO2009145157A1 (ja) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2012143348A (ja) * | 2011-01-11 | 2012-08-02 | Fujifilm Corp | 分光計測システムおよび分光計測方法 |
| EP2474853A3 (en) * | 2011-01-11 | 2013-01-23 | Fujifilm Corporation | Electronic endoscope system |
| CN104116485A (zh) * | 2013-04-26 | 2014-10-29 | Hoya株式会社 | 损伤评估信息生成器及其方法 |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5389380B2 (ja) | 2008-05-28 | 2014-01-15 | オリンパス株式会社 | 信号処理システム及び信号処理プログラム |
| JP4717103B2 (ja) * | 2008-07-18 | 2011-07-06 | オリンパス株式会社 | 信号処理システム及び信号処理プログラム |
| JP5502812B2 (ja) * | 2011-07-14 | 2014-05-28 | 富士フイルム株式会社 | 生体情報取得システムおよび生体情報取得システムの作動方法 |
| JP5993184B2 (ja) * | 2012-04-04 | 2016-09-14 | オリンパス株式会社 | 蛍光観察装置および蛍光観察装置の作動方法 |
| DE102014215095A1 (de) * | 2014-07-31 | 2016-02-04 | Carl Zeiss Microscopy Gmbh | Verfahren zur Korrektur von beleuchtungsbedingten Abbildungsfehlern in einem modularen Digitalmikroskop, Digitalmikroskop und Datenverarbeitungsprogramm |
| JP6385465B2 (ja) * | 2015-01-21 | 2018-09-05 | オリンパス株式会社 | 内視鏡装置 |
| JP6620426B2 (ja) | 2015-05-28 | 2019-12-18 | 富士通株式会社 | 情報検出装置、情報検出システム、情報検出プログラム、及び情報検出方法 |
| CN106725263B (zh) * | 2016-12-15 | 2018-07-06 | 深圳开立生物医疗科技股份有限公司 | 应用于内窥镜系统的成像方法 |
| JP2018108173A (ja) * | 2016-12-28 | 2018-07-12 | ソニー株式会社 | 医療用画像処理装置、医療用画像処理方法、プログラム |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06335451A (ja) * | 1993-03-19 | 1994-12-06 | Olympus Optical Co Ltd | 内視鏡用画像処理装置 |
| JP2003093336A (ja) | 2001-09-26 | 2003-04-02 | Toshiba Corp | 電子内視鏡装置 |
| JP2006314557A (ja) * | 2005-05-12 | 2006-11-24 | Olympus Medical Systems Corp | 生体観測装置 |
| JP2007111357A (ja) * | 2005-10-21 | 2007-05-10 | Olympus Medical Systems Corp | 生体撮像装置及び生体観測システム |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0796005B2 (ja) | 1987-10-27 | 1995-10-18 | オリンパス光学工業株式会社 | 内視鏡装置 |
| JP3217343B2 (ja) * | 1989-03-23 | 2001-10-09 | オリンパス光学工業株式会社 | 画像処理装置 |
| US5550582A (en) | 1993-03-19 | 1996-08-27 | Olympus Optical Co., Ltd. | Endoscope-image processing apparatus for performing image processing of emphasis in endoscope image by pigment concentration distribution |
| US5995645A (en) | 1993-08-18 | 1999-11-30 | Applied Spectral Imaging Ltd. | Method of cancer cell detection |
| US6750964B2 (en) | 1999-08-06 | 2004-06-15 | Cambridge Research And Instrumentation, Inc. | Spectral imaging methods and systems |
| DE69941933D1 (de) * | 1999-11-18 | 2010-03-04 | Fujitsu Ltd | Verfahren zum erzeugen von farbumwandlungstabellen, gerät zum erzeugen von farbumwandlungstabellen, speichermedium auf welchem ein programm zum erzeugen von farbumwandlungstabellen gespeichert ist. |
| US6888963B2 (en) * | 2000-07-18 | 2005-05-03 | Matsushita Electric Industrial Co., Ltd. | Image processing apparatus and image processing method |
| US7864379B2 (en) | 2001-03-19 | 2011-01-04 | Dmetrix, Inc. | Multi-spectral whole-slide scanner |
| US7136518B2 (en) | 2003-04-18 | 2006-11-14 | Medispectra, Inc. | Methods and apparatus for displaying diagnostic data |
| CA2491703A1 (en) * | 2002-07-09 | 2004-01-15 | Medispectra, Inc. | Methods and apparatus for characterization of tissue samples |
| JP3767541B2 (ja) * | 2002-11-12 | 2006-04-19 | ソニー株式会社 | 光源推定装置、光源推定方法、撮像装置および画像処理方法 |
| US20040202356A1 (en) | 2003-04-10 | 2004-10-14 | Stookey George K. | Optical detection of dental caries |
| US7321791B2 (en) | 2003-09-23 | 2008-01-22 | Cambridge Research And Instrumentation, Inc. | Spectral imaging of deep tissue |
| JP4021414B2 (ja) | 2003-11-26 | 2007-12-12 | オリンパス株式会社 | スペクトラルデコンボリューション法及びスペクトラルブラインドデコンボリューション法 |
| CN101163438B (zh) * | 2005-05-11 | 2011-09-14 | 奥林巴斯医疗株式会社 | 生物体观测装置和用于生物体观测装置的信号处理装置 |
| JP4409523B2 (ja) | 2005-05-12 | 2010-02-03 | オリンパスメディカルシステムズ株式会社 | 生体観測装置 |
| EP1880657B1 (en) * | 2005-05-12 | 2017-01-18 | Olympus Corporation | Biological observation apparatus |
| JP4504324B2 (ja) | 2005-05-13 | 2010-07-14 | オリンパスメディカルシステムズ株式会社 | 生体観測装置 |
| US7526116B2 (en) | 2006-01-19 | 2009-04-28 | Luigi Armogida | Automated microscopic sperm identification |
| US7916943B2 (en) | 2006-06-02 | 2011-03-29 | Seiko Epson Corporation | Image determining apparatus, image determining method, image enhancement apparatus, and image enhancement method |
| JP4959237B2 (ja) * | 2006-06-22 | 2012-06-20 | オリンパス株式会社 | 撮像システム及び撮像プログラム |
| WO2008019299A2 (en) | 2006-08-04 | 2008-02-14 | Ikonisys, Inc. | Image processing method for a microscope system |
| JP4931199B2 (ja) * | 2006-09-29 | 2012-05-16 | 富士フイルム株式会社 | 電子内視鏡装置 |
| JP4964568B2 (ja) | 2006-11-24 | 2012-07-04 | 浜松ホトニクス株式会社 | 蛍光検出装置、蛍光検出方法および蛍光検出プログラム |
| JP2008161550A (ja) | 2006-12-28 | 2008-07-17 | Olympus Corp | 内視鏡システム |
| JP5389380B2 (ja) | 2008-05-28 | 2014-01-15 | オリンパス株式会社 | 信号処理システム及び信号処理プログラム |
| JP4717103B2 (ja) * | 2008-07-18 | 2011-07-06 | オリンパス株式会社 | 信号処理システム及び信号処理プログラム |
- 2008
  - 2008-05-28 JP JP2008139930A patent/JP5389380B2/ja not_active Expired - Fee Related
- 2009
  - 2009-05-25 EP EP09754665.9A patent/EP2286713B1/en not_active Not-in-force
  - 2009-05-25 CN CN200980119299.XA patent/CN102046062B/zh active Active
  - 2009-05-25 WO PCT/JP2009/059551 patent/WO2009145157A1/ja not_active Ceased
  - 2009-05-25 EP EP15176086.5A patent/EP2949261A1/en not_active Ceased
- 2010
  - 2010-11-24 US US12/953,740 patent/US8805061B2/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06335451A (ja) * | 1993-03-19 | 1994-12-06 | Olympus Optical Co Ltd | 内視鏡用画像処理装置 |
| JP2003093336A (ja) | 2001-09-26 | 2003-04-02 | Toshiba Corp | 電子内視鏡装置 |
| JP2006314557A (ja) * | 2005-05-12 | 2006-11-24 | Olympus Medical Systems Corp | 生体観測装置 |
| JP2007111357A (ja) * | 2005-10-21 | 2007-05-10 | Olympus Medical Systems Corp | 生体撮像装置及び生体観測システム |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2286713A4 |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2012143348A (ja) * | 2011-01-11 | 2012-08-02 | Fujifilm Corp | 分光計測システムおよび分光計測方法 |
| EP2474853A3 (en) * | 2011-01-11 | 2013-01-23 | Fujifilm Corporation | Electronic endoscope system |
| US9113787B2 (en) | 2011-01-11 | 2015-08-25 | Fujifilm Corporation | Electronic endoscope system |
| CN104116485A (zh) * | 2013-04-26 | 2014-10-29 | Hoya株式会社 | 损伤评估信息生成器及其方法 |
| US9468356B2 (en) | 2013-04-26 | 2016-10-18 | Hoya Corporation | Lesion evaluation information generator, and method and computer readable medium therefor |
Also Published As
| Publication number | Publication date |
|---|---|
| US8805061B2 (en) | 2014-08-12 |
| CN102046062B (zh) | 2014-01-29 |
| JP5389380B2 (ja) | 2014-01-15 |
| EP2286713A1 (en) | 2011-02-23 |
| US20110069868A1 (en) | 2011-03-24 |
| CN102046062A (zh) | 2011-05-04 |
| JP2009285084A (ja) | 2009-12-10 |
| EP2286713B1 (en) | 2015-11-11 |
| EP2949261A1 (en) | 2015-12-02 |
| EP2286713A4 (en) | 2012-06-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5389380B2 (ja) | | 信号処理システム及び信号処理プログラム |
| US8532376B2 (en) | | Signal processing system and computer readable medium for recording signal processing program |
| JP5250342B2 (ja) | | 画像処理装置およびプログラム |
| CN101179742B (zh) | | 摄像装置和图像信号处理装置 |
| US10918270B2 (en) | | Endoscope system and evaluation value calculation device |
| EP3446617A1 (en) | | Endoscope system, image processing device, and image processing device operation method |
| JP4895834B2 (ja) | | 画像処理装置 |
| EP2505121B1 (en) | | Endoscope apparatus |
| WO2011096279A1 (ja) | | 画像処理装置、内視鏡システム、プログラム及び画像処理方法 |
| JP5087529B2 (ja) | | 識別処理装置、識別処理プログラム及び識別処理方法 |
| CN101322398B (zh) | | 照相机系统、曝光量调节方法 |
| US8238524B2 (en) | | Microscope color image pickup apparatus, microscope color image pickup program product, and microscope color image pickup method |
| JP2007208413A (ja) | | 色補正装置および色補正方法ならびに色補正プログラム |
| JP2016015995A (ja) | | 電子内視鏡システム及び電子内視鏡用プロセッサ |
| CN101473658B (zh) | | 摄像系统和摄像方法 |
| CN107005683A (zh) | | 内窥镜装置 |
| CN112469324A (zh) | | 内窥镜系统 |
| US20100177181A1 (en) | | Endoscope image processing method and apparatus, and endoscope system using the same |
| US10702127B2 (en) | | Endoscope system and evaluation value calculation device |
| JP2010200883A (ja) | | 内視鏡画像処理装置および方法ならびにプログラム |
| JP2010213746A (ja) | | 内視鏡画像処理装置および方法ならびにプログラム |
| JP2010279526A (ja) | | 内視鏡画像処理装置および方法ならびにプログラム |
| JP2007278950A (ja) | | マルチバンド撮像装置およびカラーフィルタの特性設定方法 |
| JP2008093225A (ja) | | 内視鏡システム及び内視鏡システムにおける画像処理方法 |
| US20070126894A1 (en) | | Method for calculating color correction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | WWE | Wipo information: entry into national phase | Ref document number: 200980119299.X; Country of ref document: CN |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09754665; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2009754665; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |