WO2019181064A1 - Endoscope - Google Patents

Endoscope

Info

Publication number
WO2019181064A1
WO2019181064A1 (PCT application PCT/JP2018/042859)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
pixel data
endoscope
parity
pixel
Prior art date
Application number
PCT/JP2018/042859
Other languages
French (fr)
Japanese (ja)
Inventor
裕 仲摩
紗依里 齋藤
Original Assignee
Olympus Corporation (オリンパス株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation (オリンパス株式会社)
Publication of WO2019181064A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/045 Control thereof
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B23/00 Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24 Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an endoscope and, more particularly, to an endoscope including a solid-state image sensor.
  • An endoscope system including an endoscope that images a subject inside a body and an image processing device that generates an observation image of the subject imaged by the endoscope is widely used in the medical and industrial fields.
  • As the endoscope in such an endoscope system, endoscopes that adopt, for example, a CMOS image sensor as the solid-state imaging device and transmit the imaging signal (video data) output from the CMOS image sensor to a subsequent image processing apparatus are widely known.
  • An imaging device such as the CMOS image sensor described above is generally driven by receiving a predetermined power supply voltage and control signals from the image processing device via a cable disposed in the insertion portion of the endoscope and in the universal cord.
  • The imaging signal (video data) output from the imaging device is likewise transferred via this cable to a connector portion disposed at the proximal end portion of the insertion portion, and onward to the image processing apparatus.
  • When an endoscope is used, treatment with an electric knife or the like may be performed nearby. By its nature, an electric knife inevitably generates high-intensity noise, and this disturbance noise may be applied to the cable in the endoscope. That is, when the imaging signal (video data) output from the image sensor is transferred, a burst error, in which many errors are concentrated in a short time, may occur due to disturbance noise from the electric knife or the like.
  • ECC (Error Check and Correct)
  • The present invention has been made in view of the above circumstances, and its object is to provide an endoscope that transfers pixel data reliably even in situations where strong disturbance noise can be applied.
  • An endoscope according to the invention includes: an imaging device disposed at the distal end portion of an insertion portion that is inserted into a subject; a proximal-side processing unit disposed on the proximal side of, and connected to, the imaging device; a sensor unit, provided in the imaging device, that images a subject, performs photoelectric conversion, and outputs predetermined pixel data; a read control unit, provided in the imaging device, that instructs the order in which the pixel data are read from the sensor unit; a parity adding unit that adds predetermined parity to the pixel data output under the read-order control of the read control unit; an error detection unit, provided in the proximal-side processing unit, that performs error detection on the parity-added pixel data, outputs error detection presence/absence information, and outputs the pixel data after error detection; a pixel rearrangement unit, provided in the proximal-side processing unit, that rearranges the pixel data after error detection output from the error detection unit; and an interpolation processing unit that receives the pixel data rearranged by the pixel rearrangement unit and performs predetermined interpolation processing based on the error detection presence/absence information output from the error detection unit.
  • FIG. 1 is a diagram illustrating a configuration of an endoscope system including an endoscope according to a first embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating an electrical configuration of the endoscope system including the endoscope according to the first embodiment.
  • FIG. 3 is an explanatory diagram illustrating an example in which pixel data output from the image sensor is divided into an odd number and an even number in the endoscope of the first embodiment.
  • FIG. 4 is an explanatory diagram showing an example of parity added to pixel data output from the image sensor in the endoscope of the first embodiment.
  • FIG. 5 is an explanatory diagram illustrating an example of parity deletion and pixel data rearrangement after performing a parity check on pixel data to which parity has been added in the endoscope of the first embodiment.
  • FIG. 6 is an explanatory diagram showing, for the pixel data after parity check, parity removal, and rearrangement in the endoscope of the first embodiment, the relationship between the pixel data of a block in which an error has occurred and the adjacent pixel data.
  • FIG. 7 is an explanatory diagram showing, for the pixel data after parity removal and rearrangement following the parity check in the endoscope of the first embodiment, the state of the pixel data of a block in which an error has occurred before and after interpolation.
  • FIG. 8 is an explanatory diagram illustrating an example of rearrangement of pixel data according to an instruction from the read control unit in the endoscope according to the first embodiment.
  • FIG. 9 is an explanatory diagram illustrating another example of rearrangement of pixel data according to an instruction from the read control unit in the endoscope according to the first embodiment.
  • FIG. 10 is an explanatory diagram illustrating another example of rearrangement of pixel data according to an instruction from the read control unit in the endoscope according to the first embodiment.
  • FIG. 11 is an explanatory diagram showing table data prepared in advance in another example of rearrangement of pixel data in accordance with an instruction from the read control unit in the endoscope of the first embodiment.
  • FIG. 12 is an explanatory diagram showing another example of rearrangement of pixel data according to an instruction from the readout control unit in the endoscope of the first embodiment, using the table data shown in FIG. 11.
  • FIG. 13 is a timing chart showing the internal operation of the imaging device when the imaging device receives an external synchronization timing control signal in the endoscope according to the second embodiment of the present invention.
  • FIG. 14 is a block diagram illustrating a configuration of an endoscope system including an endoscope according to the third embodiment of the present invention.
  • FIG. 15 is a block diagram showing a configuration of an endoscope system including an endoscope according to the fourth embodiment of the present invention.
  • FIG. 1 is a diagram illustrating the configuration of an endoscope system including an endoscope according to the first embodiment of the present invention.
  • FIG. 2 is a block diagram showing the electrical configuration of the endoscope system including the endoscope according to the first embodiment.
  • The endoscope system 1 according to the first embodiment includes: an endoscope 2 that observes and images a subject; a video processor 3 that is connected to the endoscope 2, receives the imaging signal, and performs predetermined image processing; a light source device 4 that supplies illumination light for illuminating the subject; and a monitor device 5 that displays an observation image corresponding to the imaging signal.
  • The endoscope 2 includes an elongated insertion portion 6 that is inserted into a body cavity or the like of a subject, an endoscope operation portion 10 that is disposed on the proximal end side of the insertion portion 6 and is grasped and operated by an operator, and a universal cord 11 one end of which extends from the side of the endoscope operation portion 10.
  • The insertion portion 6 includes a rigid distal end portion 7 provided on the distal end side, a bendable bending portion 8 provided at the rear end of the distal end portion 7, and a long, flexible tube portion 9 provided at the rear end of the bending portion 8.
  • A connector portion 12 is provided on the proximal end side of the universal cord 11 (that is, on the proximal side with respect to the image sensor), and the connector portion 12 is connected to the light source device 4. Specifically, a base (not shown) serving as the connection end of a fluid conduit projecting from the tip of the connector portion 12 and a light guide base (not shown) serving as the illumination light supply end are detachably connected to the light source device 4.
  • One end of a connection cable 13 is connected to an electrical contact portion provided on the side surface of the connector portion 12.
  • The connection cable 13 contains, for example, a signal line for transmitting the imaging signal from the solid-state imaging device (CMOS image sensor) 21 (see FIG. 2) in the endoscope 2, as well as control signal lines and a power supply line for driving the imaging device, and the connector at its other end is connected to the video processor 3.
  • The endoscope 2 includes an objective optical system (not shown), including a lens for receiving the subject image, disposed at the distal end portion 7 of the insertion portion 6, and an image sensor (CMOS image sensor) 21 disposed on the image forming surface of the objective optical system.
  • The endoscope 2 also includes a cable 41 that extends from the image sensor 21 through the insertion portion 6 and the universal cord 11 (see FIG. 1) to the connector portion 12 (in FIG. 2, to the FPGA 25 in the connector portion 12).
  • the imaging element 21 is a solid-state imaging element configured by a CMOS image sensor in the present embodiment.
  • the image sensor 21 includes a sensor unit 22, a read control unit 23, and a parity adding unit 24 described below.
  • the sensor unit 22 receives a subject image and photoelectrically converts it, and then outputs predetermined pixel data.
  • the read control unit 23 instructs the order of pixel data read from the sensor unit 22.
  • the parity adding unit 24 adds a predetermined parity to the pixel data output with the reading control unit 23 controlling the reading order.
  • When N is an integer of 2 or more and L mod N denotes the remainder when pixel position L is divided by N, the read control unit 23 collects the pixel data of the physical pixel array on the same line that share the same value of L mod N into data groups, and instructs the sensor unit 22 to read out the pixel data with these data groups as the minimum unit of the pixel data array.
  • the physical pixel arrangement on the same line in the sensor unit 22 is rearranged in units of pixels under the control of the reading control unit 23.
  • The rearrangement of pixels is not limited to this; other schemes are also conceivable, for example rearrangement in line units.
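The mod-based grouping above can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation; the function names and the choice N = 2 are assumptions.

```python
def reorder_for_transfer(line, n=2):
    """Group the pixels of one line by (position mod n) and concatenate
    the groups; with n=2, even-indexed pixels are sent first, then
    odd-indexed ones (a sketch of the L mod N rule)."""
    groups = [[p for i, p in enumerate(line) if i % n == r] for r in range(n)]
    return [p for g in groups for p in g]

def restore_original(transferred, length, n=2):
    """Invert reorder_for_transfer for a line of the given length."""
    out = [None] * length
    idx = 0
    for r in range(n):
        for pos in range(r, length, n):
            out[pos] = transferred[idx]
            idx += 1
    return out

line = list(range(8))              # physical order 0..7
sent = reorder_for_transfer(line)  # [0, 2, 4, 6, 1, 3, 5, 7]
# A burst hitting 4 consecutive transferred pixels, e.g. sent[2:6],
# lands on physical positions 4, 6, 1, 3: scattered, not a run of 4.
assert restore_original(sent, len(line)) == line
```

After restoration on the proximal side, a burst that corrupted one transferred group damages only every N-th physical pixel, which is what makes neighbour-based interpolation possible later.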
  • The cable 41 extends from the image sensor 21 through the insertion portion 6 and the universal cord 11 (see FIG. 1) to the connector portion 12.
  • The cable 41 includes a drive signal line for transmitting various drive signals (a reference clock CLK, various synchronization signals, etc.), a power supply line for supplying the power supply voltage VDD for driving the image sensor 21, and an imaging signal output line for transmitting the imaging signal.
  • An FPGA 25, which is the proximal-side processing unit connected to the image sensor 21 via the cable 41, is disposed in the connector portion 12.
  • The FPGA 25 is constituted by a so-called FPGA (Field Programmable Gate Array); under the control of the video processor 3 it adjusts various timings related to the image sensor 21, receives the image data from the image sensor 21, applies predetermined processing, and sends the result to the image processing unit in the video processor 3.
  • In the FPGA 25, a parity check unit 26, a pixel rearrangement unit 27, and an interpolation processing unit 28 are formed; these are described in detail below.
  • The parity check unit 26 performs error detection on the parity-added pixel data based on the parity added by the parity adding unit 24 in the image sensor 21, outputs error detection presence/absence information, and outputs the pixel data after detection.
  • the pixel rearrangement unit 27 rearranges the pixel data after parity check (error detection) output from the parity check unit 26 into the original array.
  • the interpolation processing unit 28 inputs the pixel data rearranged by the pixel rearrangement unit 27 and performs predetermined interpolation processing based on the error detection presence / absence information output from the parity check unit 26.
  • the endoscope system 1 of the present embodiment includes a video processor 3 that is connected to the endoscope 2 and that inputs the imaging signal and performs predetermined image processing.
  • FIG. 3 is an explanatory diagram illustrating an example in which pixel data output from the image sensor is divided into an odd number and an even number in the endoscope of the first embodiment.
  • FIG. 4 is an explanatory diagram showing an example of parity added to the pixel data output from the image sensor in the endoscope of the first embodiment.
  • FIG. 5 is an explanatory diagram showing an example of parity deletion and pixel data rearrangement after a parity check is performed on the parity-added pixel data in the endoscope of the first embodiment.
  • In the present embodiment, under the control of the readout control unit 23, the sensor unit 22 divides the data of all 400 pixels into odd-numbered and even-numbered pixels as shown in FIG. 3 and transfers them.
  • The data of all 400 pixels, rearranged into odd-numbered and even-numbered pixels, are transferred from the sensor unit 22 to the parity adding unit 24.
  • the parity adding unit 24 adds a parity for every 100 pixels, for example, as shown in FIG.
  • Specifically, the parity adding unit 24 divides the 400 pixels of data into four blocks of 100 pixels each (odd block A, odd block B, even block C, even block D), adds two parities to the pixel group of each block, and outputs the result toward the subsequent stage.
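The blocking and parity step can be sketched as follows. The patent does not specify the parity code, so the two parity words used here (a XOR and a truncated byte sum) are placeholders; only the block structure, four blocks of 100 pixels with two parity words each, follows the text.

```python
def add_block_parity(pixels, block=100):
    """Split a reordered line into fixed-size blocks and append two
    parity words to each block (XOR and truncated sum, purely
    illustrative: the real parity scheme is not disclosed)."""
    assert len(pixels) % block == 0
    out = []
    for i in range(0, len(pixels), block):
        blk = pixels[i:i + block]
        p1 = 0
        for v in blk:
            p1 ^= v
        p2 = sum(blk) & 0xFF
        out.append(blk + [p1, p2])
    return out

odds = list(range(1, 400, 2))    # 200 odd-numbered pixels
evens = list(range(0, 400, 2))   # 200 even-numbered pixels
blocks = add_block_parity(odds + evens)   # blocks A, B, C, D
assert len(blocks) == 4 and all(len(b) == 102 for b in blocks)
```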
  • The parity-added pixel data group output from the parity adding unit 24 is input via the cable 41 to the parity check unit 26 in the FPGA 25, which is the proximal-side processing unit.
  • The parity check unit 26 receives the parity-added pixel data group, executes a predetermined parity check, deletes the parity, and then outputs the data to the pixel rearrangement unit 27.
  • the parity check unit 26 transmits error detection presence / absence information and error position information to the interpolation processing unit 28.
  • When an error is detected in, for example, block A, the parity check unit 26 treats the pixel data of block A (at this point, the data of all 100 pixels in the block) as interpolation candidates having an error, and sends that information to the interpolation processing unit 28.
  • The pixel data group on which the parity check (error detection) has been executed in the parity check unit 26 and from which the parity has been deleted is rearranged by the pixel rearrangement unit 27 into the original array as shown in FIG. 5 and sent to the subsequent interpolation processing unit 28.
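On the proximal side, the parity check and parity deletion might look like the sketch below. The two parity words (XOR and truncated byte sum) are placeholders for the unspecified real scheme, and the function name is invented.

```python
def check_and_strip(blocks):
    """Recompute each block's two parity words, record the indices of
    blocks whose stored parity does not match (these become
    interpolation candidates), and return payloads with parity removed."""
    payloads, bad = [], []
    for n, b in enumerate(blocks):
        blk, p1, p2 = b[:-2], b[-2], b[-1]
        q1 = 0
        for v in blk:
            q1 ^= v
        if (q1, sum(blk) & 0xFF) != (p1, p2):
            bad.append(n)   # error detected in this block
        payloads.append(blk)
    return payloads, bad

blocks = [[1, 2, 3, 1 ^ 2 ^ 3, (1 + 2 + 3) & 0xFF],  # consistent parity
          [4, 5, 6, 0, 0]]                           # corrupted parity
payloads, bad = check_and_strip(blocks)
assert bad == [1] and payloads[0] == [1, 2, 3]
```

The `bad` list corresponds to the error detection presence/absence and error position information that the parity check unit 26 sends to the interpolation processing unit 28.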
  • FIG. 6 shows, for the pixel data after parity check, parity removal, and rearrangement in the endoscope of the first embodiment, the relationship between the pixel data of the block in which an error has occurred and the adjacent pixel data.
  • FIG. 7 is an explanatory diagram showing, for the pixel data after parity check, parity deletion, and rearrangement in the endoscope of the first embodiment, the state of the pixel data of the block in which an error has occurred before and after interpolation.
  • the interpolation processing unit 28 inputs the pixel data rearranged in the original array in the pixel rearrangement unit 27 and acquires the parity check result (error detection presence / absence information and error position information) from the parity check unit 26.
  • the interpolation processing unit 28 executes the following processing based on the parity check result from the parity check unit 26.
  • The interpolation processing unit 28 compares the output value of the pixel data of the block that is an interpolation candidate in which an error has occurred (block A in the above example) with the average value of the output values of the two pixel data adjacent on both sides of the candidate pixel after rearrangement (see FIG. 6). Note that the pixel data on both sides of the candidate pixel belong to blocks considered to have no error.
  • When the interpolation processing unit 28 determines that the deviation between the output value of the pixel data of block A, the interpolation candidate in which an error has occurred, and the average value of the output values of the two adjacent pixel data after rearrangement is small, it does not execute the interpolation process and outputs the pixel data to the subsequent stage as they are.
  • Conversely, when the interpolation processing unit 28 determines that the deviation between the output value of the pixel data of block A and the average value of the output values of the two adjacent pixel data after rearrangement is large, it executes the interpolation process.
  • the interpolation processing unit 28 replaces the output value of the pixel data related to the block A, which is an interpolation candidate in which an error has occurred, with the average value of the output values of the two adjacent pixel data after the rearrangement.
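The interpolation decision can be sketched as below. The text only says interpolation runs when the deviation is "large", so the numeric threshold here is an assumption, as are the function and parameter names.

```python
def interpolate_candidates(pixels, is_candidate, threshold=16):
    """For each pixel flagged as an interpolation candidate (its parity
    block failed), compare it with the mean of its two neighbours in
    physical order and replace it only when it deviates by more than
    the threshold; otherwise the value is passed through unchanged."""
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        if not is_candidate[i]:
            continue
        avg = (pixels[i - 1] + pixels[i + 1]) / 2
        if abs(pixels[i] - avg) > threshold:
            out[i] = int(avg)   # replace with the neighbour average
    return out

pixels = [10, 200, 12, 13]               # index 1 corrupted by a burst
flags = [False, True, False, False]
assert interpolate_candidates(pixels, flags) == [10, 11, 12, 13]
```

Because of the transfer-order rearrangement, the neighbours of a candidate pixel come from blocks considered error-free, which is what makes this average meaningful.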
  • As described above, the endoscope 2 of the present embodiment has the following features. First, in the image sensor 21 provided at the distal end portion 7, pixel data whose pixel arrangement has been rearranged according to a predetermined rule are divided into a plurality of pixel data groups, a predetermined parity is added to each block, and the data are transmitted to the proximal side via the cable.
  • the received pixel data is subjected to a parity check for each block, and the pixel data group of each block is rearranged to the original pixel arrangement again.
  • When the parity check unit 26 detects an error, the pixel data of the block in which the error was detected are regarded as interpolation candidates having an error.
  • The interpolation processing unit 28 compares the output value of the pixel data set as interpolation candidates with the output values of the adjacent pixel data after rearrangement into the original array, and performs interpolation according to the result.
  • When interpolation is performed, the candidate pixel data are replaced with the average value of the adjacent pixel data.
  • That is, the endoscope 2 rearranges the pixel data in the image sensor 21 in advance so that pixels in which an error has occurred are not transmitted consecutively, and then transmits the pixel data.
  • As a result, errors are detected and corrected accurately, and the pixel data can be transferred reliably.
  • the first embodiment is characterized in that the pixel data output from the sensor unit 22 is rearranged in accordance with an instruction from the readout control unit 23.
  • an example of this rearrangement will be described.
  • FIGS. 8 to 12 are explanatory views respectively showing examples of rearrangement of pixel data in accordance with an instruction from the read control unit in the endoscope of the first embodiment.
  • In these examples, the rearrangement is performed in units of pixels, into odd-numbered and even-numbered pixels, with respect to the physical pixel arrangement on the same line in the sensor unit 22.
  • FIG. 11 shows the transfer order for the physical pixel arrangement (the numbers correspond to the pixel positions), and the pixels may be rearranged and transferred as shown in FIG. 12 according to that order.
  • the requirements of the “reordering rules” in the present embodiment are as follows.
  • It is important that the rule be such that, when the original physical array is restored, continuous error data are dispersed into independent error data. Such a rule can be set arbitrarily by using a conversion table that stores the correspondence between the physical arrangement (pixel data number) and the transfer-order array (data transmission order).
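A conversion table as described can be sketched like this; the 8-entry table is invented for illustration, and any permutation that scatters neighbouring physical pixels across the transfer order would do.

```python
# Conversion table mapping physical pixel index -> transfer slot.
TABLE = [0, 4, 1, 5, 2, 6, 3, 7]   # physical pixel i is sent in slot TABLE[i]

def to_transfer_order(line):
    """Rearrange one line of pixels into the transfer order."""
    sent = [None] * len(line)
    for i, pixel in enumerate(line):
        sent[TABLE[i]] = pixel
    return sent

def to_physical_order(sent):
    """Restore the original physical arrangement on the proximal side."""
    return [sent[TABLE[i]] for i in range(len(sent))]

line = [10, 11, 12, 13, 14, 15, 16, 17]
sent = to_transfer_order(line)     # [10, 12, 14, 16, 11, 13, 15, 17]
assert to_physical_order(sent) == line
```

Because the same table is known on both sides, an arbitrary rule can be chosen without changing the restoration logic.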
  • Such an image sensor may erroneously recognize disturbance noise as V-synchronization control when the noise is applied. If it does, the synchronization timing between the image sensor and the endoscope system shifts, causing detrimental effects such as displacement of the displayed image.
  • The following is an example of resolving such a problem: a dead zone in which the image sensor does not accept V-synchronization control is set, and V-synchronization control performed within the dead zone is invalidated (not accepted by the image sensor).
  • the start / end timing of the dead zone can be set in units of rows and clocks, and the dead zone can be switched between valid / invalid.
  • V-synchronization control needs to be performed in the following two cases: (1) when the system and the image sensor each operate on the output of their own clock generator, and the slight shift in synchronization timing that arises in each field due to the deviation between the two clocks is to be corrected field by field; and (2) at startup, or after disturbance noise has been applied, when the synchronization timing of the image sensor deviates greatly from that of the system and is to be corrected.
  • In case (2), if the control falls within the dead zone, the synchronization timing control cannot be performed.
  • In that case, the dead zone setting of the image sensor may be temporarily invalidated, the synchronization timing control performed to bring the sensor to the desired timing, and the dead zone then made valid again.
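The dead-zone gating might be sketched as follows. The class, row numbers, and enable flag are illustrative; a real sensor would implement this with hardware row/clock counters.

```python
class VSyncGate:
    """Dead zone for external V-sync control: pulses arriving inside
    the configured row window are ignored while the gate is enabled."""
    def __init__(self, start_row, end_row, enabled=True):
        self.start_row, self.end_row = start_row, end_row
        self.enabled = enabled

    def accept(self, row):
        """Return True if a V-sync pulse at this row should be honoured."""
        in_dead_zone = self.start_row <= row < self.end_row
        return not (self.enabled and in_dead_zone)

gate = VSyncGate(start_row=10, end_row=500)
assert not gate.accept(100)   # inside the dead zone: pulse ignored
assert gate.accept(505)       # outside: accepted
gate.enabled = False          # temporarily disable to force a resync
assert gate.accept(100)
```

Disabling the gate, issuing the corrective V-sync, and re-enabling it mirrors the recovery sequence described for case (2).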
  • an endoscope can be connected to a plurality of types of processors.
  • an endoscope performs the same operation regardless of the type of a connection destination processor.
  • FIG. 14 shows an example of an endoscope system 101 having an endoscope 102 that solves such a problem.
  • the endoscope 102 includes a CMOS image sensor 121 at the distal end, and an image processing unit 113 and a control unit 114 at the connector unit 112.
  • the endoscope 102 can be connected to an NTSC video processor 103A and a PAL video processor 103B.
  • the NTSC video processor and the PAL video processor have different frame rates and different driving frequencies of clocks supplied from the processors.
  • The CMOS image sensor mounted on the endoscope 102 operates with an operation mode, including the frame rate, set in its registers.
  • The endoscope 102 can change the operation mode according to the broadcast standard with which the video processor to be connected complies. That is, the endoscope 102 determines whether the connected video processor is the NTSC video processor 103A or the PAL video processor 103B and, based on the determination result, sets the registers of the image sensor 121 by a control signal so as to switch between NTSC and PAL. At the same time, the endoscope 102 switches the control method of the image processing unit 113 of the connector unit 112 between NTSC and PAL.
  • In other words, the endoscope 102 sets the registers of the image sensor 121 by a control signal according to whether the connected video processor is the NTSC video processor 103A or the PAL video processor 103B, and switches the frame rate accordingly. The frame rate at which the image sensor is driven is switched by changing the timing of the synchronization signal.
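The NTSC/PAL register switch could look like the sketch below. The register address, values, and the `MODES` map are invented, since the sensor's real register map is not disclosed here; only the decision flow follows the text.

```python
# Hypothetical register map: address 0x10 and the mode values are
# invented stand-ins for the undisclosed sensor registers.
MODES = {
    "NTSC": {"frame_rate": 60, "mode_reg": 0x00},
    "PAL":  {"frame_rate": 50, "mode_reg": 0x01},
}

def configure_for_processor(processor_type, write_register):
    """Pick the operation mode from the detected processor type and
    program the sensor's (assumed) mode register accordingly."""
    mode = MODES[processor_type]
    write_register(0x10, mode["mode_reg"])
    return mode["frame_rate"]

regs = {}
rate = configure_for_processor("PAL", lambda addr, val: regs.update({addr: val}))
assert rate == 50 and regs[0x10] == 0x01
```

The same decision would also drive the connector-side image processing unit, so sensor and FPGA stay in the same broadcast standard.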
  • The endoscope 102 is not limited to switching between NTSC and PAL; it can also switch the driving mode of the image sensor, such as pixel addition, according to the connected video processor. This pixel addition may be performed by the image sensor 121 or by the FPGA in the connector unit 112.
  • Pixel addition shortens the readout time of the image sensor.
  • Since the frame rate can thereby be improved, the vertical blanking period becomes longer, so the possible emission period of a PWM light source that is to emit during vertical blanking becomes longer and the image becomes brighter.
  • Pixel addition in the FPGA can improve the dynamic range and AD resolution in a pseudo manner compared with pixel addition in the image sensor.
  • The image output from the image sensor may require image processing such as pixel defect correction.
  • Image processing that is not installed in a conventional processor must therefore be performed in the endoscope.
  • When the endoscope is connected to a new processor, optimal image processing can be performed by the processor, so the endoscope may output an unprocessed image to the processor without performing image processing itself.
  • The image processing performed in the endoscope when a conventional processor is connected may be equivalent to the image processing installed in the new processor, or it may be simpler image processing that uses fewer resources than the processor.
  • Unlike a CCD image sensor, a CMOS image sensor has registers and counters inside.
  • The necessary register settings are performed from the outside to drive it.
  • The image sensor may be reset unintentionally by static electricity, an electric knife, or other disturbance noise, or its internal registers may become abnormal, causing an image abnormality (image loss).
  • FIG. 15 shows an example of an endoscope system 201 having an endoscope 202 that solves such a problem.
  • the endoscope 202 has a CMOS image sensor 221 disposed at the distal end portion, and an FPGA 225 and a drive circuit 247 are disposed in the connector portion 212.
  • In the FPGA 225, a video receiving circuit 241, an image processing circuit 242, a video transmitting circuit 243, a communication control circuit 244, a drive control circuit 245, and an operation state monitor 246 are formed.
  • The connector unit 212 of the endoscope 202 is connected to the video processor 203; the endoscope 202 is supplied with power, a clock, and the like from the video processor 203, while a video signal is transmitted from the endoscope 202 toward the video processor 203.
  • The endoscope 202 having such a configuration has the following characteristics: the registers of the CMOS image sensor can be set by communication from the outside; the operation status of the CMOS image sensor can be monitored externally; and the driving of the CMOS image sensor (power supply, clock, control signals) can be controlled externally.
  • Using these means, the endoscope 202 (1) monitors the operation status of the image sensor.
  • The monitoring method may be any of the following.
    * Check whether the video signal from the image sensor is received:
      a: check that the header and footer information included in the video signal can be read (is correct);
      b: check that no decoding error, such as an 8B10B error, has occurred;
      c: check that the CDR of the video receiving circuit is locked;
      d: check the brightness information of the video signal (e.g. that the OB value is appropriate and not stuck at 0 or at the upper limit of the AD, or showing another abnormal value).
    * Check the power consumption (current and/or voltage) of the image sensor:
      a: the power consumption differs between the driving state and the standby state.
    * Monitor the register status:
      a: periodically read out the operation-mode register by communication and check whether it indicates the standby state or the operating state;
      b: check whether the operation-mode register embedded in the video signal blanking indicates the standby state or operation.
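The checklist above can be combined into a single health decision, sketched below with invented field names standing in for the real measurements (header/footer parse result, 8B10B decode status, CDR lock, OB level, supply current).

```python
def sensor_healthy(status):
    """Combine the monitoring checks into one health decision; the
    status dict fields are illustrative stand-ins, not a real API."""
    checks = [
        status["header_footer_readable"],            # a: header/footer parse OK
        not status["decode_error"],                  # b: no 8B10B decode error
        status["cdr_locked"],                        # c: receiver CDR locked
        0 < status["ob_level"] < status["ad_max"],   # d: OB level plausible
        status["supply_current_ma"] > status["standby_current_ma"],  # drawing drive-level power
    ]
    return all(checks)

ok = {"header_footer_readable": True, "decode_error": False,
      "cdr_locked": True, "ob_level": 64, "ad_max": 1023,
      "supply_current_ma": 120, "standby_current_ma": 5}
assert sensor_healthy(ok)
assert not sensor_healthy({**ok, "cdr_locked": False})
```

On a failed check, the drive control circuit could then reset or reconfigure the sensor externally, which is the point of making the drive controllable from the connector side.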

Abstract

This endoscope 2 has an imaging element 21 provided with a parity addition unit 24 for adding parity to pixel data output from a sensor unit 22 in retrieval order controlled by a retrieval control unit 23, and has a connector unit 12 located closer to the proximal end than a cable 41, the connector unit 12 having: an error detection unit (parity check unit) 26 for detecting errors of the pixel data with the parity added thereto; a pixel rearrangement unit 27 for rearranging the pixel data after error detection that are output from the error detection unit 26; and an interpolation process unit 28 for inputting the pixel data rearranged by the pixel rearrangement unit 27 and performing an interpolation process on the basis of information that is output from the error detection unit 26 and that indicates whether or not an error has been detected.

Description

Endoscope
The present invention relates to an endoscope, and more particularly to an endoscope including a solid-state image sensor.
Endoscope systems, each including an endoscope that images a subject inside an examinee and an image processing apparatus that generates an observation image of the subject captured by the endoscope, are widely used in the medical field, the industrial field, and the like.
As the endoscope in such an endoscope system, an endoscope that employs, for example, a CMOS image sensor as the solid-state image sensor and transmits the imaging signal (video data) output from the CMOS image sensor to an image processing apparatus at a subsequent stage is widely known.
An image sensor such as the above-described CMOS image sensor is generally driven by receiving a predetermined power supply voltage and control signals from the image processing apparatus via cables disposed in the insertion portion of the endoscope and in the universal cord. The imaging signal (video data) output from the image sensor is likewise transferred via these cables toward a connector portion disposed at the base end of the insertion portion, and further toward the image processing apparatus.
Incidentally, in an endoscope system, treatment with an electric knife or the like may be performed in the vicinity while the endoscope is in use. Owing to the nature of the device, an electric knife inevitably generates high-intensity noise, and disturbance noise may be applied to the cables inside the endoscope. That is, when the imaging signal (video data) output from the image sensor is transferred, a burst error, in which many errors are concentrated within a short time, may occur due to disturbance noise from an electric knife or the like.
To cope with such transfer errors, various techniques for error detection and error correction in data transfer are conventionally known.
For example, a technique is widely known in which, when data are transmitted, an error-correction parity is added to the transmission data, and the receiving side checks the parity against the received data for errors; error-correction techniques using the so-called ECC (Error Check and Correct) technique are also widely known (Japanese Patent Application Laid-Open No. 2012-11123).
However, although an error-correction technique using ECC (Error Check and Correct) as disclosed in Japanese Patent Application Laid-Open No. 2012-11123 is effective against random errors that occur sporadically and independently, it may fail to correct burst errors in which many errors are concentrated within a short time due to noise from an electric knife or the like. When a transfer error caused by such disturbance noise occurs, the image formed from the transferred pixel data is greatly disturbed.
The present invention has been made in view of the above circumstances, and an object thereof is to provide an endoscope that reliably transfers pixel data even under conditions where strong disturbance noise can be applied.
An endoscope according to one aspect of the present invention includes: an image sensor disposed at a distal end portion of an insertion portion to be inserted into an examinee; a proximal-side processing unit disposed closer to the proximal end than the image sensor and connected to the image sensor; a sensor unit, provided in the image sensor, that images a subject, performs photoelectric conversion, and outputs predetermined pixel data; a read-out control unit, provided in the image sensor, that designates the order in which the pixel data are read out from the sensor unit; a parity addition unit, provided in the image sensor, that adds predetermined parity to the pixel data output under the read-out order controlled by the read-out control unit; an error detection unit, provided in the proximal-side processing unit, that performs error detection on the parity-added pixel data on the basis of the parity added by the parity addition unit, outputs information on the presence or absence of a detected error, and outputs the error-checked pixel data; a pixel rearrangement unit, provided in the proximal-side processing unit, that rearranges the error-checked pixel data output from the error detection unit; and an interpolation processing unit, provided in the proximal-side processing unit, that receives the pixel data rearranged by the pixel rearrangement unit and performs predetermined interpolation processing on the basis of the error presence/absence information output from the error detection unit.
FIG. 1 is a diagram illustrating the configuration of an endoscope system including an endoscope according to a first embodiment of the present invention.
FIG. 2 is a block diagram illustrating the electrical configuration of the endoscope system including the endoscope of the first embodiment.
FIG. 3 is an explanatory diagram illustrating an example in which pixel data output from the image sensor are divided into odd-numbered and even-numbered pixels for transfer in the endoscope of the first embodiment.
FIG. 4 is an explanatory diagram illustrating an example of the parity added to the pixel data output from the image sensor in the endoscope of the first embodiment.
FIG. 5 is an explanatory diagram illustrating an example of parity removal and pixel-data rearrangement after a parity check has been performed on the parity-added pixel data in the endoscope of the first embodiment.
FIG. 6 is an explanatory diagram illustrating, for the pixel data after parity check, parity removal, and rearrangement in the endoscope of the first embodiment, the relationship between the pixel data in a block in which an error has occurred and the adjacent pixel data.
FIG. 7 is an explanatory diagram illustrating, for the pixel data after parity check, parity removal, and rearrangement in the endoscope of the first embodiment, the state of the pixel data in a block in which an error has occurred, before and after interpolation.
FIG. 8 is an explanatory diagram illustrating an example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment.
FIG. 9 is an explanatory diagram illustrating another example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment.
FIG. 10 is an explanatory diagram illustrating another example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment.
FIG. 11 is an explanatory diagram illustrating table data prepared in advance for another example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment.
FIG. 12 is an explanatory diagram illustrating another example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment, using the table data shown in FIG. 11.
FIG. 13 is a timing chart illustrating the internal operation of the image sensor when the image sensor receives an external synchronization timing control signal in an endoscope according to a second embodiment of the present invention.
FIG. 14 is a block diagram illustrating the configuration of an endoscope system including an endoscope according to a third embodiment of the present invention.
FIG. 15 is a block diagram illustrating the configuration of an endoscope system including an endoscope according to a fourth embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<First Embodiment>
FIG. 1 is a diagram illustrating the configuration of an endoscope system including an endoscope according to the first embodiment of the present invention, and FIG. 2 is a block diagram illustrating the electrical configuration of the endoscope system including the endoscope of the first embodiment.
As shown in FIGS. 1 and 2, an endoscope system 1 including the endoscope of the first embodiment has: an endoscope 2 that observes and images a subject; a video processor 3 that is connected to the endoscope 2, receives the imaging signal, and performs predetermined image processing; a light source device 4 that supplies illumination light for illuminating the subject; and a monitor device 5 that displays an observation image corresponding to the imaging signal.
The endoscope 2 includes: an elongated insertion portion 6 to be inserted into a body cavity or the like of an examinee; an endoscope operation portion 10 disposed on the proximal side of the insertion portion 6 and grasped and operated by an operator; and a universal cord 11, one end of which extends from a side portion of the endoscope operation portion 10.
The insertion portion 6 includes: a rigid distal end portion 7 provided on the distal side; a bendable bending portion 8 provided at the rear end of the distal end portion 7; and a long, flexible tube portion 9 provided at the rear end of the bending portion 8.
A connector portion 12 is provided on the proximal side of the universal cord 11 (that is, on the proximal side with respect to the image sensor), and the connector portion 12 is connected to the light source device 4. Specifically, a mouthpiece (not shown) serving as the connection end of a fluid conduit projecting from the tip of the connector portion 12 and a light guide mouthpiece (not shown) serving as the supply end for illumination light are detachably connected to the light source device 4.
Furthermore, one end of a connection cable 13 is connected to an electrical contact portion provided on a side surface of the connector portion 12. The connection cable 13 internally contains, for example, a signal line for transmitting the imaging signal from a solid-state image sensor (CMOS image sensor) 21 (see FIG. 2) in the endoscope 2, as well as a control signal line and a power supply line for driving the solid-state image sensor (hereinafter also referred to simply as the image sensor); the connector at the other end of the cable is connected to the video processor 3.
As shown in FIG. 2, the endoscope 2 of the present embodiment includes an objective optical system (not shown), disposed at the distal end portion 7 of the insertion portion 6, including a lens that receives a subject image, and an image sensor (CMOS image sensor) 21 disposed on the image forming plane of the objective optical system.
The endoscope 2 also includes a cable 41 extending from the image sensor 21 and routed from the image sensor 21 through the insertion portion 6 and the universal cord 11 (see FIG. 1) to the connector portion 12 (in FIG. 2, to an FPGA 25 disposed in the connector portion 12).
<Image sensor 21>
As described above, the image sensor 21 in the present embodiment is a solid-state image sensor constituted by a CMOS image sensor. In the present embodiment, the image sensor 21 is provided with a sensor unit 22, a read-out control unit 23, and a parity addition unit 24, described below.
The sensor unit 22 receives a subject image, performs photoelectric conversion, and then outputs predetermined pixel data. The read-out control unit 23 designates the order in which pixel data are read out from the sensor unit 22. The parity addition unit 24 adds predetermined parity to the pixel data output under the read-out order controlled by the read-out control unit 23.
Further, where L is a physical pixel array number on one line of the sensor unit 22, N is an integer of 2 or more, and L mod N is the remainder when L is divided by N, the read-out control unit 23 instructs the sensor unit 22 to read out the pixel data such that the pixel data of physical pixel positions on the same line having equal values of L mod N are collected to form a data group, and the pixel-data sequence is set with such data groups as the minimum unit.
In the present embodiment, the case of N = 2 is taken as an example. Also, in the present embodiment, the physical pixel arrangement on one line of the sensor unit 22 is rearranged in units of pixels under the control of the read-out control unit 23; however, the rearrangement of pixels is not limited to this, and other schemes are also conceivable, for example, rearrangement in units of lines.
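As an illustration only (this sketch and the helper name `interleave` are not part of the publication), the residue-based read-out order described above can be modeled in Python as follows; for N = 2 it reduces to the odd/even split:

```python
def interleave(pixels, n=2):
    """Read out one line so that pixels whose index has the same
    remainder modulo n are grouped together: all residue-0 pixels
    first, then residue-1 pixels, and so on."""
    return [p for r in range(n) for p in pixels[r::n]]

# physical order 0..7 -> transfer order for N = 2
line = list(range(8))
print(interleave(line, 2))   # [0, 2, 4, 6, 1, 3, 5, 7]
```

With 0-based indices, the residue-0 group corresponds to the odd-numbered pixels of the figures' 1-based numbering; the grouping itself is the same.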
The cable 41 extends from the image sensor 21 and is routed from the image sensor 21 through the insertion portion 6 and the universal cord 11 (see FIG. 1) to the connector portion 12. The cable 41 contains drive signal lines for transmitting various drive signals (a reference clock CLK, various synchronization signals, and the like), a power supply line for supplying the power supply voltage VDD for driving the image sensor 21, an imaging signal output line for transmitting the imaging signal, and the like.
<FPGA25>
In the present embodiment, the FPGA 25, which is a proximal-side processing unit connected to the image sensor 21, is disposed in the connector portion 12 beyond the cable 41 from the image sensor 21. In the present embodiment, the FPGA 25 is constituted by a so-called FPGA (Field Programmable Gate Array); under the control of the video processor 3, it performs various timing adjustments for the image sensor 21, receives the image data from the image sensor 21, applies predetermined processing, and then sends the data to the image processing unit in the video processor 3.
Furthermore, in the present embodiment, the FPGA 25 forms a parity check unit 26, a pixel rearrangement unit 27, and an interpolation processing unit 28, described in detail below.
On the basis of the parity added by the parity addition unit 24 of the image sensor 21, the parity check unit 26 performs error detection on the parity-added pixel data, outputs information on the presence or absence of a detected error, and outputs the error-checked pixel data.
The pixel rearrangement unit 27 rearranges the pixel data after the parity check (error detection), output from the parity check unit 26, back into the original sequence.
The interpolation processing unit 28 receives the pixel data rearranged by the pixel rearrangement unit 27 and performs predetermined interpolation processing on the basis of the error presence/absence information output from the parity check unit 26.
Meanwhile, the endoscope system 1 of the present embodiment includes the video processor 3, which is connected to the endoscope 2, receives the imaging signal, and performs predetermined image processing.
<Operation of this embodiment>
Hereinafter, the error detection and error correction operations in the endoscope 2 of the present embodiment configured as described above will be described.
FIG. 3 is an explanatory diagram illustrating an example in which pixel data output from the image sensor are divided into odd-numbered and even-numbered pixels for transfer in the endoscope of the first embodiment. FIG. 4 is an explanatory diagram illustrating an example of the parity added to the pixel data output from the image sensor, and FIG. 5 is an explanatory diagram illustrating an example of parity removal and pixel-data rearrangement after a parity check has been performed on the parity-added pixel data.
In the present embodiment, when pixel data are output from the sensor unit 22 of the image sensor 21, the sensor unit 22, under the control of the read-out control unit 23, transfers the data of, for example, all 400 pixels divided into odd-numbered and even-numbered pixels, as shown in FIG. 3.
In the following description of pixel-data rearrangement, an example is given in which the data of all 400 pixels are transferred, divided into blocks of 100 pixels each; however, the technical idea of the present invention is not limited to such a number of pixels or such a block division, and can also be applied to the transfer of pixel data of other pixel counts or to other block divisions.
As described above, in the present embodiment, under the control of the read-out control unit 23, the data of all 400 pixels are rearranged into odd-numbered and even-numbered pixels and transferred from the sensor unit 22. Upon receiving the rearranged and transferred pixel data, the parity addition unit 24 adds parity, for example, for every 100 pixels, as shown in FIG. 4.
Specifically, in the present embodiment, the parity addition unit 24 divides the 400-pixel data into four blocks of 100 pixels each (odd-numbered block A, odd-numbered block B, even-numbered block C, and even-numbered block D), adds two parities to the pixel group of each block, and outputs the result toward the subsequent stage.
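A minimal sketch of the block-wise parity addition follows. The publication does not specify the parity scheme, so the two XOR parities over even- and odd-positioned words below, as well as the helper names, are assumptions for illustration only:

```python
from functools import reduce

def split_into_blocks(pixels, block_size=100):
    """Divide the rearranged pixel stream into fixed-size blocks
    (e.g. 400 pixels -> blocks A, B, C, D of 100 pixels each)."""
    return [pixels[i:i + block_size] for i in range(0, len(pixels), block_size)]

def add_block_parity(block):
    """Append two parity words to one block; here, XOR parities over
    the even- and odd-positioned words (placeholder scheme)."""
    p0 = reduce(lambda a, b: a ^ b, block[0::2], 0)
    p1 = reduce(lambda a, b: a ^ b, block[1::2], 0)
    return block + [p0, p1]
```

Any scheme with comparable per-block error-detection capability could take the place of the XOR parities here.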
The parity-added pixel data groups output from the parity addition unit 24 are input, via the cable 41, to the parity check unit 26 in the FPGA 25, which is the proximal-side processing unit.
The parity check unit 26 receives the parity-added pixel data groups, executes a predetermined parity check, removes the parity, and then outputs the data to the pixel rearrangement unit 27.
At the same time, the parity check unit 26 transmits the error presence/absence information and the error position information to the interpolation processing unit 28.
Here, for example, as shown in FIG. 5, suppose that an error has occurred in the transfer of the pixel data of block A due to disturbance noise or the like (see the x mark in the figure). In this case, the parity check unit 26 treats the pixel data of block A (since block A contains 100 pixels, the data of all 100 pixels) as interpolation candidates containing an error, and transmits this information to the interpolation processing unit 28.
Meanwhile, the pixel rearrangement unit 27 rearranges the pixel data groups, on which the parity check (error detection) has been executed by the parity check unit 26 and from which the parity has been removed, back into the original sequence as shown in FIG. 5, and outputs them to the interpolation processing unit 28 at the subsequent stage.
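The receiving side's parity check, parity removal, and restoration of the original order can be sketched as follows. Again, the parity scheme is not specified in the publication; the per-block XOR parities over even- and odd-positioned words, and all names here, are assumptions for illustration:

```python
def check_and_strip_parity(block_with_parity):
    """Recompute the two placeholder XOR parities, compare them with the
    received ones, and return (payload, error_detected)."""
    payload = block_with_parity[:-2]
    p0, p1 = block_with_parity[-2], block_with_parity[-1]
    c0 = c1 = 0
    for w in payload[0::2]:
        c0 ^= w
    for w in payload[1::2]:
        c1 ^= w
    return payload, not (c0 == p0 and c1 == p1)

def deinterleave(stream, n=2):
    """Inverse of the residue-based read-out order: put each transferred
    word back at its physical pixel position."""
    out = [None] * len(stream)
    it = iter(stream)
    for r in range(n):
        for i in range(r, len(stream), n):
            out[i] = next(it)
    return out
```

Note that the parity check only flags the block; it does not locate or correct the erroneous word, which is why the subsequent interpolation step is needed.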
<Processing in the interpolation processing unit>
Next, the interpolation processing performed by the interpolation processing unit 28 when the parity check unit 26 has detected interpolation candidates containing an error will be described.
FIG. 6 is an explanatory diagram illustrating, for the pixel data after parity check, parity removal, and rearrangement in the endoscope of the first embodiment, the relationship between the pixel data in a block in which an error has occurred and the adjacent pixel data. FIG. 7 is an explanatory diagram illustrating the state of the pixel data in a block in which an error has occurred, before and after interpolation.
The interpolation processing unit 28 receives the pixel data rearranged into the original sequence by the pixel rearrangement unit 27, and acquires the parity check result (error presence/absence information and error position information) from the parity check unit 26.
The interpolation processing unit 28 then executes the following processing on the basis of the parity check result from the parity check unit 26.
That is, the interpolation processing unit 28 compares the output value of each pixel datum belonging to the interpolation-candidate block in which an error has occurred (block A in the above example) with the average of the output values of the two pixel data adjacent to it after rearrangement (the pixel data on both sides of the interpolation-candidate pixel datum in FIG. 5). The pixel data on both sides of the interpolation candidate belong to blocks considered to contain no error.
As shown in FIG. 6, when the interpolation processing unit 28 determines that the deviation between the output value of a pixel datum of block A, the interpolation candidate in which an error has occurred, and the average of the output values of the two adjacent pixel data after rearrangement is small, the interpolation processing unit 28 outputs the datum to the subsequent stage as it is, without executing interpolation.
On the other hand, as shown in FIG. 7, when the interpolation processing unit 28 determines that this deviation is large, the interpolation processing unit 28 executes interpolation.
That is, in this case, the interpolation processing unit 28 replaces the output value of the pixel datum of block A, the interpolation candidate in which an error has occurred, with the average of the output values of the two adjacent pixel data after rearrangement.
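The decision-and-replacement step can be sketched as below. The publication only distinguishes a "small" from a "large" deviation, so the numeric threshold, the integer averaging, and all names here are assumptions:

```python
def interpolate_candidates(pixels, suspect, threshold=16):
    """For each interpolation-candidate pixel whose two neighbors are not
    themselves suspect, replace it with the neighbors' average only when
    it deviates from that average by more than the (assumed) threshold."""
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        if suspect[i] and not suspect[i - 1] and not suspect[i + 1]:
            avg = (pixels[i - 1] + pixels[i + 1]) // 2
            if abs(pixels[i] - avg) > threshold:
                out[i] = avg
    return out
```

Leaving small deviations untouched preserves genuine image detail in blocks that were flagged but whose individual pixels happen to be plausible.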
As described above, the endoscope 2 of the present embodiment has the following features. First, in the image sensor 21 provided in the distal end portion 7, the pixel data whose pixel sequence has been rearranged according to a predetermined rule are divided into pixel data groups of a plurality of blocks, predetermined parity is added to the pixel data group of each block, and the data are then transmitted via the cable to the proximal side.
Meanwhile, in the FPGA 25 on the proximal side of the cable, a parity check is performed on the received pixel data block by block, and the pixel data groups of the blocks are rearranged back into the original pixel sequence. When the parity check unit 26 detects an error, the pixel data of the block in which the error was detected are regarded as interpolation candidates containing an error.
Thereafter, the interpolation processing unit 28 compares the output value of each interpolation-candidate pixel datum with the output values of the adjacent pixel data after rearrangement into the original sequence and, depending on the result, replaces the interpolation-candidate pixel datum with the average value of the adjacent pixel data.
In this way, the endoscope 2 of the present embodiment rearranges the pixel data in the image sensor 21 before transmission so that erroneous pixels are not transmitted consecutively. Therefore, even under conditions where strong disturbance noise from an electric knife or the like is applied to the cable 41 and the like, errors are detected with high accuracy and corrected appropriately, so that the pixel data can be transferred accurately.
<Examples of rearrangement>
As described above, the first embodiment is characterized in that the pixel data output from the sensor unit 22 are rearranged in accordance with instructions from the read-out control unit 23. Examples of this rearrangement will be described below.
FIGS. 8 to 12 are explanatory diagrams each illustrating an example of rearrangement of pixel data according to an instruction from the read-out control unit in the endoscope of the first embodiment.
As described above, in the present embodiment, as shown in FIG. 8, the physical pixel sequence on one line of the sensor unit 22 is rearranged pixel by pixel into odd-numbered pixels and even-numbered pixels.
 Alternatively, as shown in FIG. 9, the pixels may be rearranged according to the value of MOD4 (the remainder when the physical pixel number on the same line in the sensor unit 22 is divided by 4).
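A minimal sketch of this MOD-N rearrangement, written here in Python for illustration (in the embodiment the reordering is performed inside the image sensor), together with its inverse:

```python
def reorder_mod_n(pixels, n=4):
    """Group pixels by (physical index mod n) and concatenate the groups.

    For n=4 and 8 pixels the transfer order becomes
    [p0, p4, p1, p5, p2, p6, p3, p7]."""
    return [pixels[i] for r in range(n) for i in range(r, len(pixels), n)]

def restore_mod_n(pixels, n=4):
    """Inverse of reorder_mod_n: return the pixels to physical order."""
    order = [i for r in range(n) for i in range(r, len(pixels), n)]
    out = [None] * len(pixels)
    for pos, i in enumerate(order):
        out[i] = pixels[pos]
    return out
```

The receiving side applies `restore_mod_n` after error detection, which corresponds to the pixel rearranging unit in the proximal-side processing unit.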
 As shown in FIG. 10, the physical sequence of pixels on the same line in the sensor unit 22 may first be rearranged into odd and even pixels, and the rearranged sequence may then be transferred starting from its end.
 Furthermore, the rearrangement order of the physical pixels on the same line in the sensor unit 22 may be stored in advance in a table such as that shown in FIG. 11 (FIG. 11 associates the physical pixel sequence with the pixel sequence at the time of transfer), and the pixels may be rearranged and transferred in that order as shown in FIG. 12.
 Here, the requirement for the "rearrangement rule" in the present embodiment is as follows. Even if errors occur in consecutive pixel data during data transfer, the rule must disperse those consecutive error data into isolated error data when the data are restored to the original physical array. Such a rule can be set arbitrarily by using a conversion table that stores the correspondence between the physical arrangement (pixel data numbers) and the transfer order arrangement (data transmission order).
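The effect of such a rule can be illustrated with a small Python sketch; the 16-entry conversion table below (physical index mod 4) is a hypothetical example, not a table from the patent. A burst of three consecutive transfer errors is dispersed into isolated errors once the physical order is restored:

```python
# Hypothetical conversion table mapping transfer position -> physical pixel
# number. Any table whose consecutive entries are physically non-adjacent
# satisfies the rearrangement rule.
TABLE = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def to_transfer_order(pixels):
    """Rearrange a physical line into the order it is sent on the cable."""
    return [pixels[i] for i in TABLE]

def to_physical_order(stream):
    """Restore the received stream to the original physical arrangement."""
    out = [None] * len(stream)
    for pos, phys in enumerate(TABLE):
        out[phys] = stream[pos]
    return out

# Simulate a burst of errors hitting three consecutive transfer positions.
stream = to_transfer_order(list(range(16)))
for pos in (5, 6, 7):
    stream[pos] = -1  # corrupted in transit

restored = to_physical_order(stream)
bad = sorted(i for i, v in enumerate(restored) if v == -1)
# The burst lands on physically isolated pixels, each of which still has
# intact neighbours and can therefore be interpolated.
```

With this table the burst at transfer positions 5 to 7 is scattered to physical pixels 5, 9, and 13, none of which are adjacent.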
 Conventionally, an image sensor is known that allows control of the vertical synchronization timing (V-sync control) in order to synchronize the image sensor disposed at the distal end of the insertion portion with the endoscope system (Japanese Patent No. 5356630).
 However, when disturbance noise is applied, such an image sensor may erroneously recognize the noise as a V-sync control command. If it does, the synchronization timing between the image sensor and the endoscope system shifts, causing problems such as displacement of the displayed image.
 An example of resolving this problem is as follows. A dead band in which V-sync control is not accepted is set in the image sensor, so that any V-sync control issued within the dead band is invalidated (not accepted by the image sensor). The start and end timing of the dead band can be set in units of rows or clocks, and the dead band setting itself can be switched between enabled and disabled.
 V-sync control is required in the following two cases. That is,
 (1) the system and the image sensor each run on the output of their own clock generator, and the small synchronization drift that accumulates in each field because of the deviation between the two is to be corrected field by field; and
 (2) at startup, or after disturbance noise has been applied, the synchronization timing of the image sensor deviates greatly from that of the system and is to be corrected.
 In case (1), the maximum drift per field is fixed by design, so the range over which the synchronization timing must be controlled is known. By accepting synchronization timing control only within that range, and setting a dead band that ignores it elsewhere, the probability of malfunction due to disturbance can be reduced.
 In case (2), synchronization timing control cannot be performed while the dead band is enabled. As shown in FIG. 13, however, the dead band setting of the image sensor can be temporarily disabled, the synchronization timing controlled, and the dead band re-enabled once the desired timing has been achieved.
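A schematic sketch of the dead-band behaviour, with hypothetical window bounds expressed as row counts (the patent allows the bounds to be set in row or clock units):

```python
class VSyncGate:
    """Sketch of a dead-band filter for external V-sync timing commands.

    Commands are accepted only inside an expected window of row counts;
    outside that window they are treated as disturbance noise and ignored.
    The window bounds and the row-count interface are illustrative, not
    taken from the patent."""

    def __init__(self, window_start, window_end):
        self.window_start = window_start
        self.window_end = window_end
        self.dead_zone_enabled = True

    def accept(self, row):
        """Return True if a V-sync command arriving at 'row' is honoured."""
        if not self.dead_zone_enabled:
            return True  # case (2): dead band temporarily disabled
        return self.window_start <= row <= self.window_end
```

For the per-field correction of case (1) the gate stays enabled; for the large re-synchronization of case (2) it is disabled, the timing is corrected, and it is enabled again.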
 In general, an endoscope can be connected to several types of processor. Conventionally, the endoscope operated identically regardless of the type of processor to which it was connected.
 In this situation, an endoscope that can switch its operation according to the connected processor has been desired, both to maintain compatibility and to improve image quality.
 FIG. 14 shows an example of an endoscope system 101 having an endoscope 102 that solves this problem. As shown in FIG. 14, the endoscope 102 includes a CMOS image sensor 121 at its distal end, and an image processing unit 113 and a control unit 114 in its connector unit 112.
 The endoscope 102 can be connected to either an NTSC video processor 103A or a PAL video processor 103B. NTSC and PAL video processors are known to differ in frame rate and in the drive frequency of the clock they supply. A CMOS image sensor of the kind mounted in the endoscope 102 is driven by writing its operating mode, including the frame rate, to registers.
 Under these circumstances, the endoscope 102 can change its operating mode according to the broadcast standard with which the connected video processor complies. That is, the endoscope 102 determines whether the connected video processor is the NTSC video processor 103A or the PAL video processor 103B and, based on the result, sets the registers of the image sensor 121 via a control signal so as to switch between NTSC and PAL operation. At the same time, the endoscope 102 switches the control method of the image processing unit 113 in the connector unit 112 between NTSC and PAL.
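Schematically, the mode switch might be sketched as follows; the register address `0x10`, the register values, and the callback interface are placeholders, since actual sensor register maps are device-specific and are not given in the source text:

```python
# Hypothetical register values -- real sensor register maps are
# device-specific and not disclosed in the source text.
SENSOR_MODES = {
    "NTSC": {"frame_rate": 60, "mode_reg": 0x00},
    "PAL":  {"frame_rate": 50, "mode_reg": 0x01},
}

def configure_for_processor(processor_standard, write_register):
    """Select the sensor operating mode for the attached video processor.

    processor_standard -- "NTSC" or "PAL", as detected on connection
    write_register     -- callback writing (address, value) to the sensor
    """
    mode = SENSOR_MODES[processor_standard]
    write_register(0x10, mode["mode_reg"])  # 0x10: hypothetical mode register
    return mode["frame_rate"]
```

The same determination result would also drive the switch of the image processing unit 113 in the connector unit.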
 Furthermore, based on whether the connected video processor is the NTSC video processor 103A or the PAL video processor 103B, the endoscope 102 sets the registers of the image sensor 121 via a control signal to switch the frame rate, and also switches the frame rate at which the image sensor is driven by changing the timing of the synchronization signal.
 Switching is not limited to NTSC versus PAL: the endoscope 102 can also switch the drive mode of the image sensor, such as pixel addition, according to the connected video processor. The pixel addition may be performed either in the image sensor 121 or in the FPGA in the connector unit 112.
 Performing pixel addition in the image sensor 121 shortens the sensor readout time. This improves the frame rate and lengthens the vertical blanking period, which in turn lengthens the period during which a PWM light source intended to emit during vertical blanking can emit, making the image brighter.
 Performing pixel addition in the FPGA of the connector unit, on the other hand, can pseudo-improve the dynamic range and AD resolution compared with performing it in the image sensor.
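The pseudo-improvement can be illustrated with a small sketch: summing two 10-bit samples in the FPGA yields an 11-bit result, one bit more than either input, whereas on-sensor binning would typically be renormalised to the sensor's output width. The pairwise addition below is an illustrative assumption, not the patent's circuit:

```python
def fpga_pixel_add(samples_10bit):
    """Add neighbouring 10-bit samples pairwise, keeping the full 11-bit sum.

    Carrying the extra bit downstream is what pseudo-improves the AD
    resolution and dynamic range relative to on-sensor binning."""
    it = iter(samples_10bit)
    return [a + b for a, b in zip(it, it)]
```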
 The image output from the image sensor may require image processing such as pixel-defect correction. When the endoscope is connected to a conventional processor, any image processing not provided by that processor must be performed within the endoscope. When the endoscope is connected to a new processor, the processor itself can perform optimal image processing, so the endoscope may output the image to the processor unprocessed, without performing image processing. The image processing performed in the endoscope when a conventional processor is connected may be equivalent to that provided by the new processor, or it may be simplified, since the endoscope has fewer resources than the processor.
 In general, unlike a CCD image sensor, a CMOS image sensor contains internal registers and counters. Normally it is driven by applying an initial reset at startup and then writing the necessary register settings from outside.
 An image sensor may also be reset unintentionally by static electricity, an electric scalpel, or other disturbance noise, or its internal registers may become corrupted, resulting in an image abnormality (loss of image).
 To address these problems, an endoscope satisfying the following has conventionally been desired:
 (1) if the image sensor is in a restartable state, restart the image sensor;
 (2) if it is not in a restartable state, stop driving the image sensor (power, clock, and control signals).
 FIG. 15 shows an example of an endoscope system 201 having an endoscope 202 that solves this problem. As shown in FIG. 15, the endoscope 202 has a CMOS image sensor 221 at its distal end, and an FPGA 225 and a drive circuit 247 in its connector unit 212.
 Formed in the FPGA 225 are a video receiving circuit 241, an image processing circuit 242, a video transmitting circuit 243, a communication control circuit 244, a drive control circuit 245, and an operation state monitor 246.
 The connector unit 212 of the endoscope 202 is connected to a video processor 203; the endoscope 202 receives power, clocks, and so on from the video processor 203, while a video signal is transmitted from the endoscope 202 to the video processor 203.
 The endoscope 202 configured in this way has the following features. That is, it has:
 a CMOS image sensor;
 CMOS image sensor registers that can be set by communication from outside;
 means for monitoring the operation state of the CMOS image sensor from outside; and
 means for controlling the drive of the CMOS image sensor (power, clock, and control signals) from outside.
 More specifically, the endoscope 202:
 (1) monitors the operation state of the image sensor, by any of the following methods:
 ※ confirm that the video signal from the image sensor is being received
  a; confirm that the header and footer information contained in the video signal can be read (is correct)
  b; confirm that no decoding errors such as 8B10B errors are occurring
  c; confirm that the CDR of the video receiving circuit is in the locked state
  d; confirm that the brightness information of the video signal is normal (known values such as the OB value are appropriate, and the signal is not stuck in an abnormal state such as 0 or the upper limit of the AD converter)
 ※ check the power consumption (current and/or voltage) of the image sensor
  a; since power consumption differs between the driving state and the standby state, judge the operation state from the power
 ※ monitor the register state of the image sensor
  a; periodically read out the operating-mode register by communication and confirm whether it indicates the standby state or the operating state
  b; confirm whether the operating-mode register embedded during the blanking of the video signal indicates the standby state or the operating state.
 (2) determines that the image sensor is not operating normally. That is,
 ※ allowing for cases in which the video signal or registers cannot be confirmed correctly because of disturbance noise, the endoscope does not judge from a single detected abnormality; it determines that the sensor is not operating only when the abnormality continues for several fields.
 (3) restarts the image sensor. That is,
 ※ a software reset is performed first and the necessary registers are then set, after which the process returns to (1). Immediately after static electricity has been applied, or while an electric scalpel is in use, a restart that fails the first time may succeed on the second or later attempt, so (1) to (3) are repeated until the image sensor operates normally.
 (4) stops driving the image sensor. That is,
 ※ if the sensor does not recover after a fixed number of restarts, it is judged to be unrestartable and its drive (power, clock, and control signals) is stopped. At this time, an error notification of the abnormal state may be given to the user.
 According to the present invention, it is possible to provide an endoscope that reliably transfers pixel data even under conditions in which strong disturbance noise can be applied.
 The present invention is not limited to the embodiments described above, and various changes and modifications are possible without departing from the gist of the invention.
 This application claims priority from Japanese Patent Application No. 2018-57100 filed in Japan on March 23, 2018, the disclosure of which is incorporated into the present specification and claims.

Claims (4)

  1.  An endoscope comprising:
     an image sensor disposed at a distal end portion of an insertion portion to be inserted into a subject;
     a proximal-side processing unit disposed on the proximal side of the image sensor and connected to the image sensor;
     a sensor unit, provided in the image sensor, that images a subject, performs photoelectric conversion, and outputs predetermined pixel data;
     a readout control unit, provided in the image sensor, that instructs the order in which the pixel data are read out from the sensor unit;
     a parity adding unit, provided in the image sensor, that adds a predetermined parity to the pixel data output under the readout order controlled by the readout control unit;
     an error detection unit, provided in the proximal-side processing unit, that, based on the parity added by the parity adding unit, performs error detection on the parity-added pixel data, outputs error-detection presence/absence information, and outputs the pixel data after error detection;
     a pixel rearranging unit, provided in the proximal-side processing unit, that rearranges the pixel data after error detection output from the error detection unit; and
     an interpolation processing unit, provided in the proximal-side processing unit, that receives the pixel data rearranged by the pixel rearranging unit and performs predetermined interpolation processing based on the error-detection presence/absence information output from the error detection unit.
  2.  The endoscope according to claim 1, wherein the readout control unit instructs the sensor unit to read out the pixel data such that pixel data whose physical pixel array numbers have the same value of L mod N are collected to form a data group, and the arrangement of the pixel data is set with that data group as the minimum unit,
     where L is a physical pixel array number of the sensor unit, N is an integer of 2 or more, and L mod N is the remainder when L is divided by N.
  3.  The endoscope according to claim 1, wherein the readout control unit instructs the sensor unit to read out the pixel data such that the pixel arrangement is converted and set with respect to the physical pixel array numbers of the sensor unit according to a predetermined regularity that does not include adjacent pixels.
  4.  The endoscope according to claim 2, wherein the parity adding unit adds a parity to each plurality of pixel data, and
     the interpolation processing unit, based on the error-detection presence/absence information output from the error detection unit, corrects erroneous pixels using pixel data whose physical pixel arrangement is adjacent to the data group in which the error occurred, when the errors exceed a predetermined threshold.
PCT/JP2018/042859 2018-03-23 2018-11-20 Endoscope WO2019181064A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-057100 2018-03-23
JP2018057100 2018-03-23

Publications (1)

Publication Number Publication Date
WO2019181064A1 true WO2019181064A1 (en) 2019-09-26

Family

ID=67986104

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/042859 WO2019181064A1 (en) 2018-03-23 2018-11-20 Endoscope

Country Status (1)

Country Link
WO (1) WO2019181064A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08195953A (en) * 1995-01-17 1996-07-30 Kokusai Electric Co Ltd Picture communication method and device
JPH09182068A (en) * 1995-12-22 1997-07-11 Kokusai Electric Co Ltd Image transmission method
JP2011254900A (en) * 2010-06-07 2011-12-22 Fujifilm Corp Endoscope system
WO2013128767A1 (en) * 2012-03-01 2013-09-06 オリンパスメディカルシステムズ株式会社 Imaging system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18910862

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18910862

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP