CA2773795C - Methods and apparatus for image processing in wireless capsule endoscopy - Google Patents


Publication number
CA2773795C
Authority
CA
Canada
Prior art keywords
color
channel
light source
image
sample value
Prior art date
Legal status
Active
Application number
CA2773795A
Other languages
French (fr)
Other versions
CA2773795A1 (en)
Inventor
Tareq Hasan Khan
Khan Arif Wahid
Current Assignee
University of Saskatchewan
Original Assignee
University of Saskatchewan
Priority date
Filing date
Publication date
Application filed by University of Saskatchewan filed Critical University of Saskatchewan
Priority to CA2773795A
Publication of CA2773795A1
Application granted
Publication of CA2773795C
Status: Active


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00011Operational features of endoscopes characterised by signal transmission
    • A61B1/00016Operational features of endoscopes characterised by signal transmission using wireless means
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/041Capsule endoscopes for imaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00025Operational features of endoscopes characterised by power management
    • A61B1/00036Means for power saving, e.g. sleeping mode

Abstract

Methods and apparatus for image processing suitable for use in wireless capsule endoscopy are provided. The image processing techniques exploit characteristic features of endoscopic images to enable low-complexity compression. A color space conversion, coupled with lossless predictive coding and variable-length coding, is employed. Sub-sampling and clipping may also be used. The described image processing can be used with both white-band imaging and narrow-band imaging.

Description

Title: METHODS AND APPARATUS FOR IMAGE PROCESSING IN WIRELESS
CAPSULE ENDOSCOPY
Field
[1] The described embodiments relate to methods and apparatus for image processing and, in particular, to image processing suitable for use in images having a dominant color, such as those produced in wireless capsule-based endoscopy.
Background
[2] In the United States alone, over 3 million people suffer from gastrointestinal (GI) diseases annually. The cause of these diseases can be difficult to diagnose, and in over one-third of cases it is never found. Endoscopy is a significant medical diagnostic technique that can assist in detecting the cause of GI disease; however, conventional endoscopy involves traversing portions of the GI tract with a wired device, which can be uncomfortable and unpleasant for the patient.
Summary
[3] In a first broad aspect, there is provided a method of processing an image captured using a color image sensor, the image comprising a plurality of samples in a plurality of color channels, the method comprising: generating a luma channel based on the plurality of color channels; generating a first chroma channel based on a difference between the luma channel and a first color channel in the plurality of color channels; generating a second chroma channel based on a difference between the luma channel and a second color channel in the plurality of color channels;
generating, using a processor, a plurality of predicted sample values for the luma channel and the first and second chroma channels using a lossless predictive coding mode; computing a plurality of difference values between the plurality of predicted sample values and the respective generated sample values; and variable length coding the plurality of difference values to produce a processed image.
[4] Each sample value of the luma channel may be generatable using (e.g., generated using only) addition and shift operations. Each sample value of the first and second chroma channels may be generatable using (e.g., generated using only) addition, negation and shift operations.
[5] Each sample value of the luma channel may be generated based on a summation of corresponding sample values in the plurality of color channels.
The summation may be of: a first color channel sample value bitshifted once to divide by two; a second color channel sample value bitshifted twice to divide by four;
and a remaining color channel sample value bitshifted twice to divide by four.
[6] The difference between the luma channel and the first color channel may be computed based on: the luma sample value bitshifted once to divide by two; and the first color channel sample value bitshifted once to divide by two.
[7] The difference between the luma channel and the second color channel may be computed by: computing a sum of the first color channel sample value and the remaining color channel sample value, bitshifting the sum three times to divide by eight, and subtracting from the bitshifted sum the second color channel sample value bitshifted twice to divide by four.
[8] The image may comprise a dominant color, and the first and second color channels may correspond to colors other than the dominant color.
[9] The color image sensor may be an RGB sensor, and the first color channel may be a green color channel and the second color channel may be a blue color channel.
[10] The method may further comprise subsampling the first chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
[11] The method may further comprise subsampling the second chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
[12] The method may further comprise clipping at least one portion of the image prior to generating the plurality of predicted sample values.
[13] The lossless predictive coding mode may be a JPEG lossless predictive coding mode. The JPEG lossless predictive coding mode may be left pixel prediction.
[14] The plurality of difference values may be variable length coded using Golomb-Rice coding.
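The prediction and entropy-coding steps described in the two paragraphs above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: it assumes left-pixel prediction within a row, a zigzag mapping of signed residuals to non-negative integers before Golomb-Rice coding, and raw transmission of each row's first sample.

```python
def left_pixel_residuals(row):
    """Left-pixel prediction (JPEG lossless predictor 1): each sample is
    predicted by its left neighbour; the first sample is passed through."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def zigzag(d):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (d << 1) if d >= 0 else -(d << 1) - 1

def golomb_rice(n, k):
    """Golomb-Rice code for non-negative n with parameter k:
    unary-coded quotient (q ones then a zero), followed by the k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0" + str(k) + "b")

# Encode one row of samples; small residuals yield short codes.
row = [100, 102, 101, 101, 99]
first, residuals = row[0], left_pixel_residuals(row)[1:]
bits = "".join(golomb_rice(zigzag(d), 2) for d in residuals)
# `first` would be transmitted raw (e.g., as 8 bits); that detail is an assumption.
```

Because endoscopic images change slowly from pixel to pixel, the residuals cluster near zero, which is exactly the regime where Golomb-Rice codes are short.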
[15] In another broad aspect, there is provided a method of generating an endoscopic image for wireless transmission, the method comprising:
illuminating a diagnostic area using at least one light source; capturing, using a color image sensor, an image of the diagnostic area under illumination, the image comprising a plurality of samples in a plurality of color channels; and processing the image according to the described methods to produce the endoscopic image for wireless transmission.
[16] The diagnostic area may be illuminated with a wide spectrum light source or at least one narrow band light source.
[17] The at least one light source may comprise a wide spectrum light source and at least one narrow band light source, and the method may further comprise switching between the wide spectrum light source and the at least one narrow band light source.
[18] The at least one light source may comprise a wide spectrum light source, and the method may further comprise switching between a wide spectrum imaging mode and a narrow band imaging mode.
[19] The at least one narrow band light source may comprise a green light source or a blue light source.
[20] In another broad aspect, there is provided an apparatus for processing an image captured using a color image sensor, the image comprising a plurality of samples in a plurality of color channels, the apparatus comprising: a color conversion module configured to: generate a luma channel based on the plurality of color channels; generate a first chroma channel based on a difference between the luma channel and a first color channel in the plurality of color channels; and generate a second chroma channel based on a difference between the luma channel and a second color channel in the plurality of color channels; a predictive encoder configured to: generate a plurality of predicted sample values for the luma channel and the first and second chroma channels using a lossless predictive coding mode;
and compute a plurality of difference values between the plurality of predicted sample values and the respective generated sample values; and a variable length coder configured to encode the plurality of difference values to produce a processed image.
[21] Each sample value of the luma channel may be generatable using (e.g., generated using only) addition and shift operations. Each sample value of the first and second chroma channels may be generatable using (e.g., generated using only) addition, negation and shift operations.
[22] Each sample value of the luma channel may be generated based on a summation of corresponding sample values in the plurality of color channels.
The summation may be of: a first color channel sample value bitshifted once to divide by two; a second color channel sample value bitshifted twice to divide by four;
and a remaining color channel sample value bitshifted twice to divide by four.
[23] The difference between the luma channel and the first color channel may be computed based on: the luma sample value bitshifted once to divide by two; and the first color channel sample value bitshifted once to divide by two.
[24] The difference between the luma channel and the second color channel may be computed by: computing a sum of the first color channel sample value and the remaining color channel sample value, bitshifting the sum three times to divide by eight, and subtracting from the bitshifted sum the second color channel sample value bitshifted twice to divide by four.
[25] The image may comprise a dominant color, and the first and second color channels may correspond to colors other than the dominant color.
[26] The color image sensor may be an RGB sensor, and the first color channel may be a green color channel and the second color channel may be a blue color channel.
[27] The apparatus may further comprise a subsampler configured to subsample the first chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
[28] The subsampler may be configured to subsample the second chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
[29] The apparatus may further comprise a clipper configured to clip at least one portion of the image prior to generating the plurality of predicted sample values.
[30] The lossless predictive coding mode may be a JPEG lossless predictive coding mode. The JPEG lossless predictive coding mode may be left pixel prediction.
[31] The plurality of difference values may be variable length coded using Golomb-Rice coding.
[32] In another broad aspect, there is provided an apparatus for generating an endoscopic image for wireless transmission, the apparatus comprising: at least one light source configured to illuminate a diagnostic area; a color image sensor configured to capture an image of the diagnostic area under illumination, the image comprising a plurality of samples in a plurality of color channels; and the image processing apparatus as described herein, configured to generate the endoscopic image for wireless transmission.
[33] The at least one light source may comprise a wide spectrum light source or at least one narrow band light source.
[34] The at least one light source may comprise a wide spectrum light source and at least one narrow band light source, and the apparatus may comprise a switch for selecting between the wide spectrum light source and the at least one narrow band light source.
[35] The at least one light source may comprise a wide spectrum light source, and the apparatus may further comprise at least one narrow band color filter.
[36] The at least one narrow band light source may comprise a green light source or a blue light source.
Brief Description of the Drawings
[37] A preferred embodiment of the present invention will now be described in detail with reference to the drawings, in which:
FIG. 1 illustrates an exemplary WCE system in accordance with at least some embodiments;
FIGS. 2A and 2B are 3D plots of RGB component values (i.e., red, green and blue) for the pixel positions of a WBI and an NBI endoscopic image, respectively;
FIGS. 2C and 2D illustrate corresponding 3D plots of the same images as FIGS. 2A and 2B, with the equivalent YEF component values;
FIGS. 3A and 3B are histograms for a WBI and an NBI endoscopic image, respectively, following conversion into the YEF color space;
FIG. 4 illustrates a YEF812 subsampling scheme;
FIG. 5 illustrates the change in dX for an endoscopic image, in all three YEF
components;
FIG. 6 is a simplified block diagram of an exemplary DPCM encoder;
FIGS. 7A and 7B are histograms of dY, dE and dF of exemplary WBI and NBI
endoscopic images, respectively;
FIG. 8 is a plot showing the length of Golomb-Rice codes for various integer values;
FIG. 9 illustrates an example of an image with corner clipping applied;
FIG. 10 is a simplified block diagram of an exemplary image processor in accordance with at least some embodiments;
FIG. 11 is a flow diagram for an exemplary method of processing an image captured using a color image sensor;
FIG. 12 is a flow diagram for an exemplary method of generating an endoscopic image for wireless transmission; and
FIG. 13 is a flow diagram for another exemplary method of generating an endoscopic image for wireless transmission.
Description of Exemplary Embodiments
[38] It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements or steps. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail since these are known to those skilled in the art. Furthermore, it should be noted that this description is not intended to limit the scope of the embodiments described herein, but rather as merely describing one or more exemplary implementations.
[39] Recently, wireless capsule endoscopy (WCE) has been developed to capture images of the gastrointestinal tract for medical diagnostic purposes. In WCE, a capsule comprising an imaging device and a wireless transmitter is swallowed by the patient, whereupon the capsule transmits images periodically as it passes through the patient's gastrointestinal tract. The transmitted images may be captured by an external device for review by a medical practitioner.
WCE is generally more comfortable for patients than wired endoscopy. Moreover, a complete examination of the entire GI tract, including the small intestine, can be performed using this technique, which may be more difficult or even impossible with wired endoscopy. WCE can be used to detect many diseases of the GI tract, such as bleeding, lesions, ulcers and tumours.
[40] Referring now to FIG. 1, there is illustrated an exemplary WCE system. WCE system 100 comprises a WCE capsule 110, which itself generally comprises a digital image sensor 120 (optionally provided with a lens 115), a light source 125 for illuminating the GI tract so that image sensor 120 can capture images, an image processor 130, a battery 145, a wireless transmitter or transceiver 135 and an antenna 140. Transceiver 135 can be used to communicate with a transceiver 150 located external to the patient. Signals received by transceiver 150 can be communicated to and stored at a workstation 160, which may be a personal computer, for example.
[41] As WCE capsule 110 must be swallowed by the patient, each of its elements must be limited in size so that the capsule remains compact. In particular, battery 145 must be kept small. Generally, battery 145 is operable to supply power to WCE capsule 110 for approximately 8 to 10 hours, in some cases longer, which can be sufficient for WCE capsule 110 to traverse the GI tract of a patient. The size constraint of battery 145 and the need to supply power for the entirety of the capsule's passage through the GI tract impose a trade-off: the image processing and wireless transmission performed by WCE capsule 110 must be size and energy efficient, while maintaining sufficient image quality to enable accurate medical diagnosis.
[42] Light source 125 can be a light emitting diode (LED), for example. When capturing endoscopic images, several changeable light modes can be used, such as white-band imaging (WBI), narrow-band imaging (NBI), and, in some cases, auto-fluorescence imaging (AFI). In WBI, broad spectrum light (e.g., white light) is used to illuminate the GI surface. In NBI, two or more discrete bands of light can be used.
For example, in GI imaging, one blue and one green wavelength of light can be used (e.g., with center wavelengths at 415 nm and at 540 nm). Narrow band blue light can be suitable for displaying superficial capillary networks, while narrow band green light can be suitable for displaying subepithelial vessels. When the two are combined, a high contrast image of the tissue surface can be produced.
[43] In some cases, light source 125 may comprise a plurality of light sources, which can be controlled remotely (e.g., from workstation 160) to switch between narrow and wide band imaging. For example, a remotely controlled switch may be provided to selectively switch a WBI light source and a NBI light source on and off, as needed.
[44] In some other cases, a WBI light source can be used, and filters can be positioned in front of image sensor 120 to effect NBI lighting conditions. For example, the filters may be mechanically moved in and out of position using actuators provided at the WCE capsule (not shown). Alternatively, different color filters may cover different portions of image sensor 120, which may be activated or read from as needed.
[45] To enable remote switching between WBI and NBI modes, transceiver 135 and transceiver 150 may be configured to provide duplex communication.
[46] A WCE capsule generally traverses the GI tract via peristaltic contraction.
Accordingly, images of clinically relevant and important tissues may be missed as the capsule is propelled over them. However, one or more techniques can be employed to ensure that relevant tissues are not missed. For example, a high sample rate (e.g., number of images per second) can be maintained, multiple imaging sensors can be used (e.g., oriented in different directions), and motility devices (e.g., miniature robotic arms) can be used. Each of these approaches increases the size and power requirements for the WCE capsule. Accordingly, the importance of efficient image processing and wireless transmission may be further increased.
[47] Described herein are methods and apparatus for image processing suitable for use in a WCE capsule that may support one or both of the WBI and NBI modes.
[48] The image processing methods and apparatus can be interfaced directly with commercially-available RGB image sensors that support the digital video port (DVP) interface, without the need for an intermediate buffer memory to fill blocks or frames (DVP devices generally output pixels in a raster scan fashion). The described image processing methods and apparatus can also be applied in other applications.
[49] In contrast, conventional image processing and image compression techniques generally work on a block-by-block basis. For example, in image compression based on the Discrete Cosine Transform (DCT), 4x4 or 8x8 pixel blocks need to be accessed from the image sensor. Since commercial CMOS image sensors send pixels in a row-by-row fashion (and do not provide their own buffer memory), buffer memory needs to be provided to support such DCT-based algorithms.
[50] For example, in order to start processing the first 8x8 block of a 256x256 image, the compressor must wait until the first 8x8 block is available, which means that seven full rows of the image, plus the first eight pixels of the eighth row, must be received and stored (256 x 7 + 8 = 1800 pixels, assuming progressive scan). Hence, a 5.3 kB buffer memory may seem sufficient (assuming 24 bits per pixel for a color image). However, without a full-size buffer memory (i.e., 192 kB to store the entirety of the 256x256 image), the image sensor output would need to be stopped (or paused) until the stored pixels are processed; otherwise no additional memory would be available to store new pixels. Alternatively, two parallel buffer memories of size 5.3 kB can be used, so that while the compressor works with the pixels of one buffer, the new pixels continuously received from the image sensor can be stored in the other buffer. However, such an approach introduces timing challenges in juggling between compression and input. Moreover, buffer memory can occupy significant area and consume power.
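The buffer-size arithmetic in the paragraph above can be checked directly. A small sketch, assuming a 256x256 frame at 24 bits per pixel as stated in the text:

```python
# Pixels that must arrive before the first 8x8 block is complete:
# seven full rows plus the first eight pixels of the eighth row.
pixels_needed = 256 * 7 + 8

block_buffer_bytes = pixels_needed * 3   # 24 bpp = 3 bytes per pixel
full_frame_bytes = 256 * 256 * 3         # buffer for the entire frame

print(pixels_needed)                         # 1800
print(round(block_buffer_bytes / 1024, 1))   # 5.3 (kB)
print(full_frame_bytes // 1024)              # 192 (kB)
```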
[51] Beyond memory, the computational cost associated with such DCT-based algorithms, which involve multiplication, addition, data scheduling and other operations, can result in high area and power consumption.
[52] Other compression algorithms such as LZW may require content addressable memory (CAM) to operate, as well as memory to store a coder dictionary.
[53] The described methods and apparatus overcome many of the disadvantages of conventional image processing techniques when applied to wireless capsule endoscopy.
[54] The described image processing is low complexity and thus consumes little power, conserving the capsule's limited battery life as it travels through the GI tract. This enables the image resolution and frame rate (in frames per second, or FPS) of the image sensor to be increased, for example. Other features, such as multi-camera imaging, can also be enabled using the conserved power. Both WBI and NBI are supported, and the resulting processed images are sufficiently compressed to fit within the limited bandwidth of a medically implantable wireless transceiver. For example, the Zarlink ZL70081 is one wireless transmitter compatible with the medical implant communication service (MICS), and supports a maximum data rate of 2.7 Mbps. The described image processing techniques can sustain a target frame rate of 2-5 FPS within this bandwidth.
[55] Quality of the reconstructed image can be ensured, as the described image processing results in reconstructed images with a peak signal-to-noise ratio (PSNR) of at least 35 dB. Similar satisfactory results occur with other evaluation criteria, such as the structural similarity index (SSIM), visual information fidelity (VIF), and visual signal-to-noise ratio (VSNR).
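As a point of reference for the PSNR figure quoted above, the metric can be computed as follows. This is an illustrative sketch (the patent does not give this code); it takes flat pixel sequences and assumes an 8-bit peak value of 255:

```python
import math

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

A reconstruction that differs from the original by one gray level at every pixel gives roughly 48.1 dB, well above the 35 dB floor cited above.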
[56] Analysis of WBI and NBI images reveals that endoscopic images generally exhibit dominance in the red color component, and relatively lower amounts of the green and blue components. Accordingly, a color conversion can be employed that takes advantage of this characteristic by consolidating image information common to all three color channels into a single channel.
[57] In conventional image processing, luminance-chrominance (or luma-chroma) color spaces are used to take advantage of the human visual system's sensitivity to changes in brightness and relative lack of sensitivity to changes in color.
[58] In particular, due to the presence of relatively more rod cells (which are sensitive to brightness) than cone cells (sensitive to color), the human eye is more sensitive to small changes in brightness than to changes in color. Thus, the loss of information relating to changes in color (e.g., chrominance components) can be tolerated without dramatically degrading image quality.
[59] One example of such a color space is the YUV color space, in which Y
represents luminance and two chrominance components are represented by U and V. U represents the difference between the blue channel and luminance, while V
represents the difference between the red channel and luminance.
[60] However, conversion between the RGB color space, typically output by image sensors, and the YUV color space generally requires multiplication and the use of constants that are not powers of two. The use of such constants, and the resulting multiplication, necessitates additional processing hardware, which in turn requires area and power.
[61] Described herein is a color space conversion that does not rely on arbitrary constants or multiplication operations.
[62] The described color space can be defined as YEF, in which a luminance component (i.e., luma) is represented by Y, a first chrominance component (chroma) is represented by E, and a second chrominance component is represented by F.
[63] Generally, E may represent the difference between luminance and the green component (in short, chroma-E), and F may primarily represent the difference between luminance and the blue component (in short, chroma-F). These relationships are shown in Equations (1), (2) and (3).
Y = R/2 + G/4 + B/4                                  (1)
E = Y/2 - G/2 + 128 = R/4 - 3G/8 + B/8 + 128         (2)
F = (R + G)/8 - B/4 + 128                            (3)
[64] From Equations (1), (2), and (3), it can be seen that the conversion between RGB and YEF color spaces involves only a few additions, negations and shift operations using the RGB components.
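As a concrete illustration of Equations (1), (2) and (3), the forward conversion reduces to shifts and adds. A minimal Python sketch; the use of truncating right-shifts for the divisions is an assumption, as the text does not fix the rounding behaviour:

```python
def rgb_to_yef(r, g, b):
    """Shift-only RGB -> YEF conversion per Equations (1)-(3).
    Right-shifts implement the divisions by powers of two (truncating)."""
    y = (r >> 1) + (g >> 2) + (b >> 2)   # Y = R/2 + G/4 + B/4
    e = (y >> 1) - (g >> 1) + 128        # E = Y/2 - G/2 + 128
    f = ((g + r) >> 3) - (b >> 2) + 128  # F = (G + R)/8 - B/4 + 128
    return y, e, f
```

For example, a pure-red pixel maps as rgb_to_yef(255, 0, 0) -> (127, 191, 159): the red-dominant content lands in Y while E and F stay near their 128 midpoint, consistent with the red dominance noted above.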
[65] Referring now to FIGS. 2A and 2B, there are illustrated 3D plots of RGB component values (i.e., red, green and blue) for the pixel positions of a WBI and an NBI endoscopic image, respectively. The x- and y-axes correspond to the row and column of each pixel, while the z-axis corresponds to the pixel value for each color.
[66] It can be observed that, in the RGB color space, the changes in pixel values are high, and there is comparable information content in all three color components.
[67] Referring now to FIGS. 2C and 2D, there are illustrated corresponding 3D plots of the same images as in FIGS. 2A and 2B, with the equivalent YEF component values.
[68] It can be observed that, in the YEF color space, there is less change in pixel values in the two chrominance components (E and F), which indicates that the E
and F components contain less information.
[69] Through experimentation, it has been determined that the intensity distribution of the green component in RGB endoscopic images is very similar to that of the blue component (as can be seen in FIGS. 2A and 2B). It has been further determined that the intensity distribution of luminance (Y) is similar to the intensity distribution of green and blue RGB components. Accordingly, subtracting green and blue components from the luminance component can produce differential pixel values that are generally small and exhibit little entropy.
Color space   Component   WBI StdDev   NBI StdDev   WBI Entropy   NBI Entropy
RGB           R           46.6         44.1         7.1           7.2
              G           39.4         39.6         7.0           7.0
              B           34.7         36.1         6.7           6.9
YUV           Y           34.3         34.6         6.8           6.8
              U           7.0          3.0          4.4           3.2
              V           9.6          5.6          4.9           4.1
YCoCg         Y           38.8         39.5         7.0           7.0
              Co          13.9         7.0          5.5           4.5
              Cg          5.3          3.3          4.1           3.6
YEF           Y           38.8         39.5         7.0           7.0
              E           2.7          1.7          3.2           2.6
              F           4.7          2.1          3.9           2.8
Table 1
[70] Table 1 illustrates the average standard deviation and entropy for each of the components of several color spaces, for a plurality of both WBI and NBI
exemplary endoscopic images. It can be observed that the YEF color space has the lowest standard deviation and entropy in its chroma components. The low entropy suggests that the YEF color space lends itself well to compression.
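The entropy figures in Table 1 are first-order (Shannon) entropies of the component samples, in bits per sample. A sketch of the computation (the patent does not give this code):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """First-order Shannon entropy in bits per sample."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant channel has entropy 0, while a uniformly distributed 8-bit channel approaches 8 bits per sample; the chroma entropies of roughly 2-4 bits in Table 1 sit near the low end of that range, which is why they compress well.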
[71] Referring now to FIGS. 3A and 3B, there are illustrated the histograms for a WBI and an NBI endoscopic image, respectively, following conversion into the YEF
color space.
[72] It can be observed that the variations of the E and F components are relatively narrow, due in part to the color homogeneity of the endoscopic images. Accordingly, in some embodiments, subsampling of the chroma components can be employed to reduce the amount of information that is to be compressed and transmitted.
[73] Subsampling can be performed, for example, by selecting one out of every nth pixel to be encoded. For example, in one subsampling scheme, defined as YEF811, for every eight Y component samples, one E component sample and one F
component sample are used. In another example scheme, defined as YEF812, for every eight Y component samples, one E component sample and two F component samples are used. The YEF812 subsampling scheme is illustrated in FIG. 4.
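As an illustration, the subsampling described above can be sketched in a few lines. The function names and the list-of-rows representation are assumptions for illustration, not the patent's implementation:

```python
def subsample_chroma(channel, n):
    """Keep one sample out of every n along each row (illustrative)."""
    return [row[::n] for row in channel]

def yef812_subsample(y, e, f):
    # YEF812: for every 8 Y samples, keep 1 E sample and 2 F samples,
    # i.e. E is decimated by 8 and F by 4 along each row.
    return y, subsample_chroma(e, 8), subsample_chroma(f, 4)
```

Here E retains one sample per eight Y samples and F retains two (one per four), matching the YEF812 ratios.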
[74] The effect of various subsampling schemes on the quality of exemplary reconstructed WBI endoscopic images, as determined by various quality metrics, is shown in Table 2.
                 Luminance                        Chroma
Scheme       SSIM    VIF     PSNR   VSNR      PSNR(E)   PSNR(F)
YEF888       0.999   0.988   57.3   58.0      60.2      60.5
YEF422       0.998   0.986   57.0   55.5      57.8      58.9
YEF412       0.998   0.984   56.5   54.7      54.6      58.8
YEF814       0.998   0.977   55.3   52.5      51.0      58.3
YEF822       0.998   0.978   55.9   53.2      54.6      55.0
YEF812       0.998   0.972   54.7   51.4      51.0      54.8
YEF811       0.998   0.957   53.6   46.1      51.0      50.4
YEF16.1.2    0.997   0.952   52.3   43.9      47.9      50.4

Table 2
[75] The effect of various subsampling schemes on the quality of exemplary reconstructed NBI endoscopic images, as determined by various quality metrics, is shown in Table 3.
                 Luminance                        Chroma
Scheme       SSIM    VIF     PSNR   VSNR      PSNR(E)   PSNR(F)
YEF888       0.998   0.989   57.1   69.2      60.7      60.0
YEF422       0.998   0.989   57.1   66.2      58.3      59.2
YEF412       0.998   0.988   57.0   66.2      55.1      59.2
YEF814       0.998   0.987   56.9   66.2      51.8      59.1
YEF822       0.998   0.986   56.7   66.4      55.1      56.7
YEF812       0.998   0.985   56.8   66.5      51.8      56.7
YEF811       0.998   0.980   56.5   59.4      51.8      53.5
YEF16.1.2    0.998   0.978   56.5   59.3      49.4      53.5
YEF16.1.1    0.998   0.973   56.2   52.1      49.4      50.9

Table 3
[76] It can be seen from Table 2 and Table 3 that the YEF888 scheme yields the best performance, which is to be expected as no sub-sampling is performed. For WBI images, the YEF16.1.2 scheme (in which, for every 16 Y components, one E
and two F components are taken) produces the poorest result, due to heavier sub-sampling. In the case of NBI images, it is YEF16.1.1 that exhibits the poorest results in the above table.
[77] In the examples above, the YEF814, YEF822, and YEF812 schemes may offer a suitable trade-off between compression ratio and image quality for the WBI
mode, for example.
[78] A further characteristic of endoscopic images is that changes between adjacent pixel values are generally small. In general, component values tend to change gradually and slowly, as sharp edges are rare in endoscopic images. The change in component values (dX) with respect to its adjacent left pixel in any row can be expressed by Equation (4):
dX(r,c) = X(r,c) - X(r,c-1)    (4)

where X(r,c) is the pixel value at row r and column c, and X(r,c-1) is its adjacent left pixel value. In this example, X can refer to Y, E, or F component values.
[79] Referring now to FIG. 5, there is illustrated the change in dX for an endoscopic image, in all three YEF components.
[80] As noted above, the difference in pixel values (dX) with respect to the adjacent left pixel is found to be small in endoscopic images. As a result, a form of differential pulse code modulation (DPCM) may be used in some embodiments to encode the pixel values efficiently. DPCM is a lossless encoding scheme with little computational complexity.
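A minimal software model of left-pixel DPCM as described in [78]-[80] can be sketched as follows; the patent targets a hardware implementation, and the function names here are illustrative:

```python
def dpcm_encode_row(row):
    """Left-pixel DPCM: the first pixel is sent as-is, then each
    subsequent symbol is the difference dX = X[c] - X[c-1]."""
    out = [row[0]]
    for c in range(1, len(row)):
        out.append(row[c] - row[c - 1])
    return out

def dpcm_decode_row(codes):
    """Invert the prediction by accumulating differences (lossless)."""
    row = [codes[0]]
    for d in codes[1:]:
        row.append(row[-1] + d)
    return row
```

Because adjacent endoscopic pixels differ little, most of the encoded symbols cluster near zero, which is what makes the subsequent variable-length coding effective.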
[81] Referring now to FIG. 6, there is illustrated a block diagram of an exemplary DPCM encoder. DPCM encoder 600 comprises a prediction module 605, an addition/subtraction module 610 and a symbol encoder 615.
[82] Prediction module 605 algorithmically selects a predicted next pixel value based on an input pixel (and, optionally, one or more previous input pixels).
The predicted pixel value is subtracted from the input pixel value at 610 and the difference dX is encoded for transmission by symbol encoder 615.
[83] One form of DPCM that may be used in some embodiments is a JPEG lossless prediction mode. In particular, JPEG lossless prediction mode-1 (i.e., left pixel prediction) can be used, as it can be efficiently implemented in hardware and is suitable for processing raster-scanned pixels without requiring a buffer memory.
[84] Further compression of DPCM-encoded data can be performed through the use of a suitable variable-length encoding scheme. Such variable-length encoding may also incorporate error correction or detection aspects.
[85] Referring now to FIGS. 7A and 7B, there are illustrated histograms of dY, dE
and dF of exemplary WBI and NBI endoscopic images, respectively. The histograms demonstrate what is generally a two-sided geometric distribution.
[86] For geometric distributions, Golomb coding can be used to provide an optimum code length.
[87] For the purposes of hardware efficiency and ease of implementation, Golomb-Rice coding may be used as an alternative to Golomb coding, as the former exhibits similar compression efficiency to the latter.
[88] Golomb-Rice coding operates on non-negative integers; however, dX can be positive or negative. Accordingly, the values of dX can be mapped to m_dX using Equation (5):

m_dX = 2*dX,       when dX >= 0
m_dX = 2*|dX| - 1, when dX < 0       (5)
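The mapping of Equation (5) interleaves non-negative differences onto the even integers and negative differences onto the odd integers, so that values near zero stay small. A one-line sketch (the function name is an assumption):

```python
def map_signed(dx):
    """Map a signed difference dX to a non-negative integer per Equation (5):
    non-negative values map to even integers, negative values to odd integers."""
    return 2 * dx if dx >= 0 else 2 * abs(dx) - 1
```

The mapping is a bijection from the range -128..+127 onto 0..255, which is why eight bits suffice for the mapped values.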
[89] Experimental verification indicates that the values of dY generally fall in the range between +127 and -128, in part due to the absence of sharp transitions between two consecutive pixels in endoscopic images. Moreover, dE and dF values generally fall within an even narrower range. Accordingly, integers in m_dX can be mapped to the range between 0 and 255, which can be expressed in binary form using eight bits.
[90] An optimized Golomb-Rice coding scheme can be defined as follows:

I = 2^8 = 256    (6)
M = 2^k          (7)

[91] where M is a predefined integer and a power of two. Then m_dX can be divided by M as follows:

q = floor(m_dX / M)    (8)
r = m_dX mod M         (9)
[92] The quotient q can be expressed in unary using q+1 bits. The remainder r, expressed in binary using k bits, can be concatenated with the unary code.
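The quotient-plus-remainder construction in [91]-[92] can be sketched as follows, assuming the unary convention of q ones followed by a terminating zero (both the convention and the function name are assumptions for illustration):

```python
def golomb_rice_encode(m_dx, k):
    """Golomb-Rice codeword for a mapped (non-negative) integer:
    unary quotient in q+1 bits ('1' * q + '0'), then the remainder
    in k binary bits, per Equations (7)-(9)."""
    q, r = divmod(m_dx, 1 << k)       # M = 2**k, q = m_dX // M, r = m_dX mod M
    remainder_bits = format(r, "b").rjust(k, "0") if k else ""
    return "1" * q + "0" + remainder_bits
```

Note that the codeword length is q + 1 + k bits, so small mapped values (frequent in DPCM residuals) get short codes.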
[93] Generally, the length of the Golomb-Rice code can be limited by using a parameter g_limit. In particular, if:

q >= g_limit - log2(I) - 1    (10)

then the unary code of g_limit - log2(I) - 1 can be prepared. This can be used as an "escape code" for the decoder, and can be followed by the binary representation of m_dX in log2(I) bits.
[94] The length of a Golomb-Rice code (gr_len) can be calculated using Equations (11) and (12):

j = g_limit - log2(I) - 1            (11)

gr_len = q + 1 + k,  when q < j
gr_len = g_limit,    when q >= j     (12)
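Under the definitions in [93]-[94], the code length can be modeled as follows (parameter names are illustrative; with I = 256, log2(I) is 8):

```python
def gr_code_length(m_dx, k, g_limit=32, n_bits=8):
    """Codeword length of a limited Golomb-Rice code per Equations
    (10)-(12), where the mapped integers fit in n_bits (I = 2**n_bits)."""
    j = g_limit - n_bits - 1          # Equation (11)
    q = m_dx >> k                     # quotient for M = 2**k
    if q < j:
        return q + 1 + k              # regular Golomb-Rice code
    return g_limit                    # escape code: unary prefix + n_bits of m_dX
```

The limit caps the worst-case codeword at g_limit bits, which keeps the hardware output buffer bounded.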
[95] The maximum length of the Golomb-Rice code (g_limit) can be selected to be, for example, 32.
[96] Referring now to FIG. 8, there is illustrated a plot showing the length of Golomb-Rice codes for various integer values, as a function of k. From the histograms of FIGS. 7A and 7B, it can be observed that the most frequently occurring value of dE and dF is 0, followed by other values close to zero.
Generally, shorter codes can be assigned to zero and near-zero values to facilitate good compression. Accordingly, k = 1 can be selected for encoding the mapped integers for dE and dF.
[97] Since dY generally spans a wider range of values in NBI images than in WBI images (e.g., due to the presence of sharper edges), the k parameter can be selected differently depending on use case. Exemplary k parameters are illustrated in Table 4.
m_dY    m_dE    m_dF

Table 4
[98] Generally, in wireless endoscopy, the image sensor may be encased in a capsule-shaped tube. Due to the generally rounded shape of the capsule, corner areas in a captured image may be distorted (e.g., stretched) and thus less reliable for diagnostic purposes. When such areas can be safely disregarded, additional compression can be gained by clipping the distorted areas.
[99] From the implementation point of view, it is generally easier to clip four corners of an image along a straight diagonal line, rather than applying a radial cut.
The diagonal line can be chosen to encompass the radial line that might otherwise be desired, and some small additional number of pixels. Clipping is not limited to one or more corners. In other embodiments, other areas of the image may be discarded (e.g., center portion of the image) according to various geometries.
[100] If corner clipping is applied, once the horizontal and vertical lengths of the corner cut are determined, the clipping technique can be implemented with relatively few combinational logic blocks.
[101] For example, column and row pixel positions can be identified to determine whether they fall within a desired viewing region. If the pixel is within the viewing region, it can be sampled; otherwise, the pixel may be ignored or discarded (e.g., not sampled).
[102] FIG. 9 illustrates an example of an image with corner clipping applied, in which the areas shaded in black represent clipped pixels.
[103] Once L is determined as shown in Fig. 9, it is simple to implement the clipping algorithm in hardware with just a few combinational logic blocks. As seen from the pseudo code below, the column and row pixel positions are checked to see whether they fall into the desired visual region; if the position is inside, it is sampled; if not, the pixel is ignored (i.e., not sampled).
is_inside_visual_area := False
If (cY < L) {
    If cX >= (L - cY) And cX < (W - (L - cY))
        is_inside_visual_area := True
}
Else If (cY >= (W - L)) {
    If cX >= ((cY - (W - L)) + 1) And cX < (W - ((cY - (W - L)) + 1))
        is_inside_visual_area := True
}
Else
    is_inside_visual_area := True
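A software model of the same test, assuming a square W x W frame as the pseudocode suggests (function and parameter names are illustrative):

```python
def is_inside_visual_area(cx, cy, w, clip_l):
    """Diagonal corner-clip test: True if pixel (cx, cy) lies inside the
    visual region of a w x w frame with corner cut length clip_l."""
    if cy < clip_l:                       # top corner rows
        return (clip_l - cy) <= cx < (w - (clip_l - cy))
    if cy >= w - clip_l:                  # bottom corner rows
        m = (cy - (w - clip_l)) + 1
        return m <= cx < (w - m)
    return True                           # middle rows: fully visible
```

Because the comparisons involve only additions and subtractions against fixed bounds, the test maps naturally onto a small amount of combinational logic.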
[104] Referring now to FIG. 10, there is illustrated a simplified block diagram of an exemplary image processor in accordance with at least some embodiments.
[105] Image processor 1000 generally comprises a color conversion module 1020, a clipper 1030, a subsampler 1040, a predictive encoder 1050 and a variable length coder 1060. Image processor 1000 may be implemented as a hardware processor, such as an application specific integrated circuit (ASIC), in a field programmable gate array (FPGA) device, or the like. In some cases, image processor 1000 may be implemented as software instructions stored on a non-transitory computer readable medium, wherein the software instructions, when executed by a general purpose processor, cause the processor to perform the described functions. In some embodiments, portions of image processor 1000 may be implemented in software and other portions implemented in hardware.
[106] Color conversion module 1020 may receive image data, for example from an image sensor of a WCE capsule, and convert from a first color space to a second color space. For example, color conversion module 1020 may convert RGB pixel data to the YEF color space, according to Equations (1), (2) and (3).
Accordingly, color conversion module 1020 may be configured to generate a luma channel based on the plurality of RGB color channels, generate a first chroma channel based on a difference between the luma channel and a first color channel in the plurality of color channels (e.g., green component), and generate a second chroma channel based on a difference between the luma channel and a second color channel in the plurality of color channels (e.g., blue).
[107] Each sample value of the luma channel may be generated using addition and shift operations. Similarly, each sample value of the chroma channels may be generated using addition, negation and shift operations, as described herein.
[108] Moreover, each sample value of the luma channel may be generated based on a summation of corresponding sample values in the plurality of color channels, as described herein. The summation may be of: a first color channel sample value bitshifted once to divide by two; a second color channel sample value bitshifted twice to divide by four; and a remaining color channel sample value bitshifted twice to divide by four.
[109] Correspondingly, the difference between the luma channel and the first color channel may be computed based on: the luma sample value bitshifted once to divide by two; and the first color channel sample value bitshifted once to divide by two.
Similarly, the difference between the luma channel and the second color channel may be computed by: computing a sum of the first color channel sample value and the remaining color channel sample value, bitshifting the sum three times to divide by eight, and subtracting from the bitshifted sum the second color channel sample value bitshifted twice to divide by four.
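The shift-and-add arithmetic described in [107]-[109] can be sketched as follows. The assignment of green as the "first" channel and blue as the "second" follows the examples in [106], but is otherwise an assumption, as is the function name:

```python
def rgb_to_yef(r, g, b):
    """Shift-and-add color conversion per [108]-[109].
    Assumed channel roles: first = G, second = B, remaining = R."""
    y = (g >> 1) + (r >> 2) + (b >> 2)   # Y = G/2 + R/4 + B/4
    e = (y >> 1) - (g >> 1)              # E: luma/2 minus first channel/2
    f = ((g + r) >> 3) - (b >> 2)        # F = (G + R)/8 - B/4
    return y, e, f
```

No multiplier is needed: every term is a bitshift followed by an addition or subtraction, which is what makes the conversion inexpensive in hardware.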
[110] The first and second color channels can be selected to correspond to colors other than a dominant color in the image (e.g., red, for endoscopic images).
[111] Optionally, YEF image data may be provided to clipper 1030, which may clip at least one portion of the image according to a clipping algorithm, as described herein. In some embodiments, clipping may be performed on RGB image data, prior to conversion into the YEF color space.
[112] Optionally, the YEF image data may be provided to subsampler 1040, which may subsample YEF image data according to one or more selected subsampling schemes. For example, subsampler 1040 may be configured to subsample the YEF image data according to a YEF812 scheme. Accordingly, subsampler 1040 may subsample the first chroma channel relative to the luma channel, or subsample the second chroma channel relative to the luma channel.
[113] YEF image data may further be provided to predictive encoder 1050, which may be a DPCM encoder, such as a JPEG lossless predictive coder. In some cases, predictive encoder 1050 may employ left pixel prediction based on JPEG lossless predictive coding mode-1.
[114] Image data encoded by predictive encoder 1050 may further be provided to variable length coder 1060, which may be a Golomb-Rice encoder, for example, as described herein.
[115] Following variable length coding, the image data may be output to produce a processed image, e.g., for transmission by a WCE capsule transceiver.
[116] Some components have been omitted so as not to obscure description of the exemplary embodiments. For example, a parallel to serial converter (P2S) may be provided at the output of image processor 1000, to format output data into serial data suitable for wireless transmission.
[117] Experimental implementation and verification of image processor 1000 indicates an average compression ratio (using a YEF812 sub-sampling scheme) of 80.4% for WBI images and 79.2% for NBI images, with an average PSNR of 43.7 dB. Accordingly, resultant images of QVGA resolution can be transmitted at a frame rate of 5 FPS in the experimental implementation.
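As a back-of-envelope check of the reported figures, assuming 24-bit RGB QVGA frames and that the compression ratio denotes the fraction of data removed (both assumptions, not stated in this passage):

```python
def required_bitrate_mbps(width, height, bits_per_pixel, compression_ratio, fps):
    """Wireless bitrate needed after compression, where compression_ratio
    is the fraction of raw data removed by the compressor."""
    raw_bits_per_frame = width * height * bits_per_pixel
    return raw_bits_per_frame * (1.0 - compression_ratio) * fps / 1e6

# QVGA (320 x 240), 24-bit RGB, 80.4% compression, 5 frames per second
rate = required_bitrate_mbps(320, 240, 24, 0.804, 5)
```

Under these assumptions the required channel rate comes out to roughly 1.8 Mbps, within reach of low-power capsule transceivers.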
[118] Referring now to FIG. 11, there is illustrated a flow diagram for an exemplary method of processing an image captured using a color image sensor, in accordance with at least some embodiments.
[119] Method 1100 begins with receiving image data from an image sensor at 1105.
In some cases, the image sensor may be an RGB image sensor and the image data may be RGB image data.
[120] Optionally, at 1110, the image data may be clipped, for example by clipper 1030. Alternatively, image data may be clipped following conversion to the YEF
color space.
[121] At 1115, a luma channel in the YEF color space may be generated from the RGB image data, for example by color conversion module 1020, as described herein.
[122] At 1120, a first chroma channel may be generated from the RGB image data, for example by color conversion module 1020, as described herein. Similarly, a second chroma channel may be generated from the RGB image data at 1125.
[123] Optionally, the first chroma channel image data may be subsampled at 1130, for example by subsampler 1040, as described herein. Similarly, the second chroma channel image data optionally may be subsampled at 1135.
[124] At 1140, a plurality of difference values between the plurality of predicted sample values and the respective generated sample values may be computed, for example by predictive encoder 1050, as described herein.
[125] At 1150, the plurality of difference values may be variable length encoded, for example by variable length coder 1060, as described herein.
[126] Image data may be output at 1160 to be transmitted wirelessly, for example.
[127] Referring now to FIG. 12, there is illustrated a flow diagram for an exemplary method of generating an endoscopic image for wireless transmission, in accordance with at least some embodiments.
[128] Method 1200 begins at 1210, by illuminating a diagnostic area, for example a GI tract of a patient, using a light source, such as light source 125. In particular, the light source may be a wide spectrum light source, producing white light.
[129] At 1220, an image of the diagnostic area is captured using an image sensor, such as image sensor 120.
[130] Upon capturing the image, method 1200 may process the image, for example by proceeding to 1105 of method 1100.
[131] Referring now to FIG. 13, there is illustrated a flow diagram for another exemplary method of generating an endoscopic image for wireless transmission, in accordance with at least some embodiments.
[132] Method 1300 begins at 1310, by illuminating a diagnostic area, for example a GI tract of a patient, using a light source, such as light source 125. In particular, the light source may comprise at least one narrow band light source, as described herein. The narrow band light source may, for example, produce green or blue light.
[133] At 1320, an image of the diagnostic area is captured using an image sensor, such as image sensor 120.
[134] Upon capturing the image, method 1300 may process the image, for example by proceeding to 1105 of method 1100.
[135] The present invention has been described here by way of example only, while numerous specific details are set forth herein in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that these embodiments may, in some cases, be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the description of the embodiments. Various modifications and variations may be made to these exemplary embodiments without departing from the spirit and scope of the invention, which is limited only by the appended claims.
Claims (44)

We claim:
1. A method of processing an image captured using a color image sensor, the image comprising a plurality of samples in a plurality of color channels, the method comprising:
generating a luma channel based on the plurality of color channels;
generating a first chroma channel based on a difference between the luma channel and a first color channel in the plurality of color channels; generating a second chroma channel based on a difference between the luma channel and a second color channel in the plurality of color channels;
generating, using a processor, a plurality of predicted sample values for the luma channel and the first and second chroma channels using a lossless predictive coding mode;
computing a plurality of difference values between the plurality of predicted sample values and the respective generated sample values; and variable length coding the plurality of difference values to produce a processed image.
2. The method of claim 1, wherein each sample value of the luma channel is generatable using addition and shift operations.
3. The method of claim 1 or claim 2, wherein each sample value of the first and second chroma channels is generatable using addition, negation and shift operations.
4. The method of any one of claims 1 to 3, wherein each sample value of the luma channel is generated based on a summation of corresponding sample values in the plurality of color channels.
5. The method of claim 4, wherein the summation is of: a first color channel sample value bitshifted once to divide by two; a second color channel sample value bitshifted twice to divide by four; and a remaining color channel sample value bitshifted twice to divide by four.
6. The method of claim 4 or claim 5, wherein the difference between the luma channel and the first color channel is computed based on: the luma sample value bitshifted once to divide by two; and the first color channel sample value bitshifted once to divide by two.
7. The method of any one of claims 4 to 6, wherein the difference between the luma channel and the second color channel is computed by: computing a sum of the first color channel sample value and the remaining color channel sample value, bitshifting the sum three times to divide by eight, and subtracting from the bitshifted sum the second color channel sample value bitshifted twice to divide by four.
8. The method of any one of claims 1 to 7, wherein the image comprises a dominant color, and wherein the first and second color channels correspond to colors other than the dominant color.
9. The method of any one of claims 1 to 8, wherein the color image sensor is an RGB sensor, and wherein the first color channel is a green color channel and the second color channel is a blue color channel.
10. The method of any one of claims 1 to 9, further comprising subsampling the first chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
11. The method of any one of claims 1 to 10, further comprising subsampling the second chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
12. The method of any one of claims 1 to 11, further comprising clipping at least one portion of the image prior to generating the plurality of predicted sample values.
13. The method of any one of claims 1 to 12, wherein the lossless predictive coding mode is a JPEG lossless predictive coding mode.
14. The method of claim 13, wherein the JPEG lossless predictive coding mode is left pixel prediction.
15. The method of any one of claims 1 to 14, wherein the plurality of difference values are variable length coded using Golomb-Rice coding.
16. A method of generating an endoscopic image for wireless transmission, the method comprising:
illuminating a diagnostic area using at least one light source;
capturing, using a color image sensor, an image of the diagnostic area under illumination, the image comprising a plurality of samples in a plurality of color channels; and processing the image according to the method of any one of claims 1 to 15 to produce the endoscopic image for wireless transmission.
17. The method of claim 16, wherein the diagnostic area is illuminated with a wide spectrum light source.
18. The method of claim 16, wherein the diagnostic area is illuminated with at least one narrow band light source.
19. The method of claim 16, wherein the at least one light source comprises a wide spectrum light source and at least one narrow band light source, and further comprising switching between the wide spectrum light source and the at least one narrow band light source.
20. The method of claim 16, wherein the at least one light source comprises a wide spectrum light source, further comprising switching between a wide spectrum imaging mode and a narrow band imaging mode.
21. The method of claim 18 or claim 19, wherein the at least one narrow band light source comprises a green light source.
22. The method of claim 18 or claim 19, wherein the at least one narrow band light source comprises a blue light source.
23. An apparatus for processing an image captured using a color image sensor, the image comprising a plurality of samples in a plurality of color channels, the apparatus comprising:
a color conversion module configured to:
generate a luma channel based on the plurality of color channels;
generate a first chroma channel based on a difference between the luma channel and a first color channel in the plurality of color channels;
and generate a second chroma channel based on a difference between the luma channel and a second color channel in the plurality of color channels;
a predictive encoder configured to:
generate a plurality of predicted sample values for the luma channel and the first and second chroma channels using a lossless predictive coding mode; and compute a plurality of difference values between the plurality of predicted sample values and the respective generated sample values;
and a variable length coder configured to encode the plurality of difference values to produce a processed image.
24. The apparatus of claim 23, wherein each sample value of the luma channel is generatable using addition and shift operations.
25. The apparatus of claim 23 or claim 24, wherein each sample value of the first and second chroma channels is generatable using addition, negation and shift operations.
26. The apparatus of any one of claims 23 to 25, wherein each sample value of the luma channel is generated based on a summation of corresponding sample values in the plurality of color channels.
27. The apparatus of claim 26, wherein the summation is of: a first color channel sample value bitshifted once to divide by two; a second color channel sample value bitshifted twice to divide by four; and a remaining color channel sample value bitshifted twice to divide by four.
28. The apparatus of claim 26 or claim 27, wherein the difference between the luma channel and the first color channel is computed based on: the luma sample value bitshifted once to divide by two; and the first color channel sample value bitshifted once to divide by two.
29. The apparatus of any one of claims 26 to 28, wherein the difference between the luma channel and the second color channel is computed by: computing a sum of the first color channel sample value and the remaining color channel sample value, bitshifting the sum three times to divide by eight, and subtracting from the bitshifted sum the second color channel sample value bitshifted twice to divide by four.
30. The apparatus of any one of claims 23 to 29, wherein the image comprises a dominant color, and wherein the first and second color channels correspond to colors other than the dominant color.
31. The apparatus of any one of claims 23 to 30, wherein the color image sensor is an RGB sensor, and wherein the first color channel is a green color channel and the second color channel is a blue color channel.
32. The apparatus of any one of claims 23 to 31, further comprising a subsampler configured to subsample the first chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
33. The apparatus of any one of claims 23 to 32, further comprising a subsampler configured to subsample the second chroma channel relative to the luma channel prior to generating the plurality of predicted sample values.
34. The apparatus of any one of claims 23 to 33, further comprising a clipper configured to clip at least one portion of the image prior to generating the plurality of predicted sample values.
35. The apparatus of any one of claims 23 to 34, wherein the lossless predictive coding mode is a JPEG lossless predictive coding mode.
36. The apparatus of claim 35, wherein the JPEG lossless predictive coding mode is left pixel prediction.
37. The apparatus of any one of claims 23 to 36, wherein the plurality of difference values are variable length coded using Golomb-Rice coding.
38. An apparatus for generating an endoscopic image for wireless transmission, the apparatus comprising:
at least one light source configured to illuminate a diagnostic area;
a color image sensor configured to capture an image of the diagnostic area under illumination, the image comprising a plurality of samples in a plurality of color channels; and the apparatus according to the apparatus of any one of claims 23 to 37, configured to generate the endoscopic image for wireless transmission.
39. The apparatus of claim 38, wherein the at least one light source comprises a wide spectrum light source.
40. The apparatus of claim 38, wherein the at least one light source comprises at least one narrow band light source.
41. The apparatus of claim 38, wherein the at least one light source comprises a wide spectrum light source and at least one narrow band light source, and further comprising a switch for selecting between the wide spectrum light source and the at least one narrow band light source.
42. The apparatus of claim 38, wherein the at least one light source comprises a wide spectrum light source, further comprising at least one narrow band color filter.
43. The apparatus of claim 40 or claim 41, wherein the at least one narrow band light source comprises a green light source.
44. The apparatus of claim 40 or claim 41, wherein the at least one narrow band light source comprises a blue light source.
CA2773795A 2012-04-11 2012-04-11 Methods and apparatus for image processing in wireless capsule endoscopy Active CA2773795C (en)

Publications (2)

CA2773795A1, published 2013-10-11
CA2773795C, granted 2018-05-29


Legal Events

EEER Examination request, effective date 2017-04-05