WO2014045525A1 - Image processing apparatus, image processing program, and image processing method - Google Patents
Image processing apparatus, image processing program, and image processing method
- Publication number
- WO2014045525A1 (PCT/JP2013/005097)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interpolation
- unit
- light receiving
- pixel values
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4015—Demosaicing, e.g. colour filter array [CFA], Bayer pattern
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/0004—Microscopes specially adapted for specific applications
- G02B21/002—Scanning microscopes
- G02B21/0024—Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
- G02B21/008—Details of detection or image processing, including general computer control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
Definitions
- This technology relates to digital imaging that can be used for pathological diagnosis and the like.
- the observation target range is the range of the sample that requires microscope observation.
- the captured images are aligned and combined to generate a microscope captured image of the observation target range.
- the entire specimen is imaged at another magnification, with another optical system, or with another image sensor prior to imaging with the microscope, and the obtained image (hereinafter referred to as the whole image) is subjected to image processing so that the observation target range is detected.
- Patent Document 1 discloses a “specimen region detection method” in which an observation target range is detected from an image obtained by imaging the entire sample, and the range is microscopically imaged.
- the whole image is also used as a list image (thumbnail image) for an observer to search for a desired microscope image. Furthermore, information (characters, barcodes, etc.) for specifying the sample may be displayed on the sample, and such information is also read from the entire image.
- because the whole image is also used as a list image, and the whole image imaging device does not have a particularly high resolution, the display may be reduced further still; the visibility of the list image when reduced is therefore important.
- on the other hand, since the observation target range is specified and information is recognized from the image processing result for the whole image, it is not preferable to reduce the accuracy of the image processing by emphasizing visibility.
- an object of the present technology is to provide an image processing apparatus, an image processing program, and an image processing method capable of achieving both the visibility of the whole image used for capturing a microscope image and the image processing accuracy.
- an image processing apparatus according to the present technology includes an output value acquisition unit, a first interpolation unit, a second interpolation unit, an image generation unit, and a first edge detection unit.
- the output value acquisition unit acquires the output values of the plurality of light receiving elements from an imaging element having a plurality of light receiving elements arranged two-dimensionally.
- the first interpolation unit interpolates pixel values corresponding to each of the plurality of light receiving elements from the output value using a first interpolation algorithm.
- the second interpolation unit uses a second interpolation algorithm different from the first interpolation algorithm to interpolate pixel values of pixels corresponding to each of the plurality of light receiving elements from the output value.
- the image generation unit generates an image from pixel values of pixels corresponding to each of the plurality of light receiving elements interpolated by the first interpolation unit.
- the first edge detection unit detects an edge from a pixel value of a pixel corresponding to each of the plurality of light receiving elements interpolated by the second interpolation unit, using a first edge detection algorithm.
- the output value acquisition unit acquires output values of a plurality of light receiving elements included in the imaging element.
- the output value of each light receiving element indicates the intensity of light transmitted through each color filter (for example, R, G, B) provided in each light receiving element. Therefore, in order to generate a pixel value (for example, RGB value) corresponding to each light receiving element, it is necessary to interpolate (demosaic) the pixel value corresponding to each light receiving element from the output value of each light receiving element.
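As a minimal illustration of why this interpolation is needed (a sketch, not from the patent), the following shows how a Bayer-filtered sensor records only one colour channel per light receiving element; the `bayer_sample` helper and the RGGB layout are assumptions made for the example.

```python
# Illustrative sketch (not from the patent): a Bayer (RGGB) sensor records
# only one colour channel per light receiving element, so the other two
# channels of every pixel must later be interpolated (demosaiced).

def bayer_sample(rgb, y, x):
    """Return the single channel an RGGB Bayer sensor records at (y, x).

    rgb is a 2-D grid of (R, G, B) tuples; which channel survives depends
    only on the parity of the row and column indices.
    """
    r, g, b = rgb[y][x]
    if y % 2 == 0:
        return r if x % 2 == 0 else g   # even rows: R G R G ...
    return g if x % 2 == 0 else b       # odd rows:  G B G B ...

# Even for a uniform scene, each site reports just one channel's intensity.
scene = [[(100, 150, 200)] * 4 for _ in range(4)]
mosaic = [[bayer_sample(scene, y, x) for x in range(4)] for y in range(4)]
# mosaic[0] is [100, 150, 100, 150]; mosaic[1] is [150, 200, 150, 200]
```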
- the first interpolation unit interpolates the pixel values using the first interpolation algorithm, and the second interpolation unit interpolates them using the second interpolation algorithm.
- an interpolation algorithm suitable for each purpose can thus be employed: one for image generation (the first interpolation algorithm) and one for edge detection (the second interpolation algorithm). Therefore, according to the above configuration, it is possible to achieve both image visibility and edge detection accuracy.
- the edges detected by edge detection can be used for detection of the microscope observation area and for recognition of information displayed on the sample; improving the accuracy of edge detection therefore improves the accuracy of these processes.
- the first interpolation algorithm is an interpolation algorithm that interpolates the pixel values so that a difference between pixel values of pixels corresponding to adjacent light receiving elements is small.
- the second interpolation algorithm may be an interpolation algorithm that interpolates the pixel values such that a difference between pixel values of pixels corresponding to adjacent light receiving elements is larger than that of the first interpolation algorithm.
- since the first interpolation algorithm for generating an image interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements is small, the pixel value difference (contrast) between adjacent pixels in the generated image is reduced; that is, it is possible to generate an image with excellent visibility in which the pixel value changes smoothly.
- since the second interpolation algorithm for edge detection interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements becomes large, the accuracy of edge detection (which relies on differences between pixel values) can be improved.
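The contrast between the two interpolation styles can be illustrated with a hypothetical one-dimensional analogue; `interp_average` and `interp_gradient` are illustrative stand-ins, not the patent's algorithms.

```python
# Hypothetical 1-D analogue (not the patent's algorithms): interpolating a
# missing sample inside a step edge. Plain averaging blurs the step, while a
# gradient-aware choice of neighbour keeps the inter-pixel difference large.

def interp_average(left, right):
    """First-algorithm style: average both neighbours (smooths the edge)."""
    return (left + right) / 2

def interp_gradient(left2, left, right, right2):
    """Second-algorithm style: copy from the side with the smaller local
    gradient, which keeps the step (and hence edge contrast) intact."""
    if abs(left - left2) <= abs(right2 - right):
        return left
    return right

# Samples ... 10 10 ? 90 90 ... (a step edge with the centre missing)
smooth = interp_average(10, 90)            # 50 -> gentle ramp, edge blurred
sharp = interp_gradient(10, 10, 90, 90)    # 10 -> step edge preserved
```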
- the image processing apparatus may further include an area detection unit that performs area detection from the detection result of the first edge detection unit.
- the first edge detection unit can detect the edge with high accuracy from the pixel value interpolated by the second interpolation unit using the second interpolation algorithm. Therefore, the region detection unit that uses the detection result of the first edge detection unit can perform region detection with high accuracy.
- the image processing apparatus may further include a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to each of the plurality of light receiving elements interpolated by the second interpolation unit, and an information recognition unit that performs information recognition based on the detection result of the second edge detection unit.
- the second edge detection unit can detect edges with high accuracy from the pixel values interpolated by the second interpolation unit, and the information recognition unit that uses this detection result can perform information recognition with high accuracy.
- by separating the first edge detection algorithm, which detects edges used for area detection, from the second edge detection algorithm, which detects edges used for information recognition, an edge detection algorithm suitable for each process can be used.
- the second interpolation unit may perform YUV conversion together with the interpolation of the pixel values.
- the second interpolation unit can convert the RGB pixel values generated by the interpolation into the YUV format.
- since the second interpolation unit performs YUV conversion together with the interpolation of the pixel values (for each pixel), the processing time can be shortened compared with performing YUV conversion for all pixels after interpolating the pixel values of all pixels.
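The fused conversion described above can be sketched as follows; the BT.601 coefficients are standard values assumed for illustration, since the text does not specify a particular YUV conversion.

```python
# Sketch of a per-pixel RGB -> YUV conversion fused into the interpolation
# loop, as the text suggests; the coefficients are the standard BT.601
# full-range ones, an assumption rather than values from the patent.

def rgb_to_yuv(r, g, b):
    """Convert one interpolated RGB pixel to YUV (BT.601, full range)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def convert_fused(pixels):
    """Fused pass: convert each pixel right after it is interpolated,
    instead of a second pass over the whole image."""
    return [rgb_to_yuv(r, g, b) for (r, g, b) in pixels]

y, u, v = rgb_to_yuv(255, 255, 255)   # white: maximum luma, near-zero chroma
```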
- the first interpolation algorithm may be a linear interpolation method
- the second interpolation algorithm may be a gradient method.
- the linear interpolation method is an interpolation algorithm that interpolates pixel values so that a difference in pixel values of pixels corresponding to adjacent light receiving elements is small, and is suitable as a first interpolation algorithm.
- the gradient method is an interpolation algorithm that interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements is large, and is suitable as a second interpolation algorithm.
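A minimal sketch of the gradient-method idea follows; the details (direction test, neighbour averaging) are assumptions about an algorithm the text does not spell out. The missing green value at a red Bayer site is interpolated along the direction with the smaller gradient, so edges are not averaged across.

```python
# Assumed sketch of the "gradient method": interpolate the missing green
# value at a red Bayer site along the direction of smaller change, so that
# pixel-value differences across an edge stay large.

def green_at_red(img, y, x):
    """img: 2-D list of raw Bayer output values; (y, x) is a red site whose
    four direct neighbours carry green."""
    dh = abs(img[y][x - 1] - img[y][x + 1])   # horizontal change
    dv = abs(img[y - 1][x] - img[y + 1][x])   # vertical change
    if dh < dv:
        # image varies less horizontally: interpolate horizontally
        return (img[y][x - 1] + img[y][x + 1]) / 2
    return (img[y - 1][x] + img[y + 1][x]) / 2

# Vertical edge (bright left, dark right): the vertical neighbours agree,
# so interpolation runs along the edge instead of blurring across it.
raw = [
    [90, 90, 10],
    [90, 90, 10],
    [90, 90, 10],
]
g = green_at_red(raw, 1, 1)   # 90.0, not the edge-blurring (90 + 10) / 2
```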
- the region detection unit detects a region including an edge in the detection result by the first edge detection unit as an observation target region in the sample
- the information recognition unit may recognize a character or code displayed on the sample from a detection result by the second edge detection unit.
- the pixel region including the edge detected by the first edge detection unit can be determined as the region where the specimen (pathological tissue or the like) exists in the sample. This is because an edge is not inherently detected in a region where no specimen exists (such as on a slide glass).
- the region detection unit can detect the observation target region (that is, the region where the specimen exists) with high accuracy by using the detection result of the first edge detection unit in which the edge is detected with high accuracy.
- the information recognition unit uses the detection result obtained by the second edge detection unit, in which edges are detected with high accuracy, and can therefore detect characters and codes with high accuracy and recognize them as information.
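The area-detection principle above (edges occur only where the specimen exists) can be sketched as follows; reducing the region to a bounding box of edge pixels is an illustrative simplification, not the patent's exact method.

```python
# Hedged sketch of the area-detection idea: the specimen region is taken to
# be where edges occur, here simply the bounding box of all edge pixels.

def specimen_region(edge_map):
    """edge_map: 2-D list of 0/1 edge flags. Returns (top, left, bottom,
    right) of the smallest rectangle containing every edge pixel, or None
    if no edge was detected (e.g. bare slide glass)."""
    coords = [(y, x) for y, row in enumerate(edge_map)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return min(ys), min(xs), max(ys), max(xs)

edges = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
region = specimen_region(edges)   # -> (1, 1, 2, 2)
```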
- the image processing apparatus may further include a third interpolation unit that uses a third interpolation algorithm, different from the first interpolation algorithm and the second interpolation algorithm, to interpolate pixel values of pixels corresponding to each of the plurality of light receiving elements from the output value, and a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, based on the pixel values of the pixels corresponding to each of the plurality of light receiving elements interpolated by the third interpolation unit.
- the third interpolation unit performs pixel value interpolation using the third interpolation algorithm.
- the first edge detection unit uses the interpolation result (pixel values) of the second interpolation algorithm, and the second edge detection unit uses the interpolation result (pixel values) of the third interpolation algorithm; both edge detection units therefore perform edge detection on pixel values interpolated by an algorithm suited to their respective edge detection processes.
- since the edge detection results are used for different processes, such as the region detection and information recognition described later, the edge detection algorithm suitable for each process may differ. In such a case, the accuracy of edge detection can be improved by using an interpolation algorithm suited to each edge detection algorithm from the stage of pixel value interpolation.
- the first interpolation algorithm is an interpolation algorithm that interpolates the pixel values so that a difference between pixel values of pixels corresponding to adjacent light receiving elements is small.
- the second interpolation algorithm is an interpolation algorithm that interpolates the pixel values so that a difference between pixel values of pixels corresponding to adjacent light receiving elements is larger than that of the first interpolation algorithm.
- the third interpolation algorithm may be an interpolation algorithm that interpolates the pixel values so that a difference between pixel values of pixels corresponding to adjacent light receiving elements is larger than that of the first interpolation algorithm.
- since the first interpolation algorithm for generating an image interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements is small, the pixel value difference between adjacent pixels in the generated image is reduced. This makes it possible to generate an image with excellent visibility in which the pixel value changes smoothly.
- the second interpolation algorithm and the third interpolation algorithm for edge detection are made to be interpolation algorithms for interpolating pixel values so that a difference between pixel values of pixels corresponding to adjacent light receiving elements becomes large. Thus, the edge detection accuracy can be improved in the edge detection process.
- the image processing apparatus may further include an area detection unit that performs region detection from the detection result of the first edge detection unit, and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
- the first edge detection unit can detect the edge with high accuracy from the pixel value interpolated by the second interpolation unit using the second interpolation algorithm. Therefore, the region detection unit that uses the detection result of the first edge detection unit can perform region detection with high accuracy.
- the second edge detection unit can detect the edge with high accuracy from the pixel value interpolated by the third interpolation unit, and the information recognition unit that uses the second edge detection result has high accuracy. Information recognition can be performed.
- the second interpolation unit may perform YUV conversion together with the pixel value interpolation
- the third interpolation unit may perform YUV conversion together with the pixel value interpolation.
- the second interpolation unit and the third interpolation unit perform YUV conversion of pixel values together with pixel value interpolation (for each pixel), so that YUV conversion is performed on all pixels after interpolating the pixel values of all pixels. Compared to the case, the processing time can be shortened.
- the first interpolation algorithm is a linear interpolation method
- the second interpolation algorithm may be a gradient method
- the third interpolation algorithm may be an adaptive color plane interpolation method.
- the linear interpolation method is an interpolation algorithm that interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements is small, and is suitable as a first interpolation algorithm.
- the gradient method is an interpolation algorithm that interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements is large, and is suitable as a second interpolation algorithm.
- the adaptive color plane interpolation method (ACPI: Advanced Color Plane Interpolation) is an interpolation algorithm that interpolates pixel values so that the difference between pixel values of pixels corresponding to adjacent light receiving elements becomes large, and is suitable as a third interpolation algorithm.
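A sketch of the ACPI idea, simplified to the horizontal case: the missing green at a red site is estimated from the neighbouring greens plus a Laplacian correction from the red plane, which increases rather than smooths local differences. The exact formula used by the patent is not given, so this follows the commonly published form.

```python
# Simplified sketch of ACPI (Advanced Color Plane Interpolation): the
# missing green at a red site is the green average plus a Laplacian
# correction computed from the red plane, sharpening transitions instead
# of smoothing them. Horizontal-only for illustration.

def acpi_green_horizontal(row, x):
    """row: one Bayer row ... R G R G R ...; x indexes a red site with
    greens at x-1/x+1 and reds at x-2/x+2."""
    green_avg = (row[x - 1] + row[x + 1]) / 2
    laplacian = (2 * row[x] - row[x - 2] - row[x + 2]) / 4
    return green_avg + laplacian

# Flat region: the correction term vanishes, matching plain averaging.
flat = [80, 100, 80, 100, 80]
g_flat = acpi_green_horizontal(flat, 2)    # 100 + 0 = 100
# Around a peak in red, the Laplacian term raises the estimate,
# increasing local contrast instead of flattening it.
peak = [80, 100, 160, 100, 80]
g_peak = acpi_green_horizontal(peak, 2)    # 100 + 40 = 140
```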
- the region detection unit detects a region including an edge in the detection result by the first edge detection unit as an observation target region in the sample,
- the information recognition unit recognizes a character or code displayed on the sample from a detection result by the second edge detection unit.
- the region detection unit can detect the observation target region (that is, the region where the specimen exists) with high accuracy by using the detection result of the first edge detection unit in which the edge is detected with high accuracy.
- the information recognition unit can detect characters and codes with high accuracy and recognize them as information using the detection result of the second edge detection unit in which the edges are detected with high accuracy.
- an image processing program according to the present technology causes a computer to function as an output value acquisition unit, a first interpolation unit, a second interpolation unit, an image generation unit, and a first edge detection unit.
- the output value acquisition unit acquires the output values of the plurality of light receiving elements from an imaging element having a plurality of light receiving elements arranged two-dimensionally.
- the first interpolation unit interpolates pixel values corresponding to each of the plurality of light receiving elements from the output value using a first interpolation algorithm.
- the second interpolation unit uses a second interpolation algorithm different from the first interpolation algorithm to interpolate pixel values of pixels corresponding to each of the plurality of light receiving elements from the output value.
- the image generation unit generates an image from pixel values of pixels corresponding to each of the plurality of light receiving elements interpolated by the first interpolation unit.
- the first edge detection unit detects an edge by using a first edge detection algorithm from pixel values of pixels corresponding to each of the plurality of light receiving elements interpolated by the second interpolation unit.
- in an image processing method according to the present technology, an output value acquisition unit acquires the output values of a plurality of light receiving elements from an imaging element having the plurality of light receiving elements arranged two-dimensionally.
- the first interpolation unit uses the first interpolation algorithm to interpolate pixel values corresponding to each of the plurality of light receiving elements from the output value.
- the second interpolation unit uses a second interpolation algorithm different from the first interpolation algorithm to interpolate pixel values corresponding to each of the plurality of light receiving elements from the output value.
- an image generation unit generates an image from the pixel values of the pixels corresponding to each of the plurality of light receiving elements interpolated by the first interpolation unit.
- An edge detection unit detects an edge from the pixel values of the pixels corresponding to each of the plurality of light receiving elements interpolated by the second interpolation unit using an edge detection algorithm.
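Since no specific edge detection algorithm is named, a standard Sobel gradient-magnitude detector can stand in for the edge detection step applied to the interpolated pixel values:

```python
# Hedged sketch: a standard Sobel gradient-magnitude detector standing in
# for the unnamed "edge detection algorithm" applied to interpolated values.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, y, x):
    """Gradient magnitude at an interior pixel of a 2-D luminance grid."""
    gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5

# Vertical step edge: zero response in the flat area, strong at the step.
img = [[10, 10, 10, 90, 90, 90] for _ in range(3)]
flat_response = sobel_magnitude(img, 1, 1)   # 0.0
edge_response = sobel_magnitude(img, 1, 2)   # 320.0
```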
- As described above, according to the present technology, it is possible to provide an image processing apparatus, an image processing program, and an image processing method capable of achieving both the visibility of a whole image used for capturing a microscope image and the image processing accuracy.
- FIG. 1 is a schematic diagram illustrating a schematic configuration of the microscope imaging apparatus 1.
- the microscope imaging apparatus 1 includes a stage 11, a microscope optical system 12, a microscope imaging element 13, a whole image optical system 14, a whole image imaging element 15, a control device 16, an image processing device 17, and a display 18.
- the microscope image pickup device 13, the entire image pickup device 15, and the stage 11 are connected to the control device 16, and the control device 16 is connected to the image processing device 17.
- the display 18 is connected to the image processing device 17, and the sample S is placed on the stage 11.
- the sample S is a preparation or the like to which a specimen (for example, a pathological tissue) to be observed is fixed.
- the stage 11 supports the sample S and moves the sample S with respect to the microscope optical system 12 and the whole image imaging element 15 under the control of the control device 16. Note that the microscope optical system 12 and the entire image optical system 14 may move relative to the stage 11 instead of moving the stage 11.
- the microscope optical system 12 enlarges the image of the sample S and guides it to the microscope image sensor 13.
- the configuration of the microscope optical system 12 is not particularly limited. Further, aspects of the microscope optical system 12 such as the magnification may be controlled by the control device 16.
- the microscope image sensor 13 captures a partial microscope image (hereinafter referred to as a microscope image) of the sample S magnified by the microscope optical system 12.
- the microscope image pickup device 13 transmits the picked-up microscope image to the image processing device 17 via the control device 16.
- aspects of the microscope image pickup device 13 such as the imaging timing are controlled by the control device 16.
- the configuration of the microscope image pickup device 13 is not particularly limited, but one having high resolution is suitable.
- the whole image optical system 14 guides the image of the sample S to the whole image imaging element 15.
- the configuration of the whole image optical system 14 is not particularly limited.
- the whole image pickup element 15 picks up a whole image of the sample S (hereinafter, whole image) through the whole image optical system 14.
- the overall image pickup element 15 is an image pickup element having a plurality of light receiving elements (photoelectric conversion elements) such as a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary Metal-Oxide Semiconductor) image sensor.
- FIG. 2 is a schematic diagram showing the whole image pickup device 15.
- the overall image pickup device 15 has a plurality of light receiving elements 151 arranged two-dimensionally. Note that the number of light receiving elements 151 (number of pixels) is actually about several hundred thousand to several million.
- the overall image pickup device 15 may not be as high in resolution as the microscope image pickup device 13.
- Each light receiving element 151 is provided with a color filter in front of the light receiving surface.
- the color filter is a filter that transmits light of a specific wavelength, and generally three colors of red (R), blue (B), and green (G) are used. As shown in FIG. 2, each light receiving element 151 is provided with one of the red (R), blue (B), and green (G) color filters; that is, the whole image pickup element 15 is a single-chip color image sensor.
- the color filter array shown in FIG. 2 is an array called a Bayer array, and a large amount of green (G) is arranged in accordance with human light receiving sensitivity. Note that the overall image pickup device 15 according to the present embodiment is not limited to one having a Bayer array, and may have another color filter array. The color of the color filter is not limited to the above three colors.
- the output value of each light receiving element 151 is the intensity (light / dark) of the light transmitted through the color filter provided in the light receiving element 151.
- for example, the output value of the light receiving element 151 provided with a red (R) color filter does not include blue (B) or green (G) information. Therefore, if an image were generated directly from the output values of the light receiving elements 151, the number of pixels of each color would be smaller than the number of light receiving elements 151 (1/2 or 1/4).
- the pixel values (RGB values) are interpolated using the output values of the surrounding light receiving elements 151. This interpolation will be described later.
- the control device 16 acquires the outputs of the microscope image pickup device 13 and the entire image pickup device 15 and transmits them to the image processing device 17. Further, the control device 16 controls the stage 11 and the microscope image pickup device 13 based on information (region detection result, which will be described later) supplied from the image processing device 17.
- the image processing device 17 acquires the outputs of the microscope image pickup device 13 and the entire image pickup device 15 and executes image generation and image processing. Further, the image processing device 17 displays a microscopic image and an entire image on the display 18. Although the detailed configuration of the image processing device 17 will be described later, the image processing device 17 may be an information processing device such as a personal computer.
- the display 18 displays a microscopic image and an entire image supplied from the image processing device 17.
- the configuration of the display 18 is not particularly limited.
- the microscope imaging apparatus 1 has the above configuration.
- note that the above configuration is an example, and a configuration different from that shown here may also be used.
- the microscope imaging apparatus 1 operates as follows. First, the whole image of the sample S is picked up by the whole image pickup element 15, and whole image data is generated. The whole image data is transmitted from the whole image pickup device 15 to the image processing device 17 via the control device 16.
- the image processing device 17 generates an entire image from the entire image data. Further, the image processing device 17 performs image processing to be described later on the entire image data, and detects an area in the sample S where the specimen to be observed exists (hereinafter referred to as a specimen existing area). For example, when the sample S is a sample in which a specimen is fixed on a slide glass, it is not necessary to observe an area on the slide glass where the specimen does not exist. That is, the specimen presence area is an observation target area that requires microscopic imaging in the sample S.
- display information such as characters and codes (barcodes, etc.) may be displayed on the sample S, and the image processing device 17 recognizes this information from the whole image data.
- the characters and codes are a sample number or the like described or pasted on the sample S by a user (an observer, a sample collector, or the like).
- the image processing device 17 supplies the detected specimen presence area to the control device 16.
- the control device 16 controls the stage 11, the microscope optical system 12, and the microscope image pickup device 13 based on the specimen existing region supplied from the image processing device 17, and causes the microscope image pickup device 13 to image the specimen existing region at a predetermined magnification. Since the microscope image pickup device 13 has a narrow field of view through the microscope optical system 12, the specimen existing region is divided into a plurality of areas and imaged area by area. The microscope imaging device 13 transmits the plurality of microscope image data generated by imaging to the image processing device 17 via the control device 16.
- the image processing device 17 combines (stitching) a plurality of microscope image data. Thereby, a microscope image of the specimen existing area is generated.
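Under the simplifying assumption that the tile offsets are known exactly, the stitching step reduces to pasting each microscope image at its grid position (real stitching would also align overlapping tiles):

```python
# Sketch of the stitching step under a simplifying assumption: tile
# positions are known exactly, so combining is just pasting each tile at
# its grid position.

def stitch(tiles, tile_h, tile_w):
    """tiles: dict mapping (row, col) -> 2-D list of pixel values.
    Returns one combined 2-D image covering the whole specimen region."""
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    out = [[0] * (cols * tile_w) for _ in range(rows * tile_h)]
    for (r, c), tile in tiles.items():
        for y in range(tile_h):
            for x in range(tile_w):
                out[r * tile_h + y][c * tile_w + x] = tile[y][x]
    return out

tiles = {(0, 0): [[1, 1], [1, 1]], (0, 1): [[2, 2], [2, 2]]}
combined = stitch(tiles, 2, 2)    # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```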
- the image processing device 17 associates the entire image with the microscope image, and associates and stores display information read from the entire image, if any.
- the image processing device 17 can display the whole images on the display 18 as list images (thumbnail images), and can also additionally display the display information on each whole image.
- the image processing device 17 can display a microscope image associated with the selected sample on the display 18.
- the whole image generated from the whole image data is also used as a list image for the user to search for a desired microscope image.
- since the resolution of the whole image pickup element 15 is low, the accuracy of detection of the specimen existing area and recognition of display information becomes a problem. If the resolution of the whole image pickup element 15 were improved, these processes would take more time, which is not preferable. In particular, in fields such as pathological diagnosis, the number of samples S requiring microscopic observation is often enormous, and there is a demand for speeding up the imaging process.
- the configuration of the image processing apparatus 17 described below realizes both improved accuracy of detection of the specimen existing area and recognition of display information, and a shortened time required for these processes.
- FIG. 3 is a block diagram showing a functional configuration of the image processing apparatus 17.
- the image processing device 17 includes an output value acquisition unit 171, a first interpolation unit 172, a second interpolation unit 173, an image generation unit 174, a first edge detection unit 175, a second edge detection unit 176, An area detection unit 177, an information recognition unit 178, a storage unit 179, and an image display unit 180 are included.
- Such a functional configuration of the image processing apparatus 17 is realized by cooperation of hardware such as a CPU (Central Processing Unit) and a memory and software. Further, these configurations are not necessarily realized by a single device, but may be realized by a plurality of devices, or may be realized via a computer network or on a computer network.
- the output value acquisition unit 171 acquires the output value of each light receiving element 151 from the overall image pickup element 15 via the control device 16 (not shown).
- each light receiving element 151 is provided with a color filter of one color (R, G, or B), and the output value of each light receiving element 151 indicates the intensity (light/dark) of the light of that color that has passed through the color filter.
- the output values of each color that the output value acquisition unit 171 acquires from the light receiving elements 151 are indicated by symbols (R11 and the like) in the figure.
- the output value acquisition unit 171 supplies the acquired output value of each light receiving element 151 to the first interpolation unit 172 and the second interpolation unit 173.
- the first interpolation unit 172 and the second interpolation unit 173 generate (interpolate) pixel values of pixels corresponding to the light receiving elements 151 from the output values of the light receiving elements 151 supplied from the output value acquisition unit 171.
- FIG. 4 is a schematic diagram showing interpolation of pixel values.
- FIG. 4A shows an output value (G 32 or the like) output from each light receiving element 151.
- the output value of the center pixel is a value for green (the output value of a light receiving element 151 provided with a green color filter), while the output values of the surrounding (upper, lower, left, and right) light receiving elements 151 are values for red (R) or blue (B).
- the center pixel can therefore use its own output value as its green component (G) and interpolate its red component (R') and blue component (B') from the output values of the surrounding pixels. Such interpolation of pixel values is called "demosaicing".
- each color component (RGB value) interpolated using a pixel's own output value and the output values of the surrounding light receiving elements is set as the pixel value of the pixel corresponding to that light receiving element 151. Note that the pixel value generation shown here does not indicate a specific interpolation algorithm.
- the first interpolation unit 172 generates the pixel value of the pixel corresponding to each light receiving element 151 from the output value of each light receiving element 151 using the first interpolation algorithm.
- the first interpolation algorithm may be an interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of the pixels corresponding to the adjacent light receiving elements 151 is small. Specific examples of such an interpolation algorithm include a “linear interpolation method”.
- the linear interpolation method is an interpolation algorithm that sets the pixel value of the pixel to be interpolated to the average of the output values of the surrounding pixels.
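The averaging step described above can be sketched in a few lines. An RGGB Bayer layout, the array values, and the function name below are all illustrative assumptions, not part of the document:

```python
import numpy as np

def interpolate_green_at(raw, y, x):
    """Linear interpolation: the missing green component at a non-green
    site is the average of the four adjacent green output values."""
    neighbours = [raw[y - 1, x], raw[y + 1, x], raw[y, x - 1], raw[y, x + 1]]
    return sum(neighbours) / 4.0

# Hypothetical sensor output values (RGGB layout assumed).
raw = np.array([
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [14, 24, 16, 26],
    [34, 44, 36, 46],
], dtype=float)

# Green at the blue site (1, 1): mean of the 4 adjacent green samples.
print(interpolate_green_at(raw, 1, 1))  # (20 + 24 + 30 + 32) / 4 = 26.5
```

Because every interpolated value is an average of its neighbours, differences between adjacent pixels shrink, which is exactly the smoothing property the text attributes to the first interpolation algorithm.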
- as a result, the pixel value of each pixel is smoothed, and the differences in pixel values are smaller than with the second interpolation algorithm described later.
- with such an algorithm, which reduces the differences between adjacent pixels, there is no extreme change compared with the original image even if, for example, pixels are thinned out or averaged for reduced display.
- the first interpolation algorithm is not limited to the linear interpolation method, and may be another interpolation algorithm in which the difference in pixel value is smaller than that of the second interpolation algorithm described later.
- a "curved surface fitting method", which can prevent the color shift caused by pixel value interpolation known as false color, can also be used as the first interpolation algorithm.
- the first interpolation unit 172 supplies the pixel values of the pixels corresponding to the interpolated light receiving elements 151 to the image generation unit 174.
- the second interpolation unit 173 generates the pixel value of the pixel corresponding to each light receiving element 151 from the output value of each light receiving element 151 using the second interpolation algorithm.
- the second interpolation algorithm is an interpolation algorithm different from the first interpolation algorithm.
- the second interpolation algorithm is preferably an interpolation algorithm that interpolates pixel values so that the difference between the pixel values of the pixels corresponding to the adjacent light receiving elements 151 is larger than that of the first interpolation algorithm.
- a specific example of such an interpolation algorithm is “gradient method”.
- the gradient method is an interpolation algorithm that obtains the green component of each pixel along the direction of strong continuity and then interpolates the other color components with reference to the obtained green component.
- in the gradient method, the differences in pixel values are larger than in the linear interpolation method (first interpolation algorithm), in which each color component is simply averaged.
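The direction-selection idea behind the gradient method can be sketched as follows; the function name, the RGGB layout, and the example values are assumptions for illustration:

```python
import numpy as np

def green_by_gradient(raw, y, x):
    """Interpolate green along the direction of stronger continuity
    (smaller gradient) instead of averaging all four neighbours."""
    dh = abs(raw[y, x - 1] - raw[y, x + 1])  # horizontal variation
    dv = abs(raw[y - 1, x] - raw[y + 1, x])  # vertical variation
    if dh <= dv:
        return (raw[y, x - 1] + raw[y, x + 1]) / 2.0
    return (raw[y - 1, x] + raw[y + 1, x]) / 2.0

# A vertical edge: the left half is dark, the right half is bright.
raw = np.array([
    [0, 0, 100, 100],
    [0, 0, 100, 100],
    [0, 0, 100, 100],
], dtype=float)

# At (1, 1) the vertical gradient is 0, so interpolation follows the
# edge and yields 0.0; plain averaging of all neighbours would blur
# the edge toward 50.0.
print(green_by_gradient(raw, 1, 1))  # 0.0
```

By refusing to average across the edge, the method preserves the large pixel-value difference at the boundary, which is why it suits edge detection better than plain averaging.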
- the second interpolation algorithm is not limited to the gradient method, and may be another interpolation algorithm in which the difference in pixel value is larger than that of the first interpolation algorithm.
- the adaptive color plane interpolation (ACPI) method described in the second embodiment can be used as the second interpolation algorithm.
- the second interpolation unit 173 supplies the pixel values of the pixels corresponding to the interpolated light receiving elements 151 to the first edge detection unit 175 and the second edge detection unit 176.
- the second interpolation unit 173 can execute YUV conversion of pixel values together with interpolation of pixel values using the second interpolation algorithm.
- performing interpolation of pixel values "together with" YUV conversion means that, after the pixel value (RGB value) of the pixel corresponding to one light receiving element 151 is generated, that pixel value is YUV-converted while the pixel value of the pixel corresponding to the next light receiving element 151 is generated; that is, pixel value generation and YUV conversion are performed in parallel.
- by performing pixel value interpolation and YUV conversion together in this way, the second interpolation unit 173 can shorten the processing time compared with YUV-converting each pixel value only after the pixel values for all the light receiving elements 151 have been generated.
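The interleaving described above might be sketched as follows. The BT.601 analog coefficients are one common choice for YUV conversion; the document does not specify which coefficients are used, and the function names are assumed:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 analog YUV conversion (one common definition)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return (y, u, v)

def interpolate_and_convert(rgb_pixels):
    """Each pixel is converted as soon as its RGB value is produced,
    instead of converting the whole frame in a second pass."""
    yuv = []
    for r, g, b in rgb_pixels:           # stand-in for per-pixel interpolation
        yuv.append(rgb_to_yuv(r, g, b))  # conversion runs in the same pass
    return yuv

# Pure white carries no chroma: Y is about 255 and U, V are about 0.
print(interpolate_and_convert([(255, 255, 255)]))
```

Processing each pixel exactly once avoids a second traversal of the frame, which is the time saving the text refers to.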
- the image generation unit 174 generates an image (entire image) from the pixel values of the pixels corresponding to the respective light receiving elements 151 generated by the first interpolation unit 172.
- the first interpolation unit 172 uses the first interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of the pixels corresponding to the adjacent light receiving elements 151 is small.
- the generated image is an image with excellent visibility in which pixel values change smoothly.
- the overall image pickup device 15 often has a low resolution. In such a case, the visibility of the image is particularly important.
- the image generation unit 174 stores the generated entire image in the storage unit 179.
- the first edge detection unit 175 detects an edge from the pixel value of the pixel corresponding to each light receiving element 151 generated by the second interpolation unit 173.
- An edge is a boundary having a large difference in pixel value between adjacent pixels or a certain range of pixels, and the first edge detection unit 175 detects an edge using a first edge detection algorithm.
- the first edge detection algorithm is not particularly limited, but an edge detection algorithm that allows the area detection unit 177 described later to easily perform area detection is preferable.
- in the second interpolation unit 173, the second interpolation algorithm, which interpolates the pixel values so as to increase the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151, is used. Since an edge is easier to detect when the difference between pixel values is larger, the first edge detection unit 175 can detect edges with high accuracy. The first edge detection unit 175 supplies the edge detection result to the region detection unit 177.
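The first edge detection algorithm is left open by the document. As one widely used possibility, a Sobel gradient-magnitude detector could look like this (the threshold value and function names are illustrative assumptions):

```python
import numpy as np

def sobel_edges(gray, threshold=100.0):
    """Gradient-magnitude edge map: larger pixel-value differences
    produce stronger responses, which is why the edge-oriented
    interpolation helps this step."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)     # horizontal gradient
            gy = np.sum(ky * patch)     # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag > threshold

# A vertical step edge is detected along the brightness boundary.
gray = np.zeros((5, 5))
gray[:, 3:] = 255.0
edges = sobel_edges(gray)
print(edges[2, 2], edges[2, 0])  # True False
```

Note how the response at the step is proportional to the pixel-value difference: an interpolation algorithm that smooths that difference away would weaken the detection.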
- the second edge detection unit 176 detects an edge from the pixel value of the pixel corresponding to each light receiving element 151 generated by the second interpolation unit 173.
- the second edge detection unit 176 can detect an edge using a second edge detection algorithm different from the first edge detection algorithm.
- the second edge detection algorithm is not particularly limited, but a detection algorithm that allows the information recognition unit 178 described later to easily perform information recognition is preferable.
- since the second interpolation unit 173 uses the second interpolation algorithm, which interpolates the pixel values so that the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151 are increased, the second edge detection unit 176 can detect edges with high accuracy.
- the second edge detection unit 176 supplies the edge detection result to the information recognition unit 178.
- the region detection unit 177 performs region detection from the edge detection result of the first edge detection unit 175.
- the area detection is a process for detecting an area (sample presence area) in which the sample to be observed exists in the sample S described above.
- the region detection unit 177 can set the region where the edge detected by the first edge detection unit 175 is present as the specimen presence region.
- the presence or absence of the sample can be determined based on the presence or absence of the edge.
- since the first edge detection unit 175 detects edges with high accuracy from the pixel values generated using the second interpolation algorithm, the region detection unit 177, which uses those edges, can also perform region detection with high accuracy.
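The document leaves the region detection criterion open; one minimal reading of "the region where edges are present" is the bounding box of the detected edge pixels. The function name and return convention below are assumptions:

```python
import numpy as np

def specimen_region(edge_mask):
    """Bounding box (top, left, bottom, right) of all edge pixels,
    or None when no edge (hence no specimen) was detected."""
    ys, xs = np.nonzero(edge_mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

mask = np.zeros((8, 8), dtype=bool)
mask[2, 3] = mask[5, 7] = True
print(specimen_region(mask))  # (2, 3, 5, 7)
```

The `None` branch mirrors the text's point that the presence or absence of the specimen can be judged from the presence or absence of edges.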
- the region detection unit 177 supplies the detected specimen presence region to the control device 16.
- the information recognition unit 178 performs information recognition from the edge detection result of the second edge detection unit 176.
- in information recognition, display information displayed on the sample S, for example characters or codes (barcodes and the like), is detected and recognized as information.
- the information recognition unit 178 can execute processing such as contour extraction from the edge detected by the second edge detection unit 176 and recognize it as information.
- since the second edge detection unit 176 detects edges with high accuracy from the pixel values generated using the second interpolation algorithm, the information recognition unit 178, which uses those edges, can also perform information recognition with high accuracy.
- the information recognition unit 178 supplies the information recognition result to the storage unit 179 and stores it in association with the entire image.
- the storage unit 179 stores the entire image supplied from the image generation unit 174, the information supplied from the information recognition unit 178, the microscope image captured by the microscope image sensor 13, and the like.
- the image display unit 180 displays an image on the display 18.
- the image display unit 180 can display the whole image, the microscope image, information associated with these, and the like on the display 18 by an arbitrary interface.
- FIG. 5 is an image processing flow showing the operation of the image processing apparatus 17.
- the output value acquisition unit 171 acquires the output value of each light receiving element 151 (St101).
- the output value acquisition unit 171 may perform noise removal (St102) on the acquired output value.
- Noise removal is a process for removing physical noise and electrical noise in the output value, and can be performed using various known techniques.
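The document only says "various known techniques" for noise removal. A 3x3 median filter is one such technique and might be sketched as follows (purely illustrative, not the specific method used):

```python
import numpy as np

def median_denoise(values):
    """3x3 median filter: replaces each interior output value with the
    median of its neighbourhood, suppressing isolated noise spikes."""
    h, w = values.shape
    out = values.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(values[y - 1:y + 2, x - 1:x + 2])
    return out

noisy = np.full((3, 3), 10.0)
noisy[1, 1] = 200.0  # an isolated spike, e.g. electrical noise
print(median_denoise(noisy)[1, 1])  # 10.0
```

Unlike a mean filter, the median leaves edges comparatively intact, which matters here because edge detection follows later in the pipeline.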
- the output value acquisition unit 171 supplies the output value of each light receiving element 151 to the first interpolation unit 172 and the second interpolation unit 173, respectively.
- the first interpolation unit 172 performs pixel value interpolation using the first interpolation algorithm (linear interpolation method or the like) as described above using the supplied output value of each light receiving element 151 (St103).
- the image generation unit 174 performs correction processing on the pixel values interpolated by the first interpolation unit 172 (St104).
- the correction process includes various correction processes such as calibration with sRGB (international standard of color space by IEC (International Electrotechnical Commission)), white balance adjustment, and gamma correction. The correction process may be performed appropriately as necessary.
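One of the corrections listed above, gamma correction, maps linear intensities through a power curve. A simplified sketch (real sRGB encoding also has a linear segment near zero, omitted here; the gamma value is a common default, not one specified by the document):

```python
def gamma_correct(linear, gamma=2.2):
    """Encode a linear intensity in [0, 1] with a power-law curve."""
    return linear ** (1.0 / gamma)

# Mid-grey is brightened toward roughly 0.73 by a 2.2 gamma curve.
print(gamma_correct(0.5))
```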
- the image generation unit 174 performs trimming (cutout processing) on the pixels on which the correction processing has been executed (St105). Trimming is to remove a region other than the sample S in the visual field range of the entire image pickup device 15, and can be performed using a pixel value range (color range) or the like.
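Trimming by a pixel value range could be as simple as cropping away border rows and columns that match an assumed background colour. The function name, tolerance, and background assumption are illustrative:

```python
import numpy as np

def trim_to_specimen(image, background, tol=10):
    """Crop away border rows/columns whose pixels all lie within `tol`
    of the assumed background colour (a simple colour-range criterion)."""
    diff = np.abs(image.astype(int) - np.asarray(background)).max(axis=-1)
    ys, xs = np.nonzero(diff > tol)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

frame = np.full((6, 6, 3), 255, dtype=np.uint8)  # white background
frame[2:4, 1:5] = (200, 0, 0)                    # specimen-coloured patch
print(trim_to_specimen(frame, (255, 255, 255)).shape)  # (2, 4, 3)
```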
- the image generation unit 174 generates an image (entire image) from the pixel values subjected to correction processing and trimming (St106).
- as described above, since the pixel values are interpolated by the first interpolation unit 172 using the first interpolation algorithm suitable for image generation (St103), the entire image generated by the image generation unit 174 is an image with excellent visibility.
- the second interpolation unit 173 interpolates the pixel value using the second interpolation algorithm (gradient method or the like) as described above using the supplied output value of each light receiving element 151 (St107). Further, the second interpolation unit 173 may perform YUV conversion of pixel values together with interpolation of pixel values.
- the second interpolation unit 173 performs a correction process on the pixel value of each light receiving element 151 (St108).
- the correction process includes various correction processes such as gamma correction.
- the correction process may be performed appropriately as necessary.
- the first edge detection unit 175 performs edge detection using the first edge detection algorithm for the pixels on which the correction processing has been executed (St109).
- the region detection unit 177 performs region detection processing on the edge detection result of the first edge detection unit 175 to detect the specimen existing region (St110).
- the region detection unit 177 can thus perform region detection with high accuracy.
- the second edge detection unit 176 performs edge detection using the second edge detection algorithm for the pixels on which the correction processing has been executed (St111).
- the information recognition unit 178 performs character detection processing on the edge detection result of the second edge detection unit 176 to detect a character (St112).
- the information recognition unit 178 recognizes the detected character (St113) and acquires character information.
- the information recognition unit 178 performs code detection processing on the edge detection result of the second edge detection unit 176 to detect a code (St114).
- the information recognition unit 178 recognizes the detected code (St115) and acquires code information.
- the information recognition unit 178 can thus recognize the character information and the code information with high accuracy.
- as described above, in this embodiment, the first interpolation unit 172 interpolates the pixel values using the first interpolation algorithm suitable for image generation, and the second interpolation unit 173 interpolates the pixel values using the second interpolation algorithm suitable for edge detection. Accordingly, the image generation unit 174 can generate an entire image with high visibility, and the region detection unit 177 and the information recognition unit 178 can perform region detection and information recognition with high accuracy.
- Trimming (St505) and image generation of the entire image (St506) are executed for the pixels after this correction processing. Further, YUV conversion of pixel values (RGB values) is performed on the pixels after the correction processing (St507), and edge detection for area detection (St508) and edge detection for information recognition (St509) are executed.
- the region detection (St510) of the specimen existing region is performed on the edge detection result for region detection, and character detection (St511), character recognition (St512), and code detection (St513) are performed on the edge detection result for information recognition. And code recognition (St514) is performed.
- in the comparative example, the interpolation result of a single pixel value interpolation process (St503) is used for generation of the entire image (St506), for region detection (St510), and for information recognition (St511 to St514).
- if an interpolation algorithm suitable for generating a highly visible image is used in that pixel value interpolation process, edges become difficult to detect, and the accuracy of region detection and information recognition decreases.
- conversely, if an interpolation algorithm suitable for edge detection is used, an entire image with strong edges and low visibility is generated. That is, in the comparative example, the visibility of the entire image and the accuracy of edge detection conflict, and it is difficult to improve both.
- in this embodiment, by contrast, separate interpolation algorithms are used for generating the entire image (the first interpolation algorithm) and for edge detection (the second interpolation algorithm). For this reason, it is possible to improve both the visibility of the entire image and the accuracy of edge detection by using an optimal interpolation algorithm for each.
- FIG. 7 is a block diagram showing a functional configuration of the image processing device 27.
- the image processing device 27 includes an output value acquisition unit 271, a first interpolation unit 272, a second interpolation unit 273, a third interpolation unit 274, an image generation unit 275, a first edge detection unit 276, a second edge detection unit 277, a region detection unit 278, an information recognition unit 279, a storage unit 280, and an image display unit 281.
- Such a functional configuration of the image processing apparatus 27 is realized by cooperation of hardware, such as a CPU and memory, and software. Further, these configurations are not necessarily realized by a single device; they may be realized by a plurality of devices, or over a computer network.
- the output value acquisition unit 271 acquires the output value of each light receiving element 151 from the whole image capturing element 15.
- each light receiving element 151 is provided with a color filter of one color (R, G, or B), and the output value of each light receiving element 151 indicates the intensity (light/dark) of the light of the specific color transmitted through that color filter.
- the first interpolation unit 272 generates the pixel value of the pixel corresponding to each light receiving element 151 from the output value of each light receiving element 151 using the first interpolation algorithm.
- the first interpolation algorithm may be an interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of the pixels corresponding to the adjacent light receiving elements 151 is small. Specific examples of such an interpolation algorithm include the linear interpolation method described in the first embodiment.
- the first interpolation algorithm is not limited to the linear interpolation method, and may be another interpolation algorithm in which a difference in pixel values is smaller than those of a second interpolation algorithm and a third interpolation algorithm described later. Further, for example, a curved surface fitting method can be used as the first interpolation algorithm.
- the first interpolation unit 272 supplies pixel values of pixels corresponding to the interpolated light receiving elements 151 to the image generation unit 275.
- the second interpolation unit 273 generates the pixel value of the pixel corresponding to each light receiving element 151 from the output value of each light receiving element 151 using the second interpolation algorithm.
- the second interpolation algorithm is an interpolation algorithm different from the first interpolation algorithm.
- as the second interpolation algorithm, an interpolation algorithm that interpolates the pixel values so that the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151 are larger than with the first interpolation algorithm is preferable.
- a specific example of such an interpolation algorithm is the gradient method described in the first embodiment.
- the second interpolation algorithm is not limited to the gradient method, and may be another interpolation algorithm in which the difference in pixel value is larger than that of the first interpolation algorithm.
- the second interpolation unit 273 supplies the pixel value of the pixel corresponding to each interpolated light receiving element 151 to the first edge detection unit 276. Further, as in the first embodiment, the second interpolation unit 273 can execute YUV conversion of pixel values together with interpolation of pixel values using the second interpolation algorithm.
- the third interpolation unit 274 generates a pixel value of a pixel corresponding to each light receiving element 151 from the output value of each light receiving element 151 using a third interpolation algorithm.
- the third interpolation algorithm is an interpolation algorithm different from the first interpolation algorithm and the second interpolation algorithm.
- as the third interpolation algorithm, an interpolation algorithm that interpolates the pixel values so that the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151 are larger than with the first interpolation algorithm is preferable.
- a specific example of such an interpolation algorithm is the adaptive color plane interpolation (ACPI) method.
- the ACPI method is an interpolation algorithm that can obtain a sharper image than conventional methods by adding a high-frequency component to the linear interpolation value of the pixels around the pixel to be interpolated. Since linear interpolation has a smoothing effect, high-frequency components cannot be restored by it alone; therefore, sharpening is performed by estimating the high-frequency components from the output values of the pixels located around the pixel to be interpolated and adding them to the linear interpolation value. In the ACPI method, the differences in pixel values are larger than in the linear interpolation method (first interpolation algorithm), in which each color component is simply averaged.
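The structure described above (linear interpolation plus an estimated high-frequency term) can be sketched roughly as follows. The Laplacian-style estimate from same-colour samples two pixels away and the 0.5 gain are illustrative assumptions, not the exact ACPI coefficients:

```python
import numpy as np

def green_acpi_like(raw, y, x, gain=0.5):
    """Linear interpolation of the four green neighbours, sharpened by
    a high-frequency term estimated from samples two pixels away."""
    linear = (raw[y, x - 1] + raw[y, x + 1] +
              raw[y - 1, x] + raw[y + 1, x]) / 4.0
    highfreq = raw[y, x] - (raw[y, x - 2] + raw[y, x + 2] +
                            raw[y - 2, x] + raw[y + 2, x]) / 4.0
    return linear + gain * highfreq

flat = np.full((5, 5), 10.0)
# On a flat region the correction term vanishes and the result equals
# plain linear interpolation; near edges it adds contrast back.
print(green_acpi_like(flat, 2, 2))  # 10.0
```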
- the third interpolation algorithm is not limited to the ACPI method, and may be another interpolation algorithm in which the difference in pixel value is larger than that of the first interpolation algorithm.
- the third interpolation unit 274 supplies the pixel values of the pixels corresponding to the interpolated light receiving elements 151 to the second edge detection unit 277.
- the third interpolation unit 274 can also perform YUV conversion of pixel values together with pixel value interpolation using a third interpolation algorithm.
- the image generation unit 275 generates an entire image from the pixel values of the pixels corresponding to the respective light receiving elements 151 generated by the first interpolation unit 272. As described above, the first interpolation unit 272 uses the first interpolation algorithm, which interpolates the pixel values so that the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151 are small, so the generated image is an image with excellent visibility in which the pixel values change smoothly. The image generation unit 275 stores the generated entire image in the storage unit 280.
- the first edge detection unit 276 detects an edge from the pixel value of the pixel corresponding to each light receiving element 151 generated by the second interpolation unit 273.
- the edge is a boundary having a large difference in pixel value between adjacent pixels or a certain pixel range, and the first edge detection unit 276 detects the edge using the first edge detection algorithm.
- the first edge detection algorithm is not particularly limited, but an edge detection algorithm that allows the area detection unit 278 described later to easily perform area detection is preferable.
- in the second interpolation unit 273, the second interpolation algorithm, which interpolates the pixel values so as to increase the differences between the pixel values of the pixels corresponding to adjacent light receiving elements 151, is used. Since an edge is easier to detect when the difference between pixel values is larger, the first edge detection unit 276 can detect edges with high accuracy. The first edge detection unit 276 supplies the edge detection result to the region detection unit 278.
- the second edge detection unit 277 detects an edge from the pixel value of the pixel corresponding to each light receiving element 151 generated by the third interpolation unit 274.
- the second edge detection unit 277 can detect an edge using a second edge detection algorithm different from the first edge detection algorithm.
- the second edge detection algorithm is not particularly limited, but a detection algorithm that allows the information recognition unit 279 described later to easily perform information recognition is preferable.
- the third interpolation unit 274 uses a third interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of the pixels corresponding to the adjacent light receiving elements 151 increases. Since the edge is easier to detect when the difference between the pixel values is larger, the second edge detection unit 277 can detect the edge with high detection accuracy. The second edge detection unit 277 supplies the edge detection result to the information recognition unit 279.
- the region detection unit 278 performs region detection from the edge detection result of the first edge detection unit 276.
- the area detection is a process for detecting an area (sample presence area) in which the sample to be observed exists in the sample S described above.
- the region detection unit 278 can set the region where the edge detected by the first edge detection unit 276 is present as the specimen presence region.
- since the first edge detection unit 276 detects edges with high accuracy from the pixel values generated using the second interpolation algorithm, the region detection unit 278, which uses those edges, can perform region detection with high accuracy.
- the region detection unit 278 supplies the region detection result to the control device 16.
- the information recognition unit 279 performs information recognition from the edge detection result of the second edge detection unit 277.
- in information recognition, display information displayed on the sample S, for example character information or code information (barcodes and the like), is detected and recognized as information.
- the information recognition unit 279 can execute processing such as contour extraction from the edge detected by the second edge detection unit 277 and recognize it as information.
- since the second edge detection unit 277 detects edges with high accuracy from the pixel values generated using the third interpolation algorithm, the information recognition unit 279, which uses those edges, can also perform information recognition with high accuracy.
- the information recognition unit 279 supplies the information recognition result to the storage unit 280 and stores it in association with the entire image.
- the storage unit 280 stores an entire image supplied from the image generation unit 275, information supplied from the information recognition unit 279, a microscope image captured by the microscope image sensor 13, and the like.
- the image display unit 281 displays an image on the display 18.
- the image display unit 281 can display the entire image, the microscope image, information associated therewith, and the like on the display 18 by an arbitrary interface.
- FIG. 8 is an image processing flow showing the operation of the image processing device 27.
- the output value acquisition unit 271 acquires the output value of each light receiving element 151 (St201).
- the output value acquisition unit 271 may perform noise removal (St202) on the acquired output value.
- Noise removal is a process for removing physical noise and electrical noise in the output value, and can be performed using various known techniques.
- the output value acquisition unit 271 supplies the output value of each light receiving element 151 to the first interpolation unit 272, the second interpolation unit 273, and the third interpolation unit 274, respectively.
- the first interpolation unit 272 performs pixel value interpolation using the first interpolation algorithm (linear interpolation method or the like) as described above using the supplied output value of each light receiving element 151 (St203).
- the image generation unit 275 executes correction processing on the pixel values interpolated by the first interpolation unit 272 (St204).
- the correction process includes various correction processes such as calibration with sRGB, white balance adjustment, and gamma correction. The correction process may be performed appropriately as necessary.
- the image generation unit 275 performs trimming (cutout processing) on the pixels on which the correction processing has been executed (St205). Trimming is to remove a region other than the sample S in the visual field range of the entire image pickup device 15, and can be performed using a pixel value range (color range) or the like.
- the image generation unit 275 generates an image (entire image) from the pixels subjected to correction processing and trimming (St206). As described above, since the pixel values are interpolated by the first interpolation unit 272 using the first interpolation algorithm suitable for image generation (St203), the entire image generated by the image generation unit 275 is an image with excellent visibility.
- the second interpolation unit 273 interpolates the pixel value using the second interpolation algorithm (gradient method or the like) as described above using the supplied output value of each light receiving element 151 (St207). Further, the second interpolation unit 273 may execute YUV conversion of pixel values together with interpolation of pixel values.
- the second interpolation unit 273 executes a correction process on the pixel value of each light receiving element 151 (St208).
- the correction process includes various correction processes such as gamma correction.
- the correction process may be performed appropriately as necessary.
- the first edge detection unit 276 performs edge detection using the first edge detection algorithm for the pixels on which the correction processing has been executed (St209).
- the region detection unit 278 performs region detection processing on the edge detection result of the first edge detection unit 276 to detect the specimen existing region (St210).
- the region detection unit 278 can thus perform region detection with high accuracy.
- the third interpolation unit 274 interpolates the pixel value using the third interpolation algorithm (ACPI method or the like) as described above using the supplied output value of each light receiving element 151 (St211). Further, the third interpolation unit 274 can execute YUV conversion of pixel values together with interpolation of pixel values.
- the third interpolation unit 274 performs a correction process on the pixel value of each light receiving element 151 (St212).
- the correction process includes various correction processes such as gamma correction.
- the correction process may be performed appropriately as necessary.
- the second edge detection unit 277 performs edge detection using the second edge detection algorithm for the pixels on which the correction processing has been executed (St213).
- the information recognition unit 279 performs character detection processing on the edge detection result of the second edge detection unit 277 to detect a character (St214).
- the information recognition unit 279 recognizes the detected character (St215) and acquires character information.
- the information recognition unit 279 performs code detection processing on the edge detection result of the second edge detection unit 277 to detect a code (St216).
- the information recognition unit 279 recognizes the detected code (St217) and acquires code information.
- the information recognition unit 279 performs the character information and code information with high accuracy. Can be recognized.
- As described above, in this embodiment the first interpolation unit 272 interpolates pixel values using the first interpolation algorithm, which is suitable for image generation, while the second interpolation unit 273 and the third interpolation unit 274 interpolate pixel values using the second and third interpolation algorithms, which are suitable for edge detection. Accordingly, the image generation unit 275 can generate a whole image with high visibility, and the region detection unit 278 and the information recognition unit 279 can perform region detection and information recognition with high accuracy.
- Furthermore, in this embodiment edges are detected from pixel values interpolated by different interpolation algorithms (the second interpolation algorithm and the third interpolation algorithm). This makes it possible to selectively use an interpolation algorithm suited to region detection (such as the gradient method) and an interpolation algorithm suited to information recognition (such as the ACPI method), so the accuracy of region detection and information recognition can be improved even in comparison with the first embodiment.
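The contrast between an interpolation algorithm that keeps differences between adjacent pixels small (such as linear interpolation) and an edge-directed one (such as the gradient method) can be sketched on a single missing pixel. This is a simplified illustration of the general idea, not the exact algorithms used in the embodiments:

```python
def linear_interp(left, right):
    """First-algorithm style: plain average of the two neighbors,
    which smooths across edges and keeps adjacent differences small."""
    return (left + right) / 2.0

def gradient_interp(left, right, up, down):
    """Simplified gradient (edge-directed) interpolation: average along
    the direction with the smaller gradient, so edges stay sharp.
    An illustrative stand-in for the gradient method named in the text."""
    if abs(left - right) <= abs(up - down):
        return (left + right) / 2.0   # horizontal gradient is smaller
    return (up + down) / 2.0          # vertical gradient is smaller
```

On a pixel sitting on the dark side of a vertical edge, the linear method smears the edge to a midtone, while the gradient method interpolates along the edge and preserves it.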
- The present technology is not limited to the above embodiments and can be modified without departing from its gist.
- In the above embodiments, pixel values for edge detection are interpolated using an edge detection interpolation algorithm (the second or third interpolation algorithm). Instead of interpolating pixel values for edge detection, however, a threshold may be applied directly to the output value of each light receiving element to perform binarization.
- Information recognition here refers to character recognition or code recognition.
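The binarization modification described above, in which a threshold is applied directly to the raw sensor output values instead of interpolating pixel values first, can be sketched as follows; the threshold value 128 is an arbitrary illustration:

```python
def binarize(output_values, threshold=128):
    """Binarize raw light-receiving-element output values directly,
    skipping the interpolation step, as the modification suggests.
    threshold=128 is an illustrative value, not one given in the text."""
    return [1 if v >= threshold else 0 for v in output_values]
```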
- In the above embodiments, region detection and information recognition are performed on edges detected from pixel values interpolated by an edge detection interpolation algorithm (the second or third interpolation algorithm). However, region detection and information recognition may also be performed directly on the pixel values interpolated by the edge detection interpolation algorithm.
- The present technology may also adopt the following configurations.
- (1) An image processing apparatus including: an output value acquisition unit that acquires the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; a first interpolation unit that interpolates, using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit that interpolates, using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit that generates an image from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the first interpolation unit; and a first edge detection unit that detects an edge, using a first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit.
- (2) The image processing apparatus according to (1), in which the first interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes small, and the second interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm.
- (3) The image processing apparatus according to (1) or (2), further including a region detection unit that performs region detection from the detection result of the first edge detection unit.
- (4) The image processing apparatus according to any one of (1) to (3), further including: a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit; and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
- (5) The image processing apparatus according to any one of (1) to (4), in which the second interpolation unit performs YUV conversion together with the interpolation of the pixel values.
- (6) The image processing apparatus according to any one of (1) to (5), in which the first interpolation algorithm is a linear interpolation method and the second interpolation algorithm is a gradient method.
- (7) The image processing apparatus according to any one of (1) to (6), in which the region detection unit detects a region including an edge in the detection result of the first edge detection unit as an observation target region of the sample, and the information recognition unit recognizes a character or code displayed on the sample from the detection result of the second edge detection unit.
- (8) The image processing apparatus according to any one of (1) to (7), further including: a third interpolation unit that interpolates, using a third interpolation algorithm different from the first and second interpolation algorithms, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; and a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the third interpolation unit.
- (9) The image processing apparatus according to any one of (1) to (8), in which the first interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes small, and each of the second and third interpolation algorithms interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm.
- (10) The image processing apparatus according to any one of (1) to (9), further including: a region detection unit that performs region detection from the detection result of the first edge detection unit; and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
- (11) The image processing apparatus according to any one of (1) to (10), in which the second interpolation unit performs YUV conversion together with the interpolation of the pixel values, and the third interpolation unit performs YUV conversion together with the interpolation of the pixel values.
- (12) The image processing apparatus according to any one of (1) to (11), in which the first interpolation algorithm is a linear interpolation method, the second interpolation algorithm is a gradient method, and the third interpolation algorithm is an adaptive color plane interpolation (ACPI) method.
- (13) The image processing apparatus according to any one of (1) to (12), in which the region detection unit detects a region including an edge in the detection result of the first edge detection unit as an observation target region of the sample, and the information recognition unit recognizes a character or code displayed on the sample from the detection result of the second edge detection unit.
- (14) An image processing program that causes an information processing apparatus to function as: an output value acquisition unit that acquires the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; a first interpolation unit that interpolates, using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit that interpolates, using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit that generates an image from the pixel values interpolated by the first interpolation unit; and an edge detection unit that detects an edge, using an edge detection algorithm, from the pixel values interpolated by the second interpolation unit.
- (15) An image processing method in which: an output value acquisition unit acquires the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; a first interpolation unit interpolates, using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit interpolates, using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit generates an image from the pixel values interpolated by the first interpolation unit; and an edge detection unit detects an edge, using an edge detection algorithm, from the pixel values interpolated by the second interpolation unit.
Abstract
Description
The output value acquisition unit acquires the output value of each of the plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally.
The first interpolation unit interpolates, using a first interpolation algorithm, the pixel value of the pixel corresponding to each of the plurality of light receiving elements from the output values.
The second interpolation unit interpolates, using a second interpolation algorithm different from the first interpolation algorithm, the pixel value of the pixel corresponding to each of the plurality of light receiving elements from the output values.
The image generation unit generates an image from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the first interpolation unit.
The first edge detection unit detects an edge, using a first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit.
The second interpolation algorithm may be an interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm.
The apparatus may further include a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit, and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
The second interpolation algorithm may be a gradient method.
The information recognition unit may recognize a character or code displayed on the sample from the detection result of the second edge detection unit.
The apparatus may further include a third interpolation unit that interpolates, using a third interpolation algorithm different from the first and second interpolation algorithms, the pixel value of the pixel corresponding to each of the plurality of light receiving elements from the output values, and a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the third interpolation unit.
The second interpolation algorithm may be an interpolation algorithm that interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm, and the third interpolation algorithm may likewise interpolate the pixel values so that this difference becomes larger than with the first interpolation algorithm.
The apparatus may further include a region detection unit that performs region detection from the detection result of the first edge detection unit, and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
The third interpolation unit may perform YUV conversion together with the interpolation of the pixel values.
The second interpolation algorithm may be a gradient method, and the third interpolation algorithm may be an adaptive color plane interpolation (ACPI) method.
The information recognition unit may recognize a character or code displayed on the sample from the detection result of the second edge detection unit.
In the image processing program according to the present technology, the output value acquisition unit acquires the output value of each of the plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; the first interpolation unit interpolates, using a first interpolation algorithm, the pixel values of the pixels corresponding to the plurality of light receiving elements from the output values; the second interpolation unit interpolates, using a second interpolation algorithm different from the first interpolation algorithm, the pixel values of the pixels corresponding to the plurality of light receiving elements from the output values; the image generation unit generates an image from the pixel values interpolated by the first interpolation unit; and the first edge detection unit detects an edge, using a first edge detection algorithm, from the pixel values interpolated by the second interpolation unit.
In the image processing method according to the present technology, a first interpolation unit interpolates, using a first interpolation algorithm, the pixel values of the pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit interpolates, using a second interpolation algorithm different from the first interpolation algorithm, the pixel values of the pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit generates an image from the pixel values interpolated by the first interpolation unit; and an edge detection unit detects an edge, using an edge detection algorithm, from the pixel values interpolated by the second interpolation unit.
A microscope imaging apparatus according to a first embodiment of the present technology will be described.
FIG. 1 is a schematic diagram showing the configuration of the microscope imaging apparatus 1. As shown in the figure, the microscope imaging apparatus 1 includes a stage 11, a microscope optical system 12, a microscope image sensor 13, a whole-image optical system 14, a whole-image sensor 15, a control device 16, an image processing device 17, and a display 18. The microscope image sensor 13, the whole-image sensor 15, and the stage 11 are connected to the control device 16, and the control device 16 is connected to the image processing device 17. The display 18 is connected to the image processing device 17, and a sample S is placed on the stage 11. The sample S is, for example, a prepared slide on which a specimen to be observed (e.g., pathological tissue) is fixed.
The microscope imaging apparatus 1 operates as follows. First, a whole image of the sample S is captured by the whole-image sensor 15, and whole-image data is generated. The whole-image data is transmitted from the whole-image sensor 15 to the image processing device 17 via the control device 16.
The functional configuration of the image processing device 17 will be described. FIG. 3 is a block diagram showing the functional configuration of the image processing device 17. As shown in the figure, the image processing device 17 includes an output value acquisition unit 171, a first interpolation unit 172, a second interpolation unit 173, an image generation unit 174, a first edge detection unit 175, a second edge detection unit 176, a region detection unit 177, an information recognition unit 178, a storage unit 179, and an image display unit 180.
The operation of the image processing device 17 having the above configuration will be described. FIG. 5 is an image processing flow showing the operation of the image processing device 17.
FIG. 6 shows, as a comparative example, the image processing flow performed in a typical microscope imaging apparatus.
A microscope imaging apparatus according to a second embodiment of the present technology will be described. In this embodiment, the components other than the image processing device are the same as in the first embodiment, so they are given the same reference numerals and their description is omitted.
The functional configuration of the image processing device 27 will be described. FIG. 7 is a block diagram showing the functional configuration of the image processing device 27. As shown in the figure, the image processing device 27 includes an output value acquisition unit 271, a first interpolation unit 272, a second interpolation unit 273, a third interpolation unit 274, an image generation unit 275, a first edge detection unit 276, a second edge detection unit 277, a region detection unit 278, an information recognition unit 279, a storage unit 280, and an image display unit 281.
The operation of the image processing device 27 having the above configuration will be described. FIG. 8 is an image processing flow showing the operation of the image processing device 27.
17, 27 … image processing device
15 … whole-image sensor
151 … light receiving element
171 … output value acquisition unit
172 … first interpolation unit
173 … second interpolation unit
174 … image generation unit
175 … first edge detection unit
176 … second edge detection unit
177 … region detection unit
178 … information recognition unit
271 … output value acquisition unit
272 … first interpolation unit
273 … second interpolation unit
274 … third interpolation unit
275 … image generation unit
276 … first edge detection unit
277 … second edge detection unit
278 … region detection unit
279 … information recognition unit
Claims (15)
- 1. An image processing apparatus comprising: an output value acquisition unit that acquires the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; a first interpolation unit that interpolates, using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit that interpolates, using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit that generates an image from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the first interpolation unit; and a first edge detection unit that detects an edge, using a first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit.
- 2. The image processing apparatus according to claim 1, wherein the first interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes small, and the second interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm.
- 3. The image processing apparatus according to claim 2, further comprising a region detection unit that performs region detection from the detection result of the first edge detection unit.
- 4. The image processing apparatus according to claim 3, further comprising: a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the second interpolation unit; and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
- 5. The image processing apparatus according to claim 2, wherein the second interpolation unit performs YUV conversion together with the interpolation of the pixel values.
- 6. The image processing apparatus according to claim 2, wherein the first interpolation algorithm is a linear interpolation method and the second interpolation algorithm is a gradient method.
- 7. The image processing apparatus according to claim 4, wherein the region detection unit detects a region including an edge in the detection result of the first edge detection unit as an observation target region of a sample, and the information recognition unit recognizes a character or code displayed on the sample from the detection result of the second edge detection unit.
- 8. The image processing apparatus according to claim 1, further comprising: a third interpolation unit that interpolates, using a third interpolation algorithm different from the first and second interpolation algorithms, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; and a second edge detection unit that detects an edge, using a second edge detection algorithm different from the first edge detection algorithm, from the pixel values of the pixels corresponding to the plurality of light receiving elements interpolated by the third interpolation unit.
- 9. The image processing apparatus according to claim 8, wherein the first interpolation algorithm interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes small, and each of the second and third interpolation algorithms interpolates the pixel values so that the difference between the pixel values of pixels corresponding to adjacent light receiving elements becomes larger than with the first interpolation algorithm.
- 10. The image processing apparatus according to claim 9, further comprising: a region detection unit that performs region detection from the detection result of the first edge detection unit; and an information recognition unit that performs information recognition from the detection result of the second edge detection unit.
- 11. The image processing apparatus according to claim 9, wherein the second interpolation unit performs YUV conversion together with the interpolation of the pixel values, and the third interpolation unit performs YUV conversion together with the interpolation of the pixel values.
- 12. The image processing apparatus according to claim 9, wherein the first interpolation algorithm is a linear interpolation method, the second interpolation algorithm is a gradient method, and the third interpolation algorithm is an adaptive color plane interpolation (ACPI) method.
- 13. The image processing apparatus according to claim 10, wherein the region detection unit detects a region including an edge in the detection result of the first edge detection unit as an observation target region of a sample, and the information recognition unit recognizes a character or code displayed on the sample from the detection result of the second edge detection unit.
- 14. An image processing program that causes an information processing apparatus to function as: an output value acquisition unit that acquires the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; a first interpolation unit that interpolates, using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; a second interpolation unit that interpolates, using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; an image generation unit that generates an image from the pixel values interpolated by the first interpolation unit; and an edge detection unit that detects an edge, using an edge detection algorithm, from the pixel values interpolated by the second interpolation unit.
- 15. An image processing method comprising: acquiring, by an output value acquisition unit, the output value of each of a plurality of light receiving elements from an image sensor in which the plurality of light receiving elements are arranged two-dimensionally; interpolating, by a first interpolation unit using a first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; interpolating, by a second interpolation unit using a second interpolation algorithm different from the first interpolation algorithm, pixel values of pixels corresponding to the plurality of light receiving elements from the output values; generating, by an image generation unit, an image from the pixel values interpolated by the first interpolation unit; and detecting, by an edge detection unit using an edge detection algorithm, an edge from the pixel values interpolated by the second interpolation unit.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014536575A JP6156383B2 (ja) | 2012-09-24 | 2013-08-29 | Image processing apparatus, image processing program, and image processing method |
EP13839221.2A EP2881914B1 (en) | 2012-09-24 | 2013-08-29 | Image processing apparatus, image processing program and image processing method |
US14/429,133 US10739573B2 (en) | 2012-09-24 | 2013-08-29 | Image processing apparatus and image processing method for achieving visibility of an entire image used for capturing a microscopic image and accuracy of edge detection |
US16/928,290 US11467387B2 (en) | 2012-09-24 | 2020-07-14 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-209420 | 2012-09-24 | ||
JP2012209420 | 2012-09-24 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/429,133 A-371-Of-International US10739573B2 (en) | 2012-09-24 | 2013-08-29 | Image processing apparatus and image processing method for achieving visibility of an entire image used for capturing a microscopic image and accuracy of edge detection |
US16/928,290 Continuation US11467387B2 (en) | 2012-09-24 | 2020-07-14 | Image processing apparatus and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014045525A1 true WO2014045525A1 (ja) | 2014-03-27 |
Family
ID=50340867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005097 WO2014045525A1 (ja) | 2012-09-24 | 2013-08-29 | Image processing apparatus, image processing program, and image processing method |
Country Status (4)
Country | Link |
---|---|
US (2) | US10739573B2 (ja) |
EP (1) | EP2881914B1 (ja) |
JP (1) | JP6156383B2 (ja) |
WO (1) | WO2014045525A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020065442A1 (ja) * | 2018-09-28 | 2020-04-02 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6790734B2 (ja) * | 2016-11-02 | 2020-11-25 | Nikon Corporation | Device, method, and program |
US10726522B2 (en) * | 2018-01-24 | 2020-07-28 | Fotonation Limited | Method and system for correcting a distorted input image |
CN110705576B (zh) * | 2019-09-29 | 2020-09-22 | Huiying Medical Technology (Beijing) Co., Ltd. | Region contour determination method and apparatus, and image display device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000331143A (ja) * | 1999-05-14 | 2000-11-30 | Mitsubishi Electric Corp | 画像処理方法 |
JP2005012479A (ja) * | 2003-06-18 | 2005-01-13 | Sharp Corp | データ処理装置、画像処理装置、カメラおよびデータ処理方法 |
JP2009125432A (ja) * | 2007-11-27 | 2009-06-11 | Topcon Corp | 眼科撮影装置及び眼科画像処理装置 |
JP2011180925A (ja) * | 2010-03-02 | 2011-09-15 | Ricoh Co Ltd | 画像処理装置、撮像装置及び画像処理方法 |
JP2012117930A (ja) | 2010-12-01 | 2012-06-21 | Sony Corp | 検体領域検出方法、検体領域検出装置及び検体領域検出プログラム |
JP2012175241A (ja) * | 2011-02-18 | 2012-09-10 | Toshiba Corp | 画像処理装置 |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS58137384A (ja) * | 1982-02-10 | 1983-08-15 | Sony Corp | Signal processing circuit for color camera |
JP2893801B2 (ja) * | 1990-02-26 | 1999-05-24 | Sony Corporation | Television receiver |
JPH08190611A (ja) * | 1995-01-06 | 1996-07-23 | Ricoh Co Ltd | Image processing device |
US6404927B1 (en) * | 1999-03-15 | 2002-06-11 | Exar Corporation | Control point generation and data packing for variable length image compression |
JP2004038794A (ja) * | 2002-07-05 | 2004-02-05 | Toyota Motor Corp | Image processing device and image processing method |
KR100505663B1 (ko) * | 2003-01-02 | 2005-08-03 | 삼성전자주식회사 | 적응형 윤곽 상관 보간에 의한 디스플레이 장치의 순차주사 방법 |
KR100548611B1 (ko) * | 2003-08-07 | 2006-01-31 | 삼성전기주식회사 | 영상 처리에 있어서의 에지 강조를 위한 장치 및 방법 |
US7212214B2 (en) * | 2004-09-03 | 2007-05-01 | Seiko Epson Corporation | Apparatuses and methods for interpolating missing colors |
US7064770B2 (en) | 2004-09-09 | 2006-06-20 | Silicon Optix Inc. | Single-pass image resampling system and method with anisotropic filtering |
JP2007110291A (ja) * | 2005-10-12 | 2007-04-26 | Canon Inc | Image processing method and image processing apparatus |
KR100809687B1 (ko) * | 2006-02-28 | 2008-03-06 | 삼성전자주식회사 | 영상신호에 포함된 잡음을 제거할 수 있는 영상신호처리장치 및 방법 |
US8331720B2 (en) * | 2007-08-15 | 2012-12-11 | Japan Science And Technology Agency | Image processing device, method, and program |
US20090059094A1 (en) * | 2007-09-04 | 2009-03-05 | Samsung Techwin Co., Ltd. | Apparatus and method for overlaying image in video presentation system having embedded operating system |
JP5098054B2 (ja) * | 2007-11-22 | 2012-12-12 | Olympus Corporation | Image processing device and image processing program |
KR101446772B1 (ko) * | 2008-02-04 | 2014-10-01 | 삼성전자주식회사 | 디지털 영상 처리 장치 및 그 제어 방법 |
US8013911B2 (en) * | 2009-03-30 | 2011-09-06 | Texas Instruments Incorporated | Method for mixing high-gain and low-gain signal for wide dynamic range image sensor |
JP5381930B2 (ja) * | 2010-08-20 | 2014-01-08 | JVC Kenwood Corporation | Video control device and video control method |
US9516389B1 (en) * | 2010-12-13 | 2016-12-06 | Pixelworks, Inc. | On screen display detection |
-
2013
- 2013-08-29 US US14/429,133 patent/US10739573B2/en active Active
- 2013-08-29 WO PCT/JP2013/005097 patent/WO2014045525A1/ja active Application Filing
- 2013-08-29 JP JP2014536575A patent/JP6156383B2/ja active Active
- 2013-08-29 EP EP13839221.2A patent/EP2881914B1/en not_active Not-in-force
-
2020
- 2020-07-14 US US16/928,290 patent/US11467387B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP2881914A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020065442A1 (ja) * | 2018-09-28 | 2020-04-02 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device |
JPWO2020065442A1 (ja) * | 2018-09-28 | 2021-10-07 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device |
US11631708B2 | 2018-09-28 | 2023-04-18 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device |
JP7395490B2 (ja) | 2018-09-28 | 2023-12-11 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device |
Also Published As
Publication number | Publication date |
---|---|
US20200341252A1 (en) | 2020-10-29 |
EP2881914A4 (en) | 2016-05-25 |
US20150248000A1 (en) | 2015-09-03 |
US11467387B2 (en) | 2022-10-11 |
JP6156383B2 (ja) | 2017-07-05 |
EP2881914B1 (en) | 2018-04-25 |
EP2881914A1 (en) | 2015-06-10 |
JPWO2014045525A1 (ja) | 2016-08-18 |
US10739573B2 (en) | 2020-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11467387B2 (en) | Image processing apparatus and image processing method | |
US9756266B2 (en) | Sensor data rescaler for image signal processing | |
JP6576921B2 (ja) | マルチスペクトル撮像のための自動焦点方法およびシステム | |
US20130021504A1 (en) | Multiple image processing | |
US8854448B2 (en) | Image processing apparatus, image display system, and image processing method and program | |
CN112367459B (zh) | 图像处理方法、电子装置及非易失性计算机可读存储介质 | |
JP5372068B2 (ja) | 撮像システム、画像処理装置 | |
JP2004260821A (ja) | 画像データを取込み、かつフィルタリングするイメージセンサ | |
JP2008516501A (ja) | デジタル画像のフレーミング及び鮮明さを検査するためのシステム及び方法 | |
JP6099477B2 (ja) | 撮像装置、顕微鏡システム及び撮像方法 | |
US11922598B2 (en) | Image processing apparatus, image processing method, and storage medium | |
JP2008511899A (ja) | リニア・アレイを用いたマイクロスコープ・スライド・スキャナにおけるデータ管理システムおよび方法 | |
KR101792564B1 (ko) | 영상 처리 방법 및 이를 이용한 영상 처리 장치 | |
JP2014146872A (ja) | 画像処理装置、撮像装置、画像処理方法及びプログラム | |
RU2626551C1 (ru) | Способ формирования панорамных изображений из видеопотока кадров в режиме реального времени | |
CN114830626A (zh) | 摄像装置、摄像装置的工作方法、程序及摄像系统 | |
JP4271648B2 (ja) | 画像合成装置、撮像手段、およびプログラム | |
JP2011108250A (ja) | リニア・アレイを用いたマイクロスコープ・スライド・スキャナにおけるデータ管理システムおよび方法 | |
CN112640430A (zh) | 成像元件、摄像装置、图像数据处理方法及程序 | |
JP7415079B2 (ja) | 撮像装置、撮像方法、及びプログラム | |
JP5092958B2 (ja) | 画像処理装置、及び、プログラム | |
JP6314281B1 (ja) | 画像処理方法及び前景領域取得方法 | |
JP6639120B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
CN115735211A (zh) | 信息处理装置、学习设备、摄像装置、信息处理装置的控制方法及程序 | |
JP2010021835A (ja) | 撮像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13839221 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014536575 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013839221 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14429133 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |