EP1025546A1 - Pipeline processor for medical and biological image analysis - Google Patents
- Publication number
- EP1025546A1 (application EP97913051A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- pipeline
- image
- stage
- segmentation
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Definitions
- the present invention relates to processor architectures, and more particularly to a pipeline processor for medical and biological image analysis.
- X-ray photographs can be used to assess bone or soft tissue, and microscopic images of cells can be used to identify disease and estimate prognosis. If these images are transformed into digital representations, computing machines can be used to enhance image clarity, identify key components, or even conclude an automatic evaluation. Such images tend to be highly complex and full of information, and place high demands on computing machines.
- Fig. 1 shows the principal processing steps according to classical image analysis.
- the first step 1 involves digitizing a visual field 10 into a spatially-discrete set of elements or pixels 12 (shown individually as 12a and 12b in Fig. 1). Each pixel 12 comprises a digital representation of the integrated light intensity within that spatially-discrete region of the visual scene 10.
- the image 12 is segmented in a processing step 2 that aims to separate the regions or objects of interest 14a, 14b in the image from the background. Segmentation is often a difficult and computationally intensive task.
- Each distinct object 14 located in the segmentation phase is uniquely labeled as object "y" (16a) and object "x" (16b) so that the identity of the object 16 can be recovered.
- In step 4, a series of mathematical measurements 18 is calculated for each object 16 which encapsulates the visual appearance of each object as a set of numerical quantities. In the art, this step is known as feature extraction.
- step 5 involves classifying the extracted features 18 using any one of a variety of known hierarchical classification algorithms to arrive at a classification or identification of the segmented objects.
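The five classical steps above can be sketched in software. The following Python fragment is illustrative only and is not part of the patent: a toy image is segmented by a hypothetical intensity threshold, the objects are labelled by flood fill, and each object is reduced to two illustrative features (area and mean intensity).

```python
# Illustrative sketch of Fig. 1: segmentation (step 2), labelling
# (step 3) and feature extraction (step 4) on a tiny pixel grid.
# All names and the threshold value are hypothetical.

def segment(image, threshold):
    """Step 2: separate objects of interest from the background."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

def label_objects(mask):
    """Step 3: give each 4-connected object a unique label."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                stack = [(y, x)]
                while stack:                      # flood fill
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = next_label
                        stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
    return labels, next_label

def extract_features(image, labels, n):
    """Step 4: reduce each object to numbers (area, mean intensity)."""
    feats = {k: {"area": 0, "sum": 0} for k in range(1, n + 1)}
    for y, row in enumerate(labels):
        for x, k in enumerate(row):
            if k:
                feats[k]["area"] += 1
                feats[k]["sum"] += image[y][x]
    return {k: (v["area"], v["sum"] / v["area"]) for k, v in feats.items()}

image = [[10, 80, 10, 10],
         [10, 90, 10, 70],
         [10, 10, 10, 60]]
mask = segment(image, 50)
labels, n = label_objects(mask)
print(extract_features(image, labels, n))   # {1: (2, 85.0), 2: (2, 65.0)}
```

The feature vectors would then be passed to the classifier of step 5.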
- the computation is divided into a number of computational operations, much like an assembly line of computer operations.
- In image processing, if a complex set of image processing tasks can be broken down into a set of smaller tasks that can be performed serially, and if the raw data for processing can be broken down as well, then a pipeline architecture can be implemented.
- the raw data comprises the digitized image 12 (Fig. 1).
- The operations must be capable of being performed serially, on one small portion of the image at a time, and cannot require any more information than was available before that portion of the image reached the task in question.
- the pipeline processor is able to provide an increase in processing speed that is proportional to the length of the pipeline.
- In pipeline processing, two approaches have emerged: coarse-grained pipelining 20 and fine-grained pipelining 30, as shown in Fig. 2.
- In coarse-grained pipelining, the image processing task is broken into rather large operational blocks 22a, 22b, where each of the blocks 22 comprises a complex set of operations.
- The coarse-grained approach utilizes higher-level computational structures to successfully perform the computing tasks.
- Each of the higher-level computational structures 22 can themselves be pipelined, if the operations at that level lend themselves to pipelining.
- In the fine-grained approach, the computational task is broken down into a fundamental set of logical operation blocks: AND 32a, OR 32b, NAND 32c, NOR 32d, NOT 32e and XOR 32f.
- Fine-grained pipeline architectures are the most difficult to design in general, but offer an approach to the theoretical maximum rate of operation.
- Johnston et al. in published PCT Patent Application No. WO 93/16438 discloses an apparatus and method for rapidly processing data sequences.
- the Johnston system comprises a massively parallel architecture with a distributed memory.
- the results of each processing step are passed to the next processing stage by a complex memory sharing scheme.
- each discrete or atomic operation must process an entire frame before any results are available for following operations.
- Total processing time is on the order of the time to process one image frame multiplied by the number of atomic operations.
- The number of image storage and processing elements equals the total processing time divided by the image scan time. In order to reduce the hardware requirements, it is necessary to use high-speed logic and memory.
- Another problem in the art involves the interpretation of the boundary regions of digital images.
- boundary regions present a special problem because the objects can be cut-off or distorted.
- Overlapping digitization procedures eliminate this difficulty at the global level. Nevertheless, it is necessary for the computer system to recognize the proper boundary of the image in order to restrict operations in this special area. In the Johnston system, the system software would have to be recompiled with new constants; in the alternative, image size variables would need to be supplied to each processor, requiring these variables to be checked after every pixel operation.
- Pipeline processors are typically required in applications where data must be processed at rates of approximately 200 million bits of digital information per second. When this information is to be processed for image analysis, the number of operations required may easily approach 50 billion per second. Accordingly, a practical pipeline processor must be able to handle such data volumes.
- the present invention provides a pipeline processor architecture suitable for medical and biological image analysis.
- The pipeline processor comprises fine-grained processing pipelines wherein the computational tasks are broken down into fundamental logic operations.
- each processing stage directly connects to the inputs of the next stage.
- each atomic operation requires only one row and 3 clock cycles from input to output.
- the total amount of storage for intermediate results is greatly reduced.
- most of the storage elements can be of the form of data shift registers, eliminating the need for memory addressing and reducing the memory control requirements.
- The required processing speed is reduced to the image scan time divided by the number of pixels. In practice this is approximately two-thirds of the pixel scan rate, allowing the use of much lower-speed, more reliable, and cheaper hardware, and fewer elements as well.
- the pipeline processor system utilizes a frame and line synchronization scheme which allows the boundary of each image to be detected by a simple logic operation. These synchronization, or synch, signals are pipelined to each processing stage. This allows image dimensions to be determined within the limits of the available storage during the hardware setup operation.
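As a hedged illustration of this synchronization scheme (the actual signal encoding is not specified here; the stream representation below is an assumption), a frame-sync and a line-sync bit accompanying each pixel let a stage detect the image boundary, and recover the image dimensions, with a simple logic test rather than stored size constants:

```python
# Illustrative model: each pixel travels with frame-sync and
# line-sync bits, so any stage can find the image boundary and
# measure the image size without compiled-in dimension constants.

def pixel_stream(image):
    """Yield (pixel, frame_sync, line_sync) triples."""
    for y, row in enumerate(image):
        for x, p in enumerate(row):
            frame_sync = (y == 0 and x == 0)   # start of frame
            line_sync = (x == 0)               # start of line
            yield p, frame_sync, line_sync

def measure_dimensions(stream):
    """Recover the image size purely from the sync bits."""
    width = height = x = 0
    for p, fs, ls in stream:
        if ls:                                 # simple logic test
            height += 1
            x = 0
        x += 1
        width = max(width, x)
    return width, height

img = [[0] * 5 for _ in range(3)]
print(measure_dimensions(pixel_stream(img)))   # (5, 3)
```

This mirrors how the pipelined synch signals allow image dimensions to be determined during hardware setup.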
- the pipeline processor also features a debug buffer.
- The debug buffer provides a feedback path for examining the results of the pipeline operations without modifying normal operation of the pipeline.
- the debug buffer is useful for performing system self-checks and performance monitoring.
- the present invention provides a pipeline processor for processing images, said pipeline processor comprising: an input stage for receiving said image, a segmentation pipeline stage coupled to the output of said input stage, said segmentation pipeline stage including means for segmenting said image into selected portions, a feature extraction pipeline stage coupled to the output of said segmentation pipeline stage, said feature extraction pipeline stage including means for associating features with said selected portions; and an output stage for outputting information associated with the processing of said image, and a controller for controlling the operation of said pipeline stages.
- the present invention provides in a pipeline processor for processing images comprising an input stage for receiving an image of a biological specimen, a segmentation pipeline stage coupled to the output of the input stage for segmenting said image into selected portions, a feature extraction pipeline for associating features with the selected portions, and an output stage for outputting information associated with the processing of said image, a hardware organization comprising: a backplane having a plurality of slots adapted for receiving cards carrying electronic circuitry; the cards including, a processor card for carrying a control processor, an output card for carrying a memory circuit for storing information processed by the pipeline stages and a communication interface for transferring said information to another computer, one or more module cards, wherein each of said module cards includes means for receiving a plurality of pipeline cards, each of said pipeline cards comprising modules of the segmentation and feature extraction pipeline stages; said backplane including bus means for transferring information and control signals between said cards plugged into the slots of said backplane.
- Fig. 1 shows the processing steps according to classical image analysis techniques;
- Fig. 2 shows in block diagram form two approaches to pipeline processor architectures;
- Fig. 3 shows in block diagram form a pipeline processor according to the present invention;
- Fig. 4 shows the pipeline processor according to the invention in the context of a high-speed image analysis system;
- Fig. 5 shows the acquisition of spectral images by the camera sub-system for the image analysis system of Fig. 4;
- Fig. 6 shows the hardware organization of the pipeline processor according to the present invention;
- Fig. 7 shows the data distribution on the custom backplane of Fig. 6;
- Fig. 8 shows a quad card for the hardware organization of Fig. 6;
- Fig. 9 shows the timing for data transfer on the busses for the backplane;
- Fig. 10 shows the input levelling stage of the pipeline processor in more detail;
- Fig. 11 shows a general filter stage for the segmentation pipeline; and
- Fig. 12 shows a general stage of the feature extraction pipeline in the pipeline processor.
- Fig. 3 shows in block diagram form a pipeline processor 100 according to the invention.
- the major sub-systems of the pipeline processor 100 are a control processor 110, a segmentation sub-system pipeline 120, a feature extraction pipeline sub-system 130, and an uplink transfer module 140.
- the sub-systems for the pipeline processor 100 are carried by a custom backplane 202 and hardware arrangement 200 according to another aspect of the invention.
- the pipeline processor 100 forms the "front-end" of a high-speed image processing system 50 as shown in Fig. 4.
- the pipeline processor 100 performs the pre-processing of images for a final classification or analysis.
- the preprocessing steps include conditioning the image and segmentation of the image for feature extraction. When these operations are complete, the transformation of the digital image into segmented objects with features is complete and subsequent pattern classification and analysis can be rapidly concluded.
- the pipeline processor features a general purpose design that allows the computational elements to be rapidly reconfigured or modified.
- the high-speed image processing system 50 shown in Fig. 4 is for the automated evaluation and analysis of Pap monolayer specimens.
- the image processing system 50 comprises a camera sub-system 52, a control computer 54, a host computer 56, and a series of peripheral devices 58.
- the control computer 54 provides the overall control for the system 50 and includes a peripheral control interface 55 for controlling the peripheral devices 58.
- the peripheral devices include a bar code reader 58a, a focussing system 58b, a scanner 58c, and a slide loader 58d.
- the peripheral devices 58 do not form part of the present invention and are described merely to provide an overview of the image processing system 50.
- the focussing system and the slide loader are the subject of pending application Nos.
- the host computer 56 includes a communication interface 57 for receiving processed data from the uplink transfer module 140 in the pipeline processor 100.
- the principal function of the host computer 56 is to classify the processed data according to a classification algorithm.
- the control 54 and host 56 computers are linked by a serial RS232 communication link 59.
- The control computer 54 is responsible for the overall direction of the pipeline processor 100 and the image analysis system 50 (i.e. the cytological instrument).
- While the pipeline processor 100 is described in the context of an image analysis system 50 for detecting precursors to cervical cancer in cytological specimens prepared by standard monolayer techniques and stained in accordance with usual laboratory procedures, the pipeline processor 100 according to the invention provides an architecture which facilitates rapid reconfiguration for other related medical or biological image analysis.
- The camera sub-system 52 comprises a light source 61 and an array of charge coupled devices 62 (CCD's). As depicted in Fig. 5, the camera sub-system 52 generates a series of three digital images I1, I2 and I3 from the slide containing a Pap monolayer specimen S.
- The monolayer specimen comprises cervical cells and related cytological components which have been prepared according to the well-known Pap protocol. In order to be viewed in the visible spectrum range, these cells are stained according to the Papanicolaou protocol, and each of the digital images I1, I2, I3 corresponds to a narrow spectral band.
- The three narrow spectral bands for the images I1, I2, I3 are chosen so as to maximize the contrast among the various important elements of the cervical cells as stained under the Papanicolaou protocol.
- The pipeline processor 100 comprises three parallel pipelines, one for each channel or spectral band. Each of these pipeline channels can operate independently or may contribute data to its neighbours under certain circumstances.
- The pipeline processor 100 includes an input conditioning module 150, a high-speed receiver module 152, an analog control module 154, and a debug buffer module 155 as shown.
- the high-speed receiver 152 and analog control 154 modules interface the pipeline processor 100 to the camera sub-system 52.
- the debug buffer module 155 is coupled to the control bus for the control processor 110.
- the debug buffer 155 provides a feedback path for examining the results of pipeline operations in real-time without modifying normal operation of the pipeline. This information is useful for automatic detection and diagnosis of hardware faults.
- The pipeline processor 100 also includes a bi-directional communication interface 156 for communicating with the control computer 54.
- The pipeline processor 100 also includes a general purpose serial RS232 port 158 and a general purpose parallel (i.e. printer) port 160.
- the uplink transfer module 140 includes a communication interface 142 for transferring processed data from the pipeline 100 to the host computer 56.
- the receiver module 152, and the communication modules 142, 156 are preferably implemented as fiber-optic based links in order to provide a high speed communication link with a very wide bandwidth.
- the receiver interface module 152 is used to receive the output images from the camera sub-system 52.
- the bi-directional communication interface 156 is used for receiving control commands and status requests from the control computer 54.
- the uplink communication interface 142 is used to send segmentation and feature extraction results generated by the pipeline processor 100 to a classification module in the host computer 56.
- Fig. 6 shows a hardware organization 200 for a pipeline processor 100 according to the present invention.
- the hardware organization 200 allows the computational elements to be rapidly reconfigured or modified for different types of image processing applications.
- the hardware organization 200 for the pipeline processor 100 comprises a backplane 202 and a set of printed-circuit cards which plug into the backplane 202.
- The printed-circuit cards include a processor and input card 204, an uplink communications card 206, and a number of quad pipeline module cards 208.
- the backplane 202 also carries two serially connected data busses: video-out bus 205a and video-in bus 205b.
- the layout of the parallel 203 and serial 205 busses on the backplane 202 is shown in more detail in Fig. 7.
- the pipeline modules 208 comprise "quad" module cards that can accept up to four smaller pipeline modules 209.
- this arrangement allows for rapid prototyping, reconfiguration and modification of the pipeline processor 100.
- the quad module cards 208 include a circuit arrangement 300 for controlling the direction of data flow between the backplane 202 and the plug-in modules 209 on the card 208 without the need for external jumpers.
- the circuit 300 comprises a field programmable gate array (FPGA) 301 and a set of transceivers 302 shown individually as 302a, 302b and 302c.
- The first transceiver 302a couples the quad module card 208 to the F-bus 203d.
- The second transceiver 302b couples the quad module card 208 to the S-bus 203b, and the third transceiver 302c couples the card 208 to the L-bus 203c.
- the FPGA 301 sets the appropriate direction of data flow into or out of the backplane 202 and the plug-in modules 209.
- the six data busses 203a to 203d and 205a to 205b on the backplane 202 provide the means for distributing information to and from the control processor card 204, the uplink communications card 206, and the quad module cards 208.
- the six busses are arranged as 26-bit signal busses on a pair of 96-pin DIN connectors.
- the backplane 202 distributes power and a small set of global signals to all of the cards 204 to 208.
- the backplane 202 also includes a mechanism for identifying each card slot during a reset through the use of a 4-bit slot identification.
- the video-out 205a and video-in 205b busses comprise respective 26-bit signal busses.
- the video busses 205a, 205b are connected in series through the cards inserted in the backplane 202, for example, the control processor card 204 and one of the quad module cards 208a as shown in Fig. 7.
- the serial connection scheme means that the control processor 204 and quad module 208 cards should be mounted next to each other in the backplane 202 to avoid breaking the serial link.
- the B-bus 203a, S-bus 203b, L-bus 203c and F-bus 203d provide the data busses for the backplane 202.
- the four data busses 203a- 203d are arranged in parallel so that any card 204 to 208 plugged into the backplane 202 is coupled to the busses for data transfer in or out of the card.
- the Levelled image or L-bus 203c is driven by the input levelling or conditioning circuit 150 (Fig. 4).
- L-bus 203c provides normalized image data for each of the modules 209 in the backplane 202.
- the L-bus 203c is synchronized with the Segmentation data or S-bus 203b.
- the Segmentation bus 203b is driven by a segmentation output stage 121 in the segmentation pipeline 120 (Fig. 4) which is found on the quad card 209.
- The segmentation output module 121 provides a set of binary images of the segmentation maps generated in the segmentation pipeline 120 as well as two label maps (one each for cytoplasm and nuclei).
- the Feature or F-bus 203d carries smoothed image information from the input levelling module 150 during each image frame. During this time, the frame and line synchronization bus lines are in step with the segmentation bus 203b. At the end of each image frame, the feature bus 203d is then used by each of the feature modules in the feature extraction pipeline 130 in turn to send feature information to the uplink communication module 140.
- the operation of the data busses 203a-203d is further described with reference to the timing diagram in Fig. 9.
- The timing for transferring images from the CCD's 62 is shown by timing signal TCCD.
- The time to transfer an image, i.e. I1, I2 or I3 (Fig. 5), is time t1.
- After the first image is transferred to the image processing section (i.e. the high-speed receiver module 152), the input levelling module 150 performs the calculations for levelling the image I1 (I2, I3) and puts the levelled image onto the video-out bus 205a for the segmentation pipeline 120.
- The time taken for the input levelling module to complete its operations is t2, and this time is less than t1. This difference provides a window, denoted as time t3, in the data streams where nothing needs to be transferred on the data busses 203a to 203d. (The interval t3 is approximately 15% of the duration of time t2.)
- the segmentation bus 203b carries the segmentation results and the feature bus 203d carries the smoothed image results.
- When the segmentation pipeline 120 operation is complete, the segmentation results together with the levelled and smoothed images are simultaneously placed on the respective S-bus 203b and L-bus 203c. During the timing window t3, there is no data being transferred, and therefore the results from the feature extraction pipeline 130 can be transmitted to the next stage of operation without interfering with the normal data flow. This amounts to a multiplexing of the features on the F-bus 203d.
- Each of the 26-bit busses 203a to 203d and 205a to 205b comprises 24 data bits and 2 synchronization bits.
- One synchronization bit is for the image frame and the other is for the image line.
- the frame synchronization signal is used as a data strobe signal and the line synchronization signal is used as a data direction, i.e. read/not write, signal.
- the backplane control bus B-bus 203a may be used to monitor the outputs of any stage of the segmentation pipeline 120 while the latter is in operation without interruption.
- the pipeline control CPU 110 and the input conditioning/levelling module 150 are for convenience carried on the same printed circuit card 204.
- the control CPU 110 is responsible for the control of the camera subsystem 52, the segmentation 120 and feature extraction 130 pipelines and the uplink transfer module 140.
- the control CPU 110 comprises a processor, boot memory, instruction and data memory, a control port, a backplane interface, and a watch-dog reset (not shown).
- the control CPU 110 receives instructions (e.g. commands and initialization data) from the control computer 54 via the bi-directional interface 156 (Fig. 4), and returns status information. Following start-up, the control CPU 110 scans, initializes and tests the various elements of the pipeline processor 100 and returns the status information to the control computer 54. Only after successful completion of start-up will the control computer 54 instruct the pipeline processor 100 to begin image capture by the camera 52.
- the control CPU 110 is preferably implemented using a highly integrated RISC-based microcontroller which includes a variety of on-board support functions such as those for ROM and DRAM, the serial port, the parallel printer port, and a set of peripheral strobes.
- A suitable device for the control CPU is the AMD 29200 RISC microcontroller manufactured by Advanced Micro Devices, Inc., Sunnyvale, California.
- The fibre-optic bi-directional interface 156 to the control computer 54, the analog control interface 154 for the camera sub-system 52, the image capture (CCD) control and levelling circuitry, and the control bus interface for the backplane 202 are preferably implemented in a Field Programmable Gate Array (FPGA) which, in addition, controls the other FPGA's in the system 50.
- The watch-dog circuit is implemented as a single chip. The implementation of the FPGA's and watch-dog circuit is within the knowledge of one skilled in the art.
- The control CPU 110 preferably integrates control of the boot memory (not shown), the serial port 158, and the parallel port 160. In addition, the control CPU 110 decodes six strobe signals for the off-chip peripherals. In the preferred embodiment, the control CPU 110 is configured to use an 8-bit wide ROM (Read Only Memory) as the boot memory and is able to control this memory directly. A single SIMM (Single In-line Memory Module), or dual-banked SIMM's, is used for instruction and data memory.
- The serial port 158 is pin compatible with a 9-pin PC serial port (additional serial control lines are connected to the on-chip I/O lines).
- The control lines for the parallel port 160 are connected to a bi-directional latch (not shown) to drive a PC-compatible 25-pin parallel port. Additional control lines may be handled by on-chip I/O lines.
- the control port drives two dual control channels. Each of these channels can be used serially, via the interface 154 (Fig. 4), to send data to an analog device such as the camera sub-system 52.
- the input conditioning module 150 is coupled between the high-speed receiver module 152 and the segmentation pipeline 120.
- The input conditioning circuit 150 first acquires the set of three raw images I1, I2, I3 from the camera sub-system 52. To ensure high-speed operation, the camera data is transferred over a fibre-optic data link in the receiver module 152 (Fig. 4), which has the additional advantage of being relatively immune to electrical noise generated by the environment of the automated system 50.
- The principal function of the input conditioning module 150 is to condition the set of three images I1, I2, I3 before the images are passed to the segmentation pipeline 120.
- The conditioning involves correcting the images I1, I2, I3 for local variations in illumination level across the visual field and for variations in the illumination intensity.
- The input conditioning module 150 comprises an input buffer 170, a levelling pipeline function 171, a background level buffer 172, a background level detector 173 and a focus calculator 174.
- the output of the levelling function pipeline 171 is coupled to the input of the segmentation pipeline 120.
- the output of the levelling pipeline 171 is also coupled to the F-bus 203d through a delay buffer 175 and a smoothing function filter 176.
- the output of the levelling pipeline 171 is also coupled to the Levelling bus 203c through the two delay buffers 175 and 177.
- Each of the three images I1, I2, I3 in the set is stored in the input data buffer 170, ready to be processed by the levelling pipeline function 171 and the other computational tasks required by the system 50.
- A second set of fibre-optic links is used to receive image information from the camera sub-system 52 via the analog interface 154. While each image I1, I2, I3 is received and stored, the background level detector 173 calculates background information and the focus calculator 174 calculates focus information.
- The background levelling buffers 172 store a background correction for each of the three images I1, I2, I3. Once calculated, the background correction data does not change and is called for repeatedly for each of the new images that enters the input conditioning module 150.
- the control CPU 110 transmits the focus information to the control computer 54.
- the images are sent along with the background level values stored in the background buffer 172 to the levelling (i.e. normalization) pipeline function 171.
- The levelling pipeline function 171 levels the images I1, I2, I3 to extract cytoplasm and nuclear binary maps.
- The background illumination (flash) level detector 173 uses a histogram technique to find and measure the peak in the intensity of each image I1, I2, I3. While the intrinsic background can be easily corrected in the pixel pipeline, the variations in the illumination levels of the stroboscopic flash in the camera sub-system 52 need to be assessed and corrected so that the maximum dynamic range can be extracted from the levelled images. With the background level information, the images can be optimally corrected for variations in the stroboscopic flash level intensity.
- the histogram peak detect interface captures the most frequently occurring pixel input value for each image frame and each image channel. This information is used to level (normalize) the input images. In addition, this information is used in a feedback path to control and stabilize the flash intensity of the stroboscopic flash lamp via an analogue control line.
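A minimal software sketch of the histogram peak detect, assuming the peak is simply the most frequently occurring pixel value in the frame (function and variable names are illustrative, not taken from the patent):

```python
# Illustrative histogram peak detect: the most frequent pixel value
# in a frame approximates the background (flash) level, which is then
# used as the normalization reference for levelling.
from collections import Counter

def histogram_peak(image):
    """Most frequently occurring pixel value in the frame."""
    counts = Counter(p for row in image for p in row)
    return counts.most_common(1)[0][0]

frame = [[200, 200, 200, 40],
         [200, 55, 200, 200]]
print(histogram_peak(frame))   # 200 -- the background (flash) level
```

In the hardware this value would also feed the analogue control loop that stabilizes the flash intensity.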
- The Focus calculator 174 is used to calculate the optimal focus position. While the optimal focus position is not generally required by the image processing routine, the focal position is useful at the opening phase of the specimen's analysis when the focal positions are not yet known. Thus, during this initial phase, the input conditioning module 150 performs the tasks of receiving the raw images I1, I2, I3, levelling these images and then calculating the so-called focus number (based on a Laplacian measure of image sharpness). This measure of focal correctness is returned to the control computer 54 to allow the optimal focal position to be discovered in a regular algorithm of motion and measurements.
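The patent does not give the exact Laplacian formulation; the sketch below assumes the common discrete 4-neighbour Laplacian, summing its absolute response so that a sharper image yields a larger focus number:

```python
# Illustrative focus number: sum of |Laplacian| over interior pixels.
# The 4-neighbour kernel and the summation are assumptions; a sharper
# image has stronger local intensity changes, hence a larger total.

def focus_number(image):
    h, w = len(image), len(image[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y-1][x] + image[y+1][x] +
                   image[y][x-1] + image[y][x+1] - 4 * image[y][x])
            total += abs(lap)
    return total

sharp = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]        # crisp point
blurred = [[20, 25, 20], [25, 30, 25], [20, 25, 20]]
print(focus_number(sharp) > focus_number(blurred))   # True
```

Repeating this measurement over a series of focal positions lets the control computer pick the position with the maximum focus number.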
- the levelling pipeline function 171 comprises a pipelined computational system that accepts single pixels for each of the three image channels and performs the levelling operations on them.
- The levelling pipeline 171 uses the raw image and the background correction data to correct the images for an intrinsic inhomogeneity associated with the imaging system 50. This is done by dividing the raw image pixel by the appropriate background image pixel and can thus be implemented in a single pixel pipeline architecture. This is implemented with FPGA's (in conjunction with a look-up table for the divide operations) at the logical or gate level and as such comprises the first of the fine-grained pipelines in the processor 100.
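A software stand-in for the divide-by-background levelling, with a look-up table playing the role of the FPGA divide memory; the 8-bit range, the scaling and the zero-background guard are assumptions for illustration:

```python
# Illustrative levelling: each raw pixel is divided by the matching
# background pixel. In hardware the divide is a look-up table; the
# table below is a software stand-in (8-bit scaling assumed).

MAX = 255

def make_divide_lut():
    """Precompute a divide table indexed by (raw, background) --
    in an FPGA this is a memory addressed by the two pixel values."""
    return [[min(MAX, (raw * MAX) // bg) if bg else MAX
             for bg in range(MAX + 1)]
            for raw in range(MAX + 1)]

LUT = make_divide_lut()

def level_pixel(raw, background):
    return LUT[raw][background]

# A pixel in a dimly lit region (background 128) is brightened to
# match a fully lit region (background 255):
print(level_pixel(64, 128), level_pixel(128, 255))   # 127 128
```

Because the operation is strictly per pixel, it drops naturally into the single-pixel fine-grained pipeline the paragraph describes.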
- the levelled images, i.e. cytoplasm and nuclear binary maps, from the levelling pipeline 171 are then sent to the segmentation pipeline 120.
- frame synchronization and line synchronization signals are sent to the segmentation pipeline 120.
- The synchronization signals are provided to simplify the detection of the edges of the images I1, I2, I3 for special handling.
- the first stage in the segmentation pipeline 120 is a Nuclear detect (NetCalc) function.
- This stage 122 utilizes a neural-network based procedure for deciding whether an individual pixel is to be associated with a nuclear region or a cytoplasmic region.
- the neural network is implemented as a look-up table held in memory and accessed by the decoding of an address made up of pixel intensity values. This allows the neural network (or a scheme for this type of decision) to be rapidly updated and modified when needed and includes the possibility of a real-time adjustment of the nuclear detection function based on preliminary measurements of image quality and nature.
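A hedged sketch of this look-up-table organization follows; the 4-bit quantization, the address packing and the toy decision rule used to fill the table are all assumptions for illustration, not the trained network of the patent:

```python
# Illustrative NetCalc-style LUT: the per-pixel nuclear/cytoplasm
# decision is a table addressed by the pixel's intensities in the
# three spectral channels, so the "network" can be retrained offline
# and simply reloaded into memory.

BITS = 4              # assume 4 bits per channel -> 12-bit address
LEVELS = 1 << BITS

def address(i1, i2, i3):
    """Pack three quantized channel intensities into one address."""
    q = lambda v: (v * (LEVELS - 1)) // 255
    return (q(i1) << (2 * BITS)) | (q(i2) << BITS) | q(i3)

def build_table(decide):
    """Fill the LUT from any trained decision function."""
    table = bytearray(LEVELS ** 3)
    for a in range(LEVELS):
        for b in range(LEVELS):
            for c in range(LEVELS):
                table[(a << (2 * BITS)) | (b << BITS) | c] = decide(a, b, c)
    return table

# Toy stand-in for the trained network: dark in all channels => nucleus.
table = build_table(lambda a, b, c: 1 if a + b + c < 12 else 0)

def classify_pixel(i1, i2, i3):
    return table[address(i1, i2, i3)]

print(classify_pixel(20, 30, 25), classify_pixel(200, 220, 210))   # 1 0
```

Swapping in a new `decide` function (or a retrained network) only rewrites the table, which is what makes the real-time adjustment the paragraph mentions practical.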
- the implementation of the neural network is described in co-pending application no. CA96/00619 filed on September 18, 1996 in the name of the common applicant.
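The table-driven classifier can be illustrated as below. The address width is reduced to 4 bits per channel purely to keep the example small; the actual table dimensions, training procedure, and channel assignments are not reproduced from the patent or the co-pending application:

```python
import numpy as np

# Illustrative 4 bits per channel -> a 4096-entry decision table.
BITS = 4

def make_address(c1, c2, c3):
    # Concatenate the top BITS of each channel intensity into one
    # table address, as the hardware decodes intensities into an address.
    q = lambda v: v >> (8 - BITS)
    return (q(c1) << (2 * BITS)) | (q(c2) << BITS) | q(c3)

def classify(lut, c1, c2, c3):
    # 1 = nuclear pixel, 0 = cytoplasmic pixel (labels assumed here).
    return lut[make_address(c1, c2, c3)]
```

Because the decision logic lives entirely in the table contents, retraining or adjusting the classifier amounts to rewriting memory, which is what permits the real-time adjustment described above.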
- the next stage in the segmentation pipeline 120 comprises Sobel and cytoplasm threshold functions.
- the Sobel function comprises a known algorithmic technique for the detection of edges in grey-scale images.
- the Sobel function is required by the segmentation pipeline 120 to guide subsequent refinements of the segmentation.
- the Sobel function is implemented to process 3x3 blocks of pixels.
- the Cytoplasm detect function uses a threshold routine to distinguish, at a preliminary phase, the cytoplasmic regions from the background debris based on the integrated optical density.
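Both 3x3 stages can be sketched as follows. The |gx| + |gy| gradient approximation and the threshold value are illustrative choices, not taken from the patent:

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
KY = KX.T

def sobel_magnitude(img):
    """Edge response per interior pixel from a 3x3 block, as in the
    pipeline's block-wise processing."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = img[y - 1:y + 2, x - 1:x + 2]
            gx = int((block * KX).sum())
            gy = int((block * KY).sum())
            out[y, x] = abs(gx) + abs(gy)  # cheap magnitude approximation
    return out

def cytoplasm_mask(img, threshold=40):
    # Preliminary cytoplasm/background split; the threshold is assumed.
    return (img.astype(np.int32) >= threshold).astype(np.uint8)
```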
- the levelled images from the levelling pipeline 171 also pass through the delay buffer 175.
- the delay buffer 175 holds the levelled images until the feature extraction pipeline 130 begins processing, so that all of the images generated by the various pipeline operations are present at the same time.
- the smoothing function filter 176 smooths the levelled images before they are outputted to the Feature bus 203d.
- the smoothing function 176 utilizes a standard image smoothing operation that requires blocks of 3x3 pixels implemented, again, in a wider pipeline.
- the smoothing operations are based on two different weighted averages of neighbouring pixels.
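A software sketch of one such 3x3 weighted-average stage is shown below. The box and centre-weighted kernels are stand-ins, as the patent does not give the two weightings actually used:

```python
import numpy as np

# Two illustrative neighbour weightings (both normalized to sum to 1).
KERNELS = {
    "box": np.full((3, 3), 1.0) / 9.0,
    "centre": np.array([[1, 1, 1],
                        [1, 4, 1],
                        [1, 1, 1]], dtype=np.float64) / 12.0,
}

def smooth(img, kernel="box"):
    """Replace each interior pixel with a weighted average of its
    3x3 neighbourhood; border pixels are passed through unchanged."""
    k = KERNELS[kernel]
    img = img.astype(np.float64)
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = (img[y - 1:y + 2, x - 1:x + 2] * k).sum()
    return out
```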
- another delay 177 is applied to the levelled images before being outputted on the Levelling bus 203c.
- the total delay along this path is set so that the images appearing on the L-bus 203c and the F-bus 203d are synchronized with the output of the segmentation pipeline 120.
- the result of the operation of the input conditioning module 150 is the output of binary images of preliminary nuclear positions and preliminary cytoplasm positions on the video-out bus 205a, together with the smoothed results of the Sobel operations on the Feature data bus 203d. These three data streams are received by the next stage of the segmentation pipeline, carried in modules on the quad module card 208.
- the quad module card 208 is designed to carry up to four pipeline boards 209 for performing segmentation or feature extraction operations.
- the quad module card 208 is configured to provide line driving capabilities, time multiplexing of the feature and control data busses, and power distribution. As described above in the discussion of the custom backplane, it consists of four parallel busses 203a to 203d and two serial busses 205a to 205b. The busses are driven by bus transceivers and the necessary logic is implemented in a small FPGA, as will be within the understanding of those skilled in the art.
- the quad module card 208 is central to the general purpose design of the image processing system.
- the quad module card 208 allows various modules that implement segmentation and feature extraction operations in configurable hardware (i.e. FPGAs) to be arranged as needed, thereby providing the flexibility to improve the accuracy of a segmentation result or to add additional features that may be required by the subsequent classification algorithms.
- Each one of these filters 122 consumes a block of 3x3 pixels (of either 8-bit or 1-bit depth) fed to it after the preliminaries of levelling and input conditioning in general have been performed as described above.
- the order of operations within the segmentation pipeline 120 comprises a generalized noise reduction operation, followed by the labelling of the resultant cytoplasmic regions. This is followed by another generalized noise reduction operation, a subsequent nuclear region labelling, and final noise reduction before the results are presented to the S-bus 203b.
- Fig. 11 shows a filter stage 122 for the segmentation pipeline 120 in greater detail.
- the filter stage 122 does not comprise a microprocessor unit implementing some variety of software or a set of general-purpose adders. Such an implementation would represent a coarse-grained pipeline approach and would fail to properly exploit the power of this type of computing architecture.
- the pipeline processor 100 utilizes a fine-grained approach, and accordingly each filter unit 122 comprises a number of elementary logic elements which are arranged in the form of a serial pipeline and perform their logical functions rapidly so as to quickly pass on to the next block of data waiting in line.
- the filter stage 122 comprises a filter pipeline function module 180.
- the filter pipeline module 180 has a pixel stream input 181 and an output 182.
- the pixel stream is also coupled to an input 183 through a row delay buffer 184 and to another input 185 through an additional row delay buffer 186.
- the filter stage 122 includes a mask stream input 188, a frame synchronization input 189, and a line synchronization input 190.
- the frame synchronization is applied to another input 191 through a delay buffer 192, and the line synchronization 190 is applied to another input 193 through a delay buffer 194.
- the input pixel stream 181 is fed directly into the filter pipeline function 180. There is a latency amounting to two full image rows plus three pixels of the third row before the pipeline 180 can begin operations (corresponding to a 3x3 processed element).
- the pixel stream 181 is delayed by the one row buffer 184 to fill in the second line of the pixel block and that same stream of pixels is further delayed by the row buffer 186 to fill the final row of the 3x3 block.
- the total in-out delay time (as opposed to the latency) for the pipeline is only one row and three clocks.
- the mask stream 188 held in memory is available for the logical functions.
- the frame 189 and line 190 synchronization signals together with the delayed inputs 191, 193 complete the inputs to the filter pipeline function 180.
- the noise reduction elements in the pipeline comprise a combination of specialized erosion and dilation operations.
- the principal result of these operations is the alteration of the state of the centre pixel in the 3x3 block based on the state of one or more of the neighbour pixels. In erosion operations the centre pixel is turned "off" under the appropriate neighbourhood condition, while in dilation operations it is turned "on".
- the dilation function works on a binary pixel map to fill in irregularities. A 3x3 matrix is examined to determine if the central pixel should be turned on based on the number of neighbours that are on (the "order"). If this pixel is already on it is left on.
- the erosion function reverses the action of the dilation function by restoring the boundary of a block of pixels to the original dimensions.
- a 3x3 matrix is examined to determine if the central pixel should be turned off based on the number of neighbours that are on (the "order"). If this pixel is already off it is left off.
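The order-based dilate/erode pair can be sketched as follows. Reading "turned off based on the number of neighbours that are on" as "turned off when fewer than `order` neighbours are on" is one plausible interpretation, corresponding to the Order1 to Order8 hardware modules listed below:

```python
import numpy as np

def neighbours_on(binary, y, x):
    """Count of 'on' pixels among the 8 neighbours in the 3x3 block."""
    block = binary[y - 1:y + 2, x - 1:x + 2]
    return int(block.sum()) - int(binary[y, x])

def dilate(binary, order):
    out = binary.copy()
    for y in range(1, binary.shape[0] - 1):
        for x in range(1, binary.shape[1] - 1):
            if binary[y, x] == 0 and neighbours_on(binary, y, x) >= order:
                out[y, x] = 1  # pixels already on are left on
    return out

def erode(binary, order):
    out = binary.copy()
    for y in range(1, binary.shape[0] - 1):
        for x in range(1, binary.shape[1] - 1):
            if binary[y, x] == 1 and neighbours_on(binary, y, x) < order:
                out[y, x] = 0  # pixels already off are left off
    return out
```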
- the dilate special function works on a source binary map and an edge binary map to fill in irregularities.
- a 3x3 matrix is examined to determine if the central pixel should be turned on. If this pixel is already on it is left on.
- the central pixel of the edge map enables an alternate rule for filling in the central pixel of the source map.
- the dilation not join function works on a binary pixel map to fill in irregularities while avoiding joining adjacent objects.
- the 3x3 input matrix and the 4 previously calculated result pixels are examined to determine if the central pixel result should be turned on. If this pixel is already on it is left on.
- the dilate special not join function works identically to the Dilate Not Join function with the addition of a mask bit.
- the central pixel of the mask map enables an alternate rule for filling in the central pixel of the source map.
- the dilate label not join function works on a source label map, the result label map and an edge map to fill in irregularities while avoiding joining adjacent objects.
- a 3x3 matrix of the source and result map is examined to determine if the central pixel should be turned on based on which of the neighbours are on. If this pixel is already non-zero or the edge map is zero its value is left unchanged.
- the central pixel of the mask map enables an alternate rule for filling in the central pixel of the source map.
- the following operations are implemented in the hardware as a part of the noise reduction scheme in the segmentation pipeline 120:
- Subadd2 Module - returns total of input bits as 0, 1 or 2 and greater.
- Subadd3 Module - returns total of input bits as 0, 1, 2 or 3 and greater.
- Subadd4 Module - returns total of input bits as 0, 1, 2, 3 or 4 and greater.
- Subsum3 Module - returns sum of 2 input numbers as 0, 1, 2 or 3 and more.
- Subsum6 Module - returns sum of 2 input numbers as 0, 1, 2, 3, 4, 5 or 6.
- Subjoin Module - returns join sub-sum for 1 edge and 1 corner.
- Join Module - returns true if dilation operation would join 2 regions.
- Orderl Module - returns true if 1 or more nearest neighbours are on.
- Order2 Module - returns true if 2 or more nearest neighbours are on.
- Order3 Module - returns true if 3 or more nearest neighbours are on.
- Order4 Module - returns true if 4 or more nearest neighbours are on.
- Order5 Module - returns true if 5 or more nearest neighbours are on.
- Order6 Module - returns true if 6 or more nearest neighbours are on.
- Order7 Module - returns true if 7 or more nearest neighbours are on.
- Order8 Module - returns true if 8 nearest neighbours are on.
- the pipeline processor 100 proceeds to the detection of either cytoplasmic or nuclear material in the image I.
- the "detection" function is also implemented in the segmentation pipeline 120 and can comprise both nuclear and cytoplasm detection operations or only the nuclear detect operation.
- This module in the segmentation pipeline 120 receives the Sobel, NetCalc and BinCyt bit streams from the input conditioning module 150 (as described above) over the video-in bus 205b. The signals are processed in parallel, fine-grained pipelines to produce the UnfilteredNuc, BinNuc, NucPlus and BinCyt bit streams. These results from the segmentation pipeline 120 are then used in various phases of the feature extraction modules in the feature extraction pipeline 130 to calculate the feature sets which are then used in a subsequent image classification.
- the Primary Labelling operation comprises an operation in which a segmented region (either nuclear material or cytoplasmic material) is given a unique number within the image so that it may be later identified when the classification is complete. This is done before the feature extraction phase is begun so that feature extraction can be applied to the labelled and segmented objects and also since the location of disparate nuclei within any single cytoplasmic region can be an important feature in itself when attempting a classification of cytological material.
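Primary labelling is, in effect, connected-component labelling. A simple flood-fill version, an illustrative software stand-in for the gate-level implementation (4-connectivity is an assumption), might look like:

```python
import numpy as np

def primary_label(binary):
    """Assign a unique integer label to each connected segmented
    region so it can be identified after classification."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                stack = [(sy, sx)]
                while stack:  # flood-fill one region with its label
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and binary[y, x] and labels[y, x] == 0):
                        labels[y, x] = next_label
                        stack.extend([(y - 1, x), (y + 1, x),
                                      (y, x - 1), (y, x + 1)])
    return labels, next_label
```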
- This function can be either implemented at the gate level in a Field Programmable Gate Array, or alternatively application specific integrated circuits (ASIC) can be used.
- the feature extraction pipeline 130 comprises a number of feature extraction modules 132 in parallel, shown individually as 132a, ... 132m in Fig. 4. Reference is next made to Fig. 12, which shows the feature extraction module 132 in more detail.
- the feature extraction module 132 comprises a feature calculator 210 and accumulator arrays 212, shown individually as 212a, 212b, 212c, 212d. One block of these accumulator arrays 212 is assigned to each feature and, within each accumulator block, one accumulator is assigned to each label. In total, each block may be expected to hold in excess of 21,000 accumulators.
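The accumulator-array organization can be sketched as follows: one running sum per (feature, label) pair builds up as the labelled pixel stream passes. The three features shown (area, integrated intensity, x-moment) are illustrative stand-ins for whichever features each block accumulates:

```python
import numpy as np

class FeatureAccumulators:
    """One accumulator per label for each of three example features."""

    def __init__(self, n_labels):
        self.area = np.zeros(n_labels + 1, dtype=np.int64)
        self.sum_intensity = np.zeros(n_labels + 1, dtype=np.int64)
        self.sum_x = np.zeros(n_labels + 1, dtype=np.int64)

    def feed(self, label_image, intensity_image):
        h, w = label_image.shape
        for y in range(h):
            for x in range(w):  # one update per pixel, as in the pipeline
                lab = label_image[y, x]
                if lab:
                    self.area[lab] += 1
                    self.sum_intensity[lab] += int(intensity_image[y, x])
                    self.sum_x[lab] += x
```

Ratios of such sums (e.g. integrated intensity over area) then yield per-object features for the subsequent classification.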
- the features that can be extracted fall into five general categories: (1) morphological features; (2) textural features; (3) colorometric features; (4) densitometric features; and (5) contextual features.
- Morphological features describe the general shape and size of the segmented objects.
- Textural features describe the distribution and inter-relation of light and dark levels within the segmented objects.
- Colorometric features pertain to the spectral properties of the segmented objects.
- Densitometric features describe the light intensities within the segmented objects.
- Contextual features establish the physical relationship between and among the segmented objects.
- the uplink transfer module 140 comprises an uplink buffer 141 and the uplink communications interface 142.
- the uplink buffer 141 stores the image data from the Levelled image bus 203c and the Segmentation bus 203b. Each image is written to a separate bank of memory in the buffer 141 as follows: all three levelled images, the cytoplasm labels, the nuclear labels and all the binary images. Once an image is stored, the image banks can be transmitted on request. Following the end of each image frame, the feature information is input from the feature bus 203d on the frame 189 and line 190 synchronization signals. This data is written into the buffer memory 141 in the same block as the images. The feature memory start row and number of rows are used to determine when the feature storage is complete. When all the feature data from all the feature cards has been stored, this data is automatically transmitted to the host computer 56 via the fiber optic communication interface 142. The image data is transmitted upon request by the host computer 56.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3142396P | 1996-11-21 | 1996-11-21 | |
US31423P | 1996-11-21 | ||
PCT/CA1997/000878 WO1998022909A1 (fr) | 1996-11-21 | 1997-11-20 | Processeur pipeline destine a l'analyse d'images medicales et biologiques |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1025546A1 true EP1025546A1 (fr) | 2000-08-09 |
Family
ID=21859381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP97913051A Ceased EP1025546A1 (fr) | 1996-11-21 | 1997-11-20 | Processeur pipeline destine a l'analyse d'images medicales et biologiques |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1025546A1 (fr) |
JP (1) | JP2001503897A (fr) |
AU (1) | AU741705B2 (fr) |
CA (1) | CA2271868A1 (fr) |
WO (1) | WO1998022909A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7486384B2 (en) | 2003-03-31 | 2009-02-03 | Asml Netherlands B.V. | Lithographic support structure |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000227316A (ja) | 1999-02-04 | 2000-08-15 | Keyence Corp | 検査装置 |
US7889233B2 (en) * | 2005-08-26 | 2011-02-15 | Nvidia Corporation | Video image processing with remote diagnosis and programmable scripting |
WO2007077672A1 (fr) | 2005-12-28 | 2007-07-12 | Olympus Medical Systems Corp. | Dispositif de traitement d'image et procede de traitement d'image dans le premier |
KR100995127B1 (ko) | 2008-05-28 | 2010-11-18 | 한국생명공학연구원 | 정보 분석프로세스 자동 설계 구현시스템 및 방법 |
KR100992169B1 (ko) | 2008-07-25 | 2010-11-04 | 한국생명공학연구원 | 정보 분석프로세스 추천 설계시스템 및 방법 |
WO2011146994A2 (fr) * | 2010-05-28 | 2011-12-01 | Magnepath Pty Ltd. | Procédé, plate-forme et système d'analyse d'images médicales |
US10405535B2 (en) * | 2014-05-05 | 2019-09-10 | University Of Southern Queensland | Methods, systems and devices relating to real-time object identification |
CN114324927B (zh) * | 2021-12-30 | 2023-03-24 | 精匠诊断技术(江苏)有限公司 | 一种流水线启动方法、系统、电子设备及介质 |
- 1997
- 1997-11-20 EP EP97913051A patent/EP1025546A1/fr not_active Ceased
- 1997-11-20 CA CA002271868A patent/CA2271868A1/fr not_active Abandoned
- 1997-11-20 WO PCT/CA1997/000878 patent/WO1998022909A1/fr not_active Application Discontinuation
- 1997-11-20 JP JP52303598A patent/JP2001503897A/ja active Pending
- 1997-11-20 AU AU50449/98A patent/AU741705B2/en not_active Ceased
Non-Patent Citations (1)
Title |
---|
See references of WO9822909A1 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7486384B2 (en) | 2003-03-31 | 2009-02-03 | Asml Netherlands B.V. | Lithographic support structure |
Also Published As
Publication number | Publication date |
---|---|
CA2271868A1 (fr) | 1998-05-28 |
AU741705B2 (en) | 2001-12-06 |
JP2001503897A (ja) | 2001-03-21 |
AU5044998A (en) | 1998-06-10 |
WO1998022909A1 (fr) | 1998-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0628187B1 (fr) | Procede et appareil de traitement de sequences de donnees | |
US4860375A (en) | High speed cellular processing system | |
US5793899A (en) | Vision coprocessing | |
US7489834B2 (en) | Method and apparatus for image processing | |
US8712162B2 (en) | Interest point detection | |
US4689823A (en) | Digital image frame processor | |
JP2016192224A (ja) | 全体画像内で注意の焦点に関する画像データを処理するためのシステムと方法 | |
AU747283B2 (en) | Data processing system for logically adjacent data samples such as image data in a machine vision system | |
AU741705B2 (en) | Pipeline processor for medical and biological image analysis | |
WO1999001809A2 (fr) | Procede et appareil destine a une architecture de jeux d'instruciton reduite pour effectuer le traitement d'images a plusieurs dimensions | |
US5978498A (en) | Apparatus for automated identification of cell groupings on a biological specimen | |
US5528705A (en) | JPEG synchronization tag | |
WO1998052016A1 (fr) | Systeme d'imagerie multispectrale et methode de cytologie | |
Graham et al. | The diff3™ Analyzer: A Parallel/Serial Golay Image Processor | |
Shipley et al. | Processing and analysis of neuroanatomical images | |
US20120327260A1 (en) | Parallel operation histogramming device and microcomputer | |
EP0457547B1 (fr) | Dispositif et procédé de reconnaissance d'information | |
CN113822838A (zh) | 碱基识别设备及碱基识别方法 | |
CN111723638A (zh) | 电子图像处理设备 | |
US7148976B1 (en) | Image capture and processing system health check | |
EP1433120A1 (fr) | Architecture pour le traitement d'images d'empreintes digitales | |
Sim et al. | Fast line detection using major line removal morphological Hough transform | |
Obaid | Efficient Implementation Of Sobel Edge Detection With ZYNQ-7000 | |
Drayer et al. | A high performance Micro Channel interface for real-time industrial image processing applications | |
Lougheed | Application of parallel processing for automatic inspection of printed circuits |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 19990621 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): DE FR GB IT |
|
17Q | First examination report despatched |
Effective date: 20001012 |
|
GRAG | Despatch of communication of intention to grant |
Free format text: ORIGINAL CODE: EPIDOS AGRA |
|
RTI1 | Title (correction) |
Free format text: PIPELINE PROCESSOR FOR MEDICAL AND BIOLOGICAL IMAGE ANALYSIS |
|
RTI1 | Title (correction) |
Free format text: PIPELINE PROCESSOR FOR MEDICAL AND BIOLOGICAL IMAGE ANALYSIS |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: VERACEL INC. |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20020704 |