CA1328019C - Method and apparatus for generating a plurality of parameters of an object in a field of view
- Publication number: CA1328019C (application CA574289A)
- Authority
- CA
- Canada
- Prior art keywords
- pixel
- representation
- value
- image
- location
- Prior art date
- Legal status: Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
Abstract of the Disclosure

A method and an apparatus for generating a plurality of parameters of an object in a field of view is disclosed. An electrical image of the field of view is formed. The electrical image is processed to form a plurality of different representations of the electrical image, where each different representation is a representation of a different parameter of the field of view. Positional information, which represents the boundary of the object, is generated. In response to the positional information that represents the boundary of the object being generated, corresponding locations in each of the different representations are traced. The different parameters from each of the different representations are calculated as the locations are traced in each of the different representations.
Description
METHOD AND APPARATUS FOR GENERATING A PLURALITY OF PARAMETERS OF AN OBJECT IN A FIELD OF VIEW
Technical Field

The present invention relates to a method and an apparatus for generating a plurality of parameters of an object in a field of view, and more particularly, to a method and apparatus where a plurality of parameters of an object in a field of view are determined in response to positional information representing the boundary of the object being provided.
Background Art

Microscopical image analysis is well known in the art. See, for example, U.S. Patent No. 4,097,845. The purpose of image analysis is to determine particular qualities of objects in the field of view. In particular, in the case of image analysis of microscopical samples such as biopsies, blood or urine, it is highly desirable to determine properties of the particles in view such as: area, mass density, shape, etc. However, in order to determine the particular parameter of the particles in the field of view, the boundary of the particles must first be located.
In U.S. Patent No. 4,097,845, a method of locating the boundary of particles is described using the technique of "neighbors of the neighbors".
U.S. Patent No. 4,060,713 also discloses an apparatus for processing two-dimensional data. In that reference, an analysis of the six nearest neighbors of an element is made.
U.S. Patent No. 4,538,299 discloses yet another method for locating the boundary of a particle in the field of view.
A urinalysis machine manufactured and sold by International Remote Imaging Systems, Inc. under the trademark The Yellow IRIS™ has used the teaching of the '299 patent to locate the boundary of a particle and thereafter to determine the area of the particle. However, The Yellow IRIS™ used the positional information of the boundary of a particle to determine only a single parameter of the particle. Further, The Yellow IRIS™ did not generate a representation that is a representation of the parameter of area in the field of view, which is separate and apart from the image containing the representation that has the boundary of the particle.
Summary of the Invention

A method and apparatus is provided for generating a plurality of parameters of an object in a field of view. The apparatus has imaging means for forming an
electrical image of the field of view. Means is provided for segmenting the electrical image to form a plurality of different representations of the electrical image wherein each different representation is a representation of a different parameter of the field of view. Generating means provides the positional information that represents the boundary of the object. Tracing means locates positions in each of the different representations in response to the positional information generated. Finally, calculating means provides the different parameters from each of the different representations based upon the locations traced in each of the different representations.
Brief Description of the Drawings

Figure 1 is a schematic block diagram of an imaging system of the present invention.
Figure 2 is a block diagram of the video image processor of the imaging system of the present invention, shown with a plurality of modules, and a plurality of data buses.
Figure 3 is a schematic block diagram of the portion of each module of the video image processor with communication means, and logic control means to interconnect one or more of the data buses to the module.
Figure 4 is a detail circuit diagram of one implementation of the logic unit shown in Figure 3.
Figures 5(a-c) are schematic block diagrams of various possible configurations connecting the modules to the data buses.
Figure 6 is a schematic block diagram of another embodiment of a video image processor shown with a plurality of data buses which can be electronically switched.
Figure 7 is a schematic block diagram of the portion of the video image processor shown in Figure 6 showing the logic unit and address decode unit and the switching means to electronically switch the data buses of the video image processor shown in Figure 6.
Figures 8(a-c) show various possible embodiments as a result of switching the data buses of the video image processor shown in Figure 6.
Figure 9 is a detail circuit diagram of a portion of the switch and logic unit of the video image processor shown in Figure 6.
Figure 10 is a schematic block diagram of the video processor module of the video image processor shown in Figures 2 or 6.
Figure 11 is a schematic block diagram of an image memory module of the video image processor shown in Figures 2 or 6.
Figure 12 is a schematic block diagram of a morphological processor module of the video image processor shown in Figures 2 or 6.
Figure 13 is a graphic controller module of the video image processor shown in Figures 2 or 6.
Figure 14 is a block schematic diagram of the master controller of the video image processor shown in Figures 2 or 6.
Figure 15 is a circuit diagram of another implementation of a logic unit.
Figure 16 is an example of a digitized image of the field of view with a particle contained therein.
Figure 17 is an example of the electrical image shown in Figure 16 processed to form a representation of the electrical image which is a representation of the area of the field of view.
Figure 18 is an example of the electrical image shown in Figure 16 processed to form a representation of the electrical image which is a representation of the integrated optical density of the field of view.
Figure 19 is an example of the electrical image shown in Figure 16 processed to form a first representation containing the boundary of the object in the field of view, in accordance with the method as disclosed in U.S. Patent No. 4,538,299.
Figure 20 is an example of the calculation of the area of the object in the field of view of the example shown in Figure 16 in accordance with the method of the present invention.
Figure 21 is the calculation of the integrated optical density of the object in the field of view of the example shown in Figure 16 in accordance with the method of the present invention.
Detailed Description of the Drawings

Referring to Figure 1 there is shown an imaging system 8 of the present invention. The imaging system 8 comprises a video image processor 10, which receives analog video signals from a color camera 12. The color camera 12 is optically attached to a fluorescent illuminator 14 which is focused through a microscope 16 and is directed at a stage 18. A source of illumination 20 provides the necessary electromagnetic radiation. The video image processor 10 communicates with a host computer 22. In addition, the host computer 22 has software 24 stored therein to operate it. Finally, a full color monitor display device 26 receives the output of the video image processor 10.

There are many uses for the video image processor 10. In the embodiment shown in Figure 1, the imaging system 8 is used to analyze biological specimens, such as biopsy material, or constituents of blood.
The biological specimen is mounted on a slide and is placed on the stage 18. The video image of the slide as taken by the color camera 12 through the microscope 16 is processed by the video image processor.

In the preferred embodiment, the host computer 22 is a Motorola 68000 microprocessor and communicates with the video image processor 10 via a Q-bus. The Q-bus is a standard communication protocol developed by Digital Equipment Corporation.
As shown in Figure 2, the video image processor 10 comprises a master controller 30 and a plurality of electronic digital modules. Shown in Figure 2 are a plurality of processor modules: the video processor 34, graphic controller processor 36, morphological processor 40; and a plurality of image memory modules: image memory modules 38A, 38B and 38C. The image memory modules store data which is representative of the video images. The processor modules process the data of the video images. The master controller 30 communicates with each one of the plurality of digital modules (34, 36, 38 and 40) via a control bus 32. In addition, the plurality of digital modules (34, 36, 38 and 40) communicate with one another via a plurality of data buses 42.
In the video image processor 10, the master controller 30 controls the operation of each one of the '' .
. , .
13~8~19 .
plurality of digital modules (34, 36, 38 and 40j by passing control signals along the control bus 32. The bus 32 comprises a plurality of lines. The bus 32 comprises 8 bit lines for address, 16 bit lines for data, 4 bit lines of control, one line for vertical sync and one line for horizontal sync. In addition, there are numerous power and ground lines. The 4 bits of control include a signal for clock, ADAV, CMD, and 'l WRT ~the function of these control signals will be ~ 10 described later).
; The plurality of data buses 42, which interconnect the modules (34, 36, 38 and 40) with one another, comprise nine 8 bit wide data buses 42. The nine data buses 42 are desiqnated as 42A, 42B, 42C, 42D, 42E, 42F, 42G, 42H, and 42I, respectively.
¦ Within each module (34, 36, 38 and 40) is a ! communication means 54. Further, within each module is a logic unit means 52 which is responsive to the control signals on the control bus 32 for connecting ¦ 20 the communication means 54 of each module to one or more o~ the data buses 42.
Referring to Figure 3 there is shown a schematic block diagram of the portion of each of the modules which is responsive to the control signals on the control bus 32 for interconnecting one or more of the data buses 42 to the communication means 54 within each of the modules. Shown in Figure 3 is an address decode circuit 50. The address decode circuit 50 is connected to the eight address lines of the control bus 32. The address decode circuit 50 also outputs a signal 56 which activates its associated logic unit 52. Since each logic unit 52 has a unique address, if the address presented on the address decode 50 matches the address for that particular logic unit 52, the address decode 50 would send a signal 56 to activate that logic unit 52. Within each module, there can be a plurality of logic units 52, each with an associated address decoder 50. Each of the plurality of logic units 52 can perform different tasks.
The logic unit 52 receives the 16 bits of data from the 16 bit data portion of the control bus 32. In addition, the logic unit 52 can also be connected to the four control lines: clock, ADAV, CMD, WRT, as previously described, of the control bus 32, and vertical sync and horizontal sync. The logic unit 52 will then control the operation of a plurality of tri-state transceivers 54A, 54B, 54C, 54D, 54E, 54F, 54G, 54H and 54I. It being understood that there are eight individual tri-state transceivers 54 for the group of tri-state transceivers 54A, and eight individual tri-state transceivers for the group of tri-state transceivers 54B, etc. The function of the tri-state transceivers 54 is to connect one or more of the data buses 42 to functions within the module of which the logic unit 52 and address decode circuit 50 is a part thereof. In addition, within the module, a cross-point switch 58 may be connected to all of the outputs of the tri-state transceivers 54 and multiplex the plurality of tri-state transceivers 54 into a single 8 bit wide bus 60.

Referring to Figure 4 there is shown a simplistic example of the address decoder 50, the logic unit 52, and one of the group of transceivers 54A interconnecting with the bus 42A. As previously stated, the eight address signal lines of the control bus 32 are supplied to the address decoder 50. If the address supplied on the address lines of the control bus 32 correctly decodes to the address of the logic unit 52, the address decoder 50 sends a signal 56 going high which is supplied to the logic unit 52. The address decode circuit 50 can be of conventional design.
Logic unit 52 comprises two AND gates 62A and 62B whose outputs are connected to J-K flipflops 64A and 64B respectively. The AND gates 62A and 62B receive at one of the inputs thereof the control signal 56 from the address decoder 50. The other input to the AND gates 62A and 62B are from the data lines of the control bus 32. If the address decoder 50 determines that the logic unit 52 is to be activated, as determined by the correct address on the address lines of the control bus 32, the control signal 56 going high gates in to the flipflops 64A and 64B the data present on the data lines of the control bus 32. The output of the J-K flipflops 64A and 64B are used to control the eight tri-state transceivers 54A0...54A7. Each of the eight tri-state transceivers has one terminal thereof connected to one of the eight bit communication paths of the bus 42A. The other terminal of each of the tri-state transceivers 54A is connected to electronic elements within the module.
The tri-state transceivers 54A, as the name suggests, have three states. The transceivers 54A can provide communication to the data bus 42A. The tri-state transceivers 54A can provide data communication from the data bus 42A. In addition, the tri-state transceivers 54A can be in the open position in which case no communication occurs to or from the data bus 42A. As an example, the tri-state transceivers 54A are components manufactured by Texas Instruments designated as 74AS620. These tri-state transceivers 54A receive two inputs. If the inputs have the combination of 0 and 1, they denote communication in one direction. If the tri-state transceivers receive the inputs of 1 and 0, they denote communication in the opposite direction. If the tri-state transceivers 54A receive 0 and 0 on both input lines, then the tri-state transceivers 54A are in the open position. Since the tri-state transceivers 54A0...54A7 are all switched in the same manner, i.e. either all eight lines are connected to the data bus 42A or they are not, the outputs of the flipflops 64A and 64B are used to control all eight transceivers to interconnect one of the data buses. The logic unit 52 can also comprise other flipflops and control gates to control other tri-state transceivers which are grouped in groups of eight to gang the switching of the selection of connection to one or more of the other data buses 42.
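The following is a minimal behavioral sketch, in software, of the control chain just described: an address decoder gates control-bus data into a latch, and the latched bits set the direction of a gang of eight tri-state transceivers. It is offered only as an illustration of the protocol; the class and signal names are assumptions, not part of the patent disclosure.

```python
# Behavioral sketch (not the patent's circuit): an address match raises
# signal 56, which gates the control-bus data lines into the J-K flipflop
# latch; the latched pair selects the transceivers' direction.

from enum import Enum

class Dir(Enum):
    OPEN = (0, 0)        # inputs 0,0: bus line disconnected
    TO_BUS = (0, 1)      # inputs 0,1: drive the module's data onto the bus
    FROM_BUS = (1, 0)    # inputs 1,0: receive data from the bus

class LogicUnit:
    def __init__(self, address):
        self.address = address
        self.direction = Dir.OPEN   # state held by the flipflops

    def control_cycle(self, addr_lines, data_lines):
        # The address decoder raises signal 56 only on an address match;
        # only then are the data lines gated into the latch.
        if addr_lines == self.address:
            self.direction = Dir(data_lines)

# Example: the master controller writes (0, 1) to the logic unit at the
# (hypothetical) address 0x2A, ganging transceivers 54A0..54A7 onto bus 42A.
unit = LogicUnit(address=0x2A)
unit.control_cycle(addr_lines=0x2A, data_lines=(0, 1))
assert unit.direction is Dir.TO_BUS
```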
Because the interconnection of one or more of the data buses 42 to one or more of the plurality of modules (34, 36, 38 and 40) is under the control of the control bus 32, the data paths for the connection of the data buses 42(A-I) can be dynamically reconfigured.

Referring to Figure 5a, there is shown one possible configuration with the dynamically reconfigurable data buses 42. Since each data bus 42 is 8 bits wide, the plurality of modules (34, 36, 38 and 40) can be connected to receive data from two data buses (e.g. 42A and 42B) simultaneously. This is data processing in the parallel mode in which 16 bits of data are simultaneously processed along the data bus. Thus, the data buses 42 can be ganged together to increase the bandwidth of data transmission.
Referring to Figure 5b, there is another possible configuration for the data buses 42. In this mode of operation, module 34 can transmit data on data bus 42A to module 36. Module 36 can communicate data with module 38 along the data bus 42B. Finally, module 38 can communicate with module 40 along the data bus 42C. In this mode, which is termed pipeline processing, data can flow from one module to another sequentially or simultaneously since data is flowing on separate and unique data buses.

Referring to Figure 5c, there is shown yet another possible configuration for the data buses 42. In this mode the operation is known as macro interleaving. If, for example, the module 34 is able to process or transmit data faster than the modules 36 or 38 can receive them, module 34 can send every odd data byte to module 36 along data bus 42A and every even data byte along bus 42B to the module 38. In this manner, data can be stored or processed at the rate of the fastest module. This is unlike the prior art where a plurality of modules must be operated at the speed of the slowest module.
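A small software sketch of the macro-interleaving idea may make the data flow concrete: a fast source alternates bytes between two slower consumers so that the pair keeps up with the source's full rate. The function names and the bus/module assignments in the comments are illustrative only.

```python
# Macro interleaving: split a byte stream between two half-rate consumers.

def macro_interleave(source_bytes):
    """Alternate bytes: one stream to bus 42A (module 36), one to 42B (module 38)."""
    bus_42a = source_bytes[0::2]   # every other byte, starting with the first
    bus_42b = source_bytes[1::2]   # the remaining bytes
    return bus_42a, bus_42b

def deinterleave(bus_42a, bus_42b):
    """Reassemble the original stream from the two half-rate streams."""
    out = []
    for a, b in zip(bus_42a, bus_42b):
        out += [a, b]
    out += bus_42a[len(bus_42b):]  # an odd-length stream leaves one trailing byte
    return out

stream = list(range(9))
a, b = macro_interleave(stream)
assert deinterleave(a, b) == stream   # each consumer sees half the data rate
```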
Thus, as can be seen by the examples shown in Figures 5a-5c, with a dynamically reconfigurable data bus structure, a variety of data transmission paths, including but not limited to those shown in Figures 5(a-c), can be dynamically and electronically reconfigured.
Referring to Figure 6, there is shown yet another embodiment of a video image processor 110. The video image processor 110, similar to the video image processor 10, comprises a master controller 130 and a plurality of digital modules 134, 136 (not shown), 138(A-B) and 140. These modules, similar to the modules 34, 36, 38 and 40, perform the respective tasks of image processing and image storing. The master controller 130 communicates with each one of the modules via a control bus 132. Each one of the modules 134-140 is also connected to one another by a plurality of data buses 42A-42I. Similar to the video image processor 10, there are nine data buses, each bus being 8 bits wide.

The only difference between the video image processor 110 and the video image processor 10 is that along each of the data buses 42 is interposed a switch means 154 controlled by a logic unit 152 which is activated by an address decode circuit 150. This is shown in greater detail in Figures 7 and 9. As can be seen in Figure 6, the switch means 154A...154I are interposed between the image memory module 138A and the image memory module 138B. That is, the switch means 154A...154I divide the data buses 42A...42I into two sections: the first section comprising the video processor module 134 and the image memory module 138A; the second section comprising the morphological processor 140 and the second image memory module 138B. The switch means 154 provide the capability of either connecting one part of the data bus 42A to the other part or leaving the data bus open, i.e. the data bus severed.
Referring to Figures 8a-8c, there are shown various configurations of the possible data bus structure that result from using the switch means 154A-154I.

Figure 8a shows nine data buses 42A-42I, wherein the switch means 154A, 154B and 154C connect the data buses 42A, 42B and 42C into one continuous data bus. However, the switch means 154D...154I are left in the open position, thereby severing the data buses 42D...42I into two portions. In this mode of operation, parallel processing can occur simultaneously using the data buses 42D...42I by the modules 134 and 138 and by the modules 138 and 140. In addition, serial or pipeline processing can occur along the data buses 42A...42C. As before, with the switch means 154A...154I dynamically selectable, total parallel processing as shown in Figure 8b or total pipeline processing as shown in Figure 8c are also possible. In addition, of course, other configurations, including but not limited to the macro interleave configuration of Figure 5c, are also possible.
Referring to Figure 7, there is shown a schematic block diagram of the electronic circuits used to control the data buses 42A...42I of the video image processor 110. As previously stated, a switch means 154 is interposed between the two halves of each data bus 42. Shown in Figure 7 is the switch means 154A interposed in the data bus 42A and the switch means 154I interposed in the data bus 42I. Each one of the switch means 154 is controlled by the logic unit 152 which is activated by the address decode circuit 150. Similar to the address decode circuit 50, the address decode 150 is connected to the eight address lines of the control bus 132. If the correct address is detected, the control signal 156 is sent to the logic unit 152. The control signal 156 activates the logic unit 152 which in turn activates one or more of the switch means 154.

Referring to Figure 9, there is shown a detailed simplistic schematic circuit diagram of the logic unit 152 and the switch means 154A. As can be seen, the logic unit 152 is identical to the logic unit 52. The switch means 154 (a tri-state transceiver) interconnects one half of one of the bus lines to the other half of the bus line 42. In all other respects, the operation of the switch means 154, logic unit 152, and the address decode circuit 150 is identical to that shown and described for the address decode circuit 50, logic unit 52, and switch means 54.
As previously stated, the reconfigurable data buses 42 interconnect the plurality of modules (34, 36, 38 and 40) to one another. The modules comprise a plurality of processor modules and a plurality of memory modules. With the exception of the communication means, logic unit and address decode circuit, the rest of the electronic circuits of each module for processing or storing data can be of conventional design. One of the processor modules 34 is the video processor module.
The video processor module 34 is shown in block diagram form in Figure 10. The video processor 34 receives three analog video signals from the color camera 12. The three analog video signals, comprising signals representative of the red, green, and blue images, are processed by a DC restoration analog circuit 60. Each of the resultant signals is then digitized by a digitizer 62. Each of the three digitized video signals is the analog video signal from the color camera 12, segmented to form a plurality of image pixels and with each image pixel digitized to form a greyscale value of 8 bits. The digitized video signals are supplied to a 6x6 cross-point matrix switch 64 which outputs the three digitized video signals onto three of the six data buses (42A-42F).

From the data buses 42A-42F, the digitized video signals can be stored in one or more of the image memory modules 38A-38C. The selection of a particular image memory module 38A-38C to store the digitized video signals is accomplished by the address decode circuit 50 connected to the logic unit 52 which activates the particular tri-state transceivers 54, all as previously described. The selection of which data bus 42 the digitized video images would be sent to is based upon registers in the logic unit 52 which are set by the control bus 32.
Each of the memory modules 38 contains three megabytes of memory. The three megabytes of memory is further divided into three memory planes: an upper plane, a middle plane, and a lower plane. Each plane of memory comprises 512 x 2048 bytes of memory. Thus, there is approximately one megabyte of memory per memory plane.

Since each digitized video image is stored in a memory space of 256 x 256 bytes, each memory plane has room for 16 video images. In total, a memory module has room for the storage of 48 video images. The address of the selection of the particular video image from the particular memory plane within each memory module is supplied along the control bus 32. As the data is supplied to or received from each memory module 38, via the data buses 42, it is supplied to or from the locations specified by the address set on the control bus 32. The three digitized video images from the video processor 34 are stored, in general, in the same address location within each one of the memory planes of each memory module.

Thus, the digital video signal representative of the red video image may be stored in the starting address location of x=256, y=0 of the upper memory plane; the digitized signal representative of the blue video image may be stored in x=256, y=0 of the middle memory plane; and the digital video signal representative of the green video image may be stored in x=256, y=0 of the lower memory plane.
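The following sketch works through the memory geometry just described. The text does not spell out how the 16 images are tiled within a 512 x 2048 plane, so the slot arithmetic below (a 2-wide by 8-tall grid of 256 x 256 tiles) is an assumption for illustration; the plane sizes and image counts are from the description above.

```python
# Sketch of the image memory module geometry (tiling layout assumed).

PLANE_W, PLANE_H = 512, 2048      # bytes per line, lines per plane
IMG_W, IMG_H = 256, 256           # one stored video image
PLANES = ("upper", "middle", "lower")

def slot_origin(slot):
    """Top-left (x, y) of image slot 0..15 within a plane (assumed 2x8 grid)."""
    cols = PLANE_W // IMG_W                     # 2 slots across
    return (slot % cols) * IMG_W, (slot // cols) * IMG_H

def byte_address(plane, x, y):
    """Linear byte offset of pixel (x, y) in the named plane of one module."""
    return PLANES.index(plane) * PLANE_W * PLANE_H + y * PLANE_W + x

# The red/blue/green images of one field of view share slot coordinates
# across the three planes, e.g. the x=256, y=0 slot used in the text:
assert slot_origin(1) == (256, 0)
for plane in PLANES:
    print(plane, hex(byte_address(plane, 256, 0)))
```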
Once the digital video signals representative of the digitized video images are stored in the memory planes of one or more memory modules 38, the digitized video images are operated upon by the morphological processor 40.

The morphological processor 40 receives data from the data buses 42A-42D and outputs data to the data buses 42E-42G. Further, the morphological processor 40 can receive input or output data to and from the data buses 42H and 42I. Referring to Figure 12, there is shown a schematic block diagram of the morphological processor 40. The morphological processor 40 receives data from data buses 42A and 42B which are supplied to a multiplexer/logarithmic unit 70. The output of the multiplexer/logarithmic unit 70 (16 bits) is either the data from the data buses 42A and 42B or is the logarithm thereof. The output of the multiplexer/logarithmic unit 70 is supplied as the input to the ALU 72, on the input port designated as b.
The ALU 72 has two input ports: a and b.

The morphological processor 40 also comprises a multiplier accumulator 74. The multiplier accumulator 74 receives data from the data buses 42C and 42D and from the data buses 42H and 42I respectively, and performs the operation of multiply and accumulate thereon. The multiplier accumulator 74 can perform the functions of 1) multiplying the data from (data bus 42C or data bus 42D) by the data from (data bus 42H or data bus 42I); or 2) multiplying the data from (data bus 42C or data bus 42D) by a constant as supplied from the master controller. The result of that calculation is outputted onto the data buses 42I, 42H and 42G. The result of the multiply accumulate unit 74 is that it calculates a Green's function kernel in realtime. The Green's function kernel is a summation of all the pixel values from the start of the horizontal sync to the then current pixel. This would be used subsequently in the calculation of other properties of the image.
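A behavioral sketch of the multiplier accumulator may help: on each pixel clock it multiplies its two inputs (or one input by a constant) and adds the product to a running total that resets at horizontal sync. With the constant set to 1, the running total is exactly the Green's function kernel just described. This is illustrative Python, not the hardware.

```python
# Per-pixel multiply-accumulate along one scan line.

def multiply_accumulate(line_a, line_b=None, constant=1):
    """Running sum of a*b (or a*constant) along one scan line."""
    totals, acc = [], 0              # acc clears at each horizontal sync
    for i, a in enumerate(line_a):
        b = line_b[i] if line_b is not None else constant
        acc += a * b                 # 32-bit multiply-accumulate in hardware
        totals.append(acc)
    return totals

# Green's function kernel for one line of greyscale values (constant = 1):
assert multiply_accumulate([3, 5, 2]) == [3, 8, 10]
# Product of two lines, accumulated pixel by pixel:
assert multiply_accumulate([1, 2], [4, 5]) == [4, 14]
```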
A portion of the result of the multiplier accumulator 74 (16 bits) is also inputted into the ALU 72, on the input port designated as a. The multiplier accumulator 74 can perform calculations of multiply and accumulate that are 32 bits in precision. The result of the multiplier accumulator 74 can be switched by the multiplier accumulator 74 to be the most significant 16 bits or the least significant 16 bits, and is supplied to the a input of the ALU 72.

The output of the ALU 72 is supplied to a barrel shifter 76 which is then supplied to a look-up table 78 and is placed back on the data buses 42E and 42F. The output of the ALU 72 is also supplied to a prime generator 80 which can also be placed back onto the data buses 42E and 42F. The function of the prime generator 80 is to determine the boundary pixels, as described in U.S. Patent No. 4,538,299.

The ALU 72 can also perform the function of subtracting data on the input port a from data on the input port b. The result of the subtraction is an overflow or underflow condition, which determines a>b or a<b. Thus, the pixel-by-pixel maximum and minimum for two images can be calculated.
Finally, the ALU 72 can perform histogram calculations. There are two types of histogram calculation. In the first type, the value of a pixel (a pixel value is 8 bits, i.e. between 0-255) selects the address of the memory 73. The memory location at the selected address is incremented by 1. In the second type, two pixel values are provided: a first pixel value of the current pixel location and a second pixel value at the pixel location of a previous line to the immediate left or to the immediate right (i.e. a diagonal neighbor). The pairs of pixel values are used to address a 64K memory (256 x 256) and the selected memory location is incremented. Thus, this histogram is texture related.
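A sketch of the two histogram modes follows. The first is an ordinary greyscale histogram; the second counts co-occurrences of each pixel with a diagonal neighbor on the previous line, which is why the text calls it texture related. The code illustrates only the addressing scheme described above.

```python
# The two histogram calculations of the ALU 72, in illustrative Python.

def value_histogram(image):
    """Type 1: pixel value (0-255) addresses a 256-entry count memory."""
    hist = [0] * 256
    for row in image:
        for value in row:
            hist[value] += 1
    return hist

def texture_histogram(image, offset=-1):
    """Type 2: (current pixel, diagonal neighbor on the previous line)
    addresses a 64K (256 x 256) count memory. offset=-1 uses the upper-left
    neighbor, offset=+1 the upper-right."""
    hist = [0] * (256 * 256)
    for y in range(1, len(image)):
        for x, value in enumerate(image[y]):
            if 0 <= x + offset < len(image[y - 1]):
                neighbor = image[y - 1][x + offset]
                hist[value * 256 + neighbor] += 1   # the pair forms the address
    return hist

img = [[0, 10], [10, 0]]
assert value_histogram(img)[10] == 2
assert texture_histogram(img)[0] == 1   # pixel 0 with upper-left neighbor 0
```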
In summary, the morphological processor 40 can perform the functions of addition, multiplication, multiplication with a constant, summation of a line, finding the pixel-by-pixel minimum and maximum for two images, prime generation, and also histogram calculation. The results of the morphological processor 40 are sent along the data buses 42 and stored in the image memory modules 38. The ALU 72 can be a standard 181 type, e.g. Texas Instruments part # ALS181. The multiplier accumulator 74 can be of conventional design, such as the Weitek WTL2245.
Referring to Figure 13, there is shown the graphic controller processor 36, in schematic block diagram form. The function of the graphic controller 36 is to receive processed digitized video images from the memory modules 38, graphic data, and alphanumeric data, and combine them for output. The data from the control bus 32 is supplied to an Advanced CRT controller 84. The CRT controller is a part made by Hitachi, part number HD63484. The output of the advanced CRT controller 84 controls a frame buffer 80. Stored within the frame buffer 80 are the graphics and alphanumeric data. The video images from the data buses 42A-42F are also supplied to the graphics controller processor 36. One of the data buses 42 is selected and that, combined with the output of the frame buffer 80, is supplied to a look-up table 82. The output of look-up table 82 is then supplied as the output to one of the data buses 42G, 42H or 42I. The function of the graphics control processor 36 is to overlay video, alpha and graphics information, which is then supplied through a D to A converter 86 to the monitor 26. In addition, the digital overlayed image can also be stored in one of the image memory modules 38.
The image which is received by the graphics control processor 36 from one of the image memory modules 38 is through one of the data buses 42A-42F. The control signals along the control bus 32 specify to the image memory module 38 the starting address, and the x and y offset with regard to vertical sync as to when the data from the image memory within that memory module 38 is to be outputted onto the data buses 42A-42F. Thus, split screen images can be displayed on the display monitor 26.
The master controller 30, as previously stated, communicates with the host computer 22 via a Q-bus. The master controller 30 receives address and data information from the host computer 22 and produces a 64 bit microcode. The 64 bit microcode can be from the writable control store location of the host computer 22 and is stored in WCS 90, or it can be from the proxy prom 92. The control program within the proxy prom 92 is used upon power up, as WCS 90 contains volatile RAM. The 64 bit microcode is processed by the 29116 ALU 94 of the master controller 30. The master controller 30 is of the Harvard architecture in that separate memory exists for instruction as well as for data. Thus, the processor 94 can get instruction and data simultaneously. In addition, the master controller 30 comprises a background sequencer 96 and a foreground sequencer 98 to sequence series of program instructions stored in the writable control store 90 or the proxy prom 92. The Q-bus memory map from which the master controller 30 receives its writable control store and its program memory is shown below:

ADDRESS (HEXADECIMAL)   Use
3FFFFF - 3FE000         BS7 (Block 7, conventional Digital Equipment Corp. nomenclature)
3FDFFF - 3FA000         Scratch Pad
387FFF - 380000         Writable Control Store
37FFFF - 280000         Image Memory Window
1FFFFF - 0              Host Computer Program Memory
In addition, the control signals ADAV, CMD and WRT have the following uses:

ADAV  CMD  WRT   Use
0     X    X     Quiescent Bus
1     1    0     Read Register
1     1    1     Write Register
1     0    0     Read Image Memory
1     0    1     Write Image Memory

The master controller 30 operates synchronously with each one of the modules 34, 36, 38 and 40 and asynchronously with the host computer 22. The clock signal is generated by the master controller 30 and is sent to every one of the modules 34, 36, 38 and 40. In addition, the master controller 30 starts the operation of the entire sequence of video image processing and video image storing upon the start of vertical sync.
Thus, one of the signals to each of the logic units 52 is a vertical sync signal. In addition, horizontal sync signals may be supplied to each one of the logic units. The logic units may also contain logic memory elements that switch their respective tri-state transceivers at prescribed times with respect to the horizontal sync and the vertical sync signals.
Referring to Figure 15, there is shown a schematic diagram of another embodiment of a logic unit 252. The logic unit 252 is connected to a first address decode circuit 250 and a second address decode circuit 251. The logic unit 252 comprises a first AND gate 254, a second AND gate 256, a counter 258 and a vertical sync register 260.

Prior to the operation of the logic unit 252, the first address decode circuit 250 is activated, loading the data from the data lines of the control bus 32 into the counter 258. Thereafter, when the second address decode circuit 251 is activated and a vertical sync signal is received, the counter 258 counts down on each clock pulse received. When the counter 258 reaches zero, the tri-state registers 64A and 64B are activated.
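A behavioral sketch of this delayed-switching logic unit follows: a preloaded count is decremented on every clock after vertical sync, and the transceivers switch only when the count reaches zero, letting a module join a bus a fixed number of clocks into a frame. The class and method names are illustrative assumptions.

```python
# Sketch of the counter-based logic unit 252 (software model, not hardware).

class DelayedSwitch:
    def __init__(self):
        self.count = 0
        self.armed = False
        self.transceivers_active = False

    def load(self, value):          # first address decode: preload the counter
        self.count = value

    def vertical_sync(self):        # second decode + vsync: start counting
        self.armed = True

    def clock(self):                # one clock pulse
        if self.armed and self.count > 0:
            self.count -= 1
            if self.count == 0:
                self.transceivers_active = True   # registers 64A/64B fire

sw = DelayedSwitch()
sw.load(3)
sw.vertical_sync()
for _ in range(3):
    sw.clock()
assert sw.transceivers_active
```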
It should be emphasized that the master controller 30, each one of the processing modules 34, 36 and 40 and each one of the image memory modules 38 can be of conventional design. The master controller 30 controls the operation of each one of the modules along a separate control bus 32. Further, each of the modules communicates with one another by a plurality of data buses 42. The interconnection of each one of the modules (34-40) with one or more of the data buses 42 is accomplished by means within the module (34-40) which is controlled by the control signals along the control bus 32. The interconnection of the data buses 42 to the electronic functions within each of the modules is as previously described. However, the electronic functions within each of the modules, such as memory storage or processing, can be of conventional architecture and design.
In the apparatus 8 of the present invention, an image of the field of view as seen through the microscope 16 is captured by the color camera 12. The color camera 12 converts the image in the field of view into an electrical image of the field of view. In reality, three electrical images are converted. The electrical images from the color camera 12 are processed by the image processor 10 to form a plurality of different representations of the electrical image. Each different representation is a representation of a different parameter of the field of view. One representation is the area of interest. Another representation is the integrated optical density.

Referring to Figure 16 there is shown an example of the digitized electrical signal representative of the electrical image of the field of view. The digitized image shown in Figure 16 is the result of the output of the video processor module 34, which segments and digitizes the analog signal from the color camera 12. (For the purpose of this discussion, only one electrical image of the field of view will be discussed. However, it is readily understood that there are three video images, one for each color component of the field of view.) As shown in Figure 16, each pixel point has a certain amplitude representing the greyscale value. The object in the field of view is located within the area identified by the line 200. Line 200 encloses the object in the field of view.
As previously stated, the image processor 10, and more particularly the morphological processor module 40, processes the digitized video image to form a plurality of different processed digitized video images, with each different processed digitized video image being a different representation of the digitized video image.

One representation of the electrical image shown in Figure 16 is shown in Figure 17. This is the representation which represents a Green's function kernel for the area of the image in the field of view. In this representation, a number is assigned to each pixel location with the numbers being sequentially numbered starting from left to right. While Figure 17 shows the pixel at the location X=0, Y=0 (as shown in Figure 16) as being replaced by the number 1 and the numbers being sequential therefrom, any other number can also be used. In addition, the number assigned to the beginning pixel in each line can be any number, so long as each successive pixel, in the same line, differs from the preceding pixel by the number 1.

Another representation of the electrical image of the example shown in Figure 16 is a Green's function kernel for the integrated optical density of the image in the field of view, as shown in Figure 18. In this representation, each pixel location P(Xm, Yn) is assigned a number P'(Xm, Yn) which is calculated as follows:

    P'(Xm, Yn) = Σ (i = 1 to m) P(Xi, Yn)

As previously discussed, the morphological processor 40 is capable of calculating a Green's function kernel for the integrated optical density "on the fly" or in "realtime".
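The following sketch builds the two kernel representations of Figures 17 and 18 from a small digitized greyscale image P. The area kernel numbers pixels sequentially within each line; the integrated optical density (IOD) kernel is the per-line running sum given by the formula above. This is illustrative code, not the hardware's realtime computation.

```python
# Building the Figure 17 and Figure 18 representations in software.

def area_kernel(image):
    """Figure 17 style: each line numbered 1, 2, 3, ... left to right."""
    width = len(image[0])
    return [[x + 1 for x in range(width)] for _ in image]

def iod_kernel(image):
    """Figure 18 style: P'(Xm, Yn) = sum of P(Xi, Yn) for i = 1..m."""
    kernel = []
    for row in image:
        acc, out = 0, []
        for value in row:
            acc += value
            out.append(acc)
        kernel.append(out)
    return kernel

P = [[3, 5, 2],
     [4, 4, 1]]
assert area_kernel(P) == [[1, 2, 3], [1, 2, 3]]
assert iod_kernel(P) == [[3, 8, 10], [4, 8, 9]]
```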
The video image processor 10 also receives the electrical image from the color camera 12 and generates positional information that represents the boundary of the object contained in the field of view. One method of calculating the positional information that represents the boundary of the object in view is disclosed in U.S. Patent No. 4,538,299. As disclosed in the '299 patent, the digitized greyscale value (e.g. the image in Figure 16) is compared to a pre-set threshold value such that, as a result of the comparison, if the greyscale at the pixel location of interest exceeds the pre-set threshold value, then the value "1" is assigned to that pixel location. At all other locations, if the greyscale value of the pixel of interest is below the pre-set threshold value, then a "0" is assigned at that location. As a result, the digital video image is converted to a representation where a value of "1" is assigned where there is an object and a value of "0" is assigned at locations which are outside the boundary of the object. An example of the conversion of the image shown in Figure 16 by this method is the representation shown in Figure 19.
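The thresholding step just described is simple enough to show directly; greyscale pixels above the pre-set threshold become 1 (object) and all others become 0 (background), yielding the binary representation of Figure 19. The threshold value used here is chosen only for illustration.

```python
# Greyscale image -> 0/1 object mask, per the '299 thresholding step.

def to_binary(image, threshold):
    """Assign 1 where the greyscale exceeds the threshold, else 0."""
    return [[1 if value > threshold else 0 for value in row] for row in image]

grey = [[0, 12, 14, 0],
        [0, 11, 15, 9]]
assert to_binary(grey, threshold=8) == [[0, 1, 1, 0],
                                        [0, 1, 1, 1]]
```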
Thereafter, and in accordance with the '299 patent, the representation, as shown in Figure 19, is converted to a third representation by assigning a value to a pixel with a location (X,Y) in accordance with

    P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h

where a,b,c,d,e,f,g,h are the values of the eight nearest neighbors surrounding pixel (X,Y) in accordance with

    g d h
    c (X,Y) a
    f b e

This can be done by the prime generator 80 portion of the morphological processor 40.

Finally, in accordance with the '299 patent, this third representation is scanned until a first non-zero P(X,Y) value is reached. The P(X,Y) value is compared along with an input direction value to a look-up table to determine the next location of the non-zero value of P(X,Y), forming a chaining code. In accordance with the teaching of the '299 patent, positional information showing the location of the next pixel which is on the boundary of the object in the field of view is then generated. This positional information takes the form of Delta X = +1, 0, or -1 and Delta Y = +1, 0, or -1.
This generated positional information is also supplied to trace the locations in each of the other different representations.
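A sketch of the prime generator's neighbor encoding follows: each pixel of the binary mask is replaced by an 8-bit value packing its eight neighbors a..h, with the weights and arrangement given above. The '299 patent's look-up table, which turns these codes plus an input direction into the Delta X / Delta Y steps, is not reproduced here.

```python
# P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h

def neighbor_code(mask, x, y):
    """Pack the eight neighbors of mask pixel (x, y) into one byte."""
    def at(dx, dy):
        ny, nx = y + dy, x + dx
        if 0 <= ny < len(mask) and 0 <= nx < len(mask[0]):
            return mask[ny][nx]
        return 0                     # treat pixels off the image as background
    a = at(+1, 0)   # right                 g d h
    b = at(0, +1)   # below                 c . a
    c = at(-1, 0)   # left                  f b e
    d = at(0, -1)   # above
    e = at(+1, +1)  # below-right
    f = at(-1, +1)  # below-left
    g = at(-1, -1)  # above-left
    h = at(+1, -1)  # above-right
    return (a << 7) | (b << 6) | (c << 5) | (d << 4) | \
           (e << 3) | (f << 2) | (g << 1) | h

mask = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
# For the pixel at (1, 0): only the neighbor below (b) is set.
assert neighbor_code(mask, 1, 0) == 1 << 6
```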
For example, if the first value of the boundary scanned out is X=4, Y=1 (as shown in Figure 19), that positional information is supplied to mark the locations in the representations shown in Figures 17 and 18, thereby marking the start of the boundary of the object in those representations. Thus, in Figure 17, the pixel location having the value 13 is initially chosen. In Figure 18, the pixel location having the value 44 is initially chosen.
In accordance with the teaching of the '299 patent, the next positional information generated which denotes the next pixel location that is on the boundary of the object in the field of view would be Delta X=+1, Delta Y=+1. This would bring the trace to the location X=5, Y=2. That positional information is also supplied to the representation for the area, shown in Figure 17, and to the representation denoting the integrated optical density, as shown in Figure 18. The trace caused by the positional information would cause the representation in Figure 17 to move to the pixel location X=5, Y=2, where the pixel has the value 22. Similarly, in Figure 18, the trace would cause the pixel location X=5, Y=2, the pixel having the value 76, to be traced. As the boundary of the object is traced, the same positional information is supplied to the other representations denoting other parameters of the images of the field of view - which inherently do not have information of the boundary of the object in the field of view.

It should be emphasized that although the method and the apparatus heretofore describe the positional information as being supplied by the teaching disclosed in the '299 patent, the present invention is not necessarily limited to positional information based upon the '299 patent teaching. In fact, any source of positional information can be used with the method and apparatus of the present invention, so long as that information denotes the position of the boundary of the object in the field of view.
As the boundary of the object in the field of view is traced out in each of the different representations that represent the different parameters of the object in the field of view, the different parameters are calculated.

For example, to calculate the area of the object in the field of view, one takes the positional information and determines the value of the pixel at that location. Thus, the first pixel would have the value 13. Except for the first pixel, the location of the current pixel (Xi,Yi) is compared to the location of the previously traced pixel (Xj,Yj) such that if Yi is less than Yj then the present value P'(Xi,Yi) is added to the value A. If Yi is greater than Yj then the present value P'(Xi-1,Yi) is added to B. B is subtracted from A to derive the area of the object in view. The calculation is shown in Figure 20.
Similarly, for the calculation of the integrated optical density, if the present pixel location (Xi,Yi) compared to the previously traced pixel location (Xj,Yj) is such that Yi is less than Yj, then P'(Xi,Yi) is added to A. If Yi is greater than Yj then P'(Xi-1,Yi) is added to B. B is subtracted from A to derive the integrated optical density of the object. This is shown in Figure 21.
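Both Figure 20 and Figure 21 follow the same accumulation procedure, sketched below: walk the boundary, add the kernel value at (Xi, Yi) to A whenever the trace moves up a line, add the kernel value at (Xi-1, Yi) to B whenever it moves down, and take A - B. Fed the Figure 17 kernel it yields the area; fed the Figure 18 kernel it yields the integrated optical density. The handling of the first traced pixel is kept minimal here for illustration.

```python
# Generic boundary-trace accumulation for Figures 20 and 21.

def trace_parameter(kernel, boundary):
    """kernel: 2-D list (Figure 17 or 18 style); boundary: traced (x, y) points."""
    a = b = 0
    prev_y = None
    for x, y in boundary:
        if prev_y is not None:
            if y < prev_y:          # trace moved up a line: add to A
                a += kernel[y][x]
            elif y > prev_y:        # trace moved down a line: add to B
                b += kernel[y][x - 1]
        prev_y = y
    return a - b

# Usage, assuming the kernels and a traced boundary from the earlier sketches:
# area = trace_parameter(area_kernel(P), boundary_points)
# iod  = trace_parameter(iod_kernel(P), boundary_points)
```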
There are many advantages to the method and apparatus of the present invention. First and foremost is that as the positional information regarding the boundary of an object in view is provided, multiple parameters of that object can be calculated based upon different representations of the image of the field of view containing the object - all of which representations do not inherently contain any positional information regarding the location of the boundary of the object in the field of view. Further, with the video image processor described, such different parameters can be calculated simultaneously, thereby greatly increasing image processing throughput.
~ igures 5(a-c) are schematic block diagrams of ~arious possible configurations connecting the modules .; 5 to the data buses.
Figure 6 is a schematic block diagram of another embodiment of a video image processor shown with a , plurality of data buses which can be electronically 7 switched.
~igure 7 i6 a 6chemat~c block diagram of the portion of the video image processor shown in Figure 6 showing the logic unit and address decode unit and the switching means to electronically switch the data buses of the video image processor 6hown in Figure 6.
,l 15 Figure 8(a-c) show various possible embodiments as a result oi switching the data buses of the video image ~ processor shown in Figure 6.
.; Figure 9 i6 a detail circuit diagram of a portion of the 6witch and logic unit of the video image proc~ssor 6hown in Figure 6.
j Figure 10 is a schematic block diagram of the video processor module oi the video image processor ~, shown in Figures 2 or 6~
., Figure 11 is a schema~ic block diagram of an image . .
I 25 memory module of the video image processor shown in I Figures 2 or 6.
s . ~ ' : 1328019 ;,~ Figure 12 is a schematic block diagram of a morphological processor ~odule of the video image processor shown in Figures 2 or 6.
;i, Figure 13 i5 a graphic controller module of the S video image processor 6hown in Figures 2 or 6.
Figure 14 i6 a block schematic diagram of the master controller of the video image processor shown in Figures 2 or 6.
Figure 15 is a circuit diagram of another implementation of a logic unit.
Figure 16 is an example of a digitized image of the field of view with a particle contained therein.
Figure 17 is an example of the electrical image shown in Figure 16 processed to form a representation lS of the electrical image which i8 a repre~entation of the area of the field of view.
Flgure 18 i8 an example of the electrical image ~hown in Figure 16 processed to form a representation ~! of the electrical image which is a representation of the in~egrated optical density of the field of view.
Figure 19 i6 an example of the electrical ~mage ¦ shown in ~igure 16 processed to form a first 'I representation containing the boundary of the ob~ect in ;j the field of view, in accordance with the method as ~ 25 disclosed in U.S. Patent No. 4,538,299.
-13~8019 " --6--.
Figure 20 i6 an example of the calculation of the area of the object in the field of view of the example hown in Figure 16 in accordance with the ~et~od of the present invention.
Figure 21 is the calculation of the integrated optical density of the object in the field of view of ; the example shown in Figure 16 in accordance with the method of the present invention.
Detailed Description of the Drawinas Referrinq to Figure 1 there is shown an imaging system 8 of the present invention. The imaging system 8 comprises a video image processor 10, which receives analog video signals from a color camera 12. The color camera 12 i6 optically attached to a fluorescent $11uminator 14 which is focused through a microscope 16 and i8 directed at a stage 18. A source of illumination 20 provides the necessary electromagnetic ~-j radiation. The video imaging processor 10 communicates with a hoct computer 22. In addition, the host computer 22 has software 24 stored therein to operate lt. Finally, a full color monitor display device 26 receives the output of the video image processor 10.
There are many uses for the video image processor 10. In the embodiment shown in Figure 1, the imaging ~ystem 8 is used to analyze biological specimens, such ! ~s biopsy material, or constituents of blood. The ,' :;`
13280~9 ~" .
. .
; biological ~pecimen is mounted 9n a Glide and is placed -' on the stage 18. The video image of the slide as taken ; by the color camera 12 through the ~icroscope 16 is processed by the video image processor.
In preferred embodiment, the host computer 22 is a Motorola 68000 microprocessor and communicate~ with the ~' video image processor 10 via a Q-bus. The Q-bus is a standard comm,unication protocol developed by Digital ;~ Equipment Corporation.
As shown in Figure 2, the video image processor 10 comprises a master controller 30 and a plurality of electronic digital modules. Shown in Figure 2 are a plurality of processor modules: the video processor 34, graphic controller processor 36, morphological processor 40, and a plurality of image memory modules:
image memory modules 38a, 38b and 38c. The image memory modules store data which is representative of j the video images. The processor modules process the sl data or the video images. The master controller 30 communicates with each one of the plurality of digital modules (34, 36, 38 and 40) via a control bus 32. In ~dd~tion, the plurality of digital modules (34, 36, 38 and 40) co~municate with one another via a plurality of data buses 42.
In the video image processor 10, the master controller 30 controls the operation of each one of the '' .
. , .
13~8~19 .
plurality of digital modules (34, 36, 38 and 40j by passing control signals along the control bus 32. The bus 32 comprises a plurality of lines. The bus 32 comprises 8 bit lines for address, 16 bit lines for data, 4 bit lines of control, one line for vertical sync and one line for horizontal sync. In addition, there are numerous power and ground lines. The 4 bits of control include a signal for clock, ADAV, CMD, and 'l WRT ~the function of these control signals will be ~ 10 described later).
; The plurality of data buses 42, which interconnect the modules (34, 36, 38 and 40) with one another, comprise nine 8 bit wide data buses 42. The nine data buses 42 are desiqnated as 42A, 42B, 42C, 42D, 42E, 42F, 42G, 42H, and 42I, respectively.
¦ Within each module (34, 36, 38 and 40) is a ! communication means 54. Further, within each module is a logic unit means 52 which is responsive to the control signals on the control bus 32 for connecting ¦ 20 the communication means 54 of each module to one or more o~ the data buses 42.
Referr$ng to Figure 3 there is shown a schematic block diagram of the portion of each of the modules which is responsive to the control signals on the control bus 32 for interconnecting one or more of the data buses 42 to the communication means 54 within each '.
5, of the modules. Shown in Figure 3 is an address decode circuit 50. Th~ address decode circuit 50 is connected v to the eight ~ddress lines of the control bus 32. The ~ddress decode circuit 50 also outputs a signal 56 which activates its associated logic unit 520 Since each logic unit 52 has a unique ~ddress, if the address lines present on ths ~ddre6s decode 50 matches the address for that particular logic unit 52, the address decode 50 would send a signal 56 to activate that logic unit 52. Within each module, there can be a plurality ¦ of logic units 52 each with an associated address decoder 50. Each of the plurality of logic units 52 can perform different tasks.
The logic unit 52 receives the 16 bits of data from the data portion of the control bus 32. In addition, the logic unit 52 can also be connected to the four control lines (clock, ADAV, CMD and WRT, as previously described) of the control bus 32, and to vertical sync and horizontal sync. The logic unit 52 will then control the operation of a plurality of tri-state transceivers 54A, 54B, 54C, 54D, 54E, 54F, 54G, 54H and 54I, it being understood that there are eight individual tri-state transceivers 54 for the group of tri-state transceivers 54A, eight individual tri-state transceivers for the group of tri-state transceivers 54B, etc. The function of the tri-state transceivers 54 is to connect one or more of the data buses 42 to functions within the module of which the logic unit 52 and address decode circuit 50 are a part. In addition, within the module, a cross-point switch 58 may be connected to all of the outputs of the tri-state transceivers 54 to multiplex the plurality of tri-state transceivers 54 onto a single 8 bit wide bus 60.

Referring to Figure 4, there is shown a simplistic example of the address decoder 50, the logic unit 52, and one of the group of transceivers 54A
interconnecting with the bus 42A. As previously stated, the eight address signal lines of the control bus 32 are supplied to the address decoder 50. If the address supplied on the address lines of the control bus 32 correctly decodes to the address of the logic unit 52, the address decoder 50 sends a signal 56 going high which is supplied to the logic unit 52. The address decode circuit 50 can be of conventional design.
Logic unit 52 comprises two AND gates 62A and 62B whose outputs are connected to J-K flipflops 64A and 64B, respectively. The AND gates 62A and 62B receive at one of their inputs the control signal 56 from the address decoder 50. The other input to the AND gates 62A and 62B is from the data lines of the control bus 32. If the address decoder 50 determines that the logic unit 52 is to be activated, as determined by the correct address on the address lines of the control bus 32, the control signal 56 going high gates into the flipflops 64A and 64B the data present on the data lines of the control bus 32. The outputs of the J-K flipflops 64A and 64B are used to control the eight tri-state transceivers 54A0...54A7. Each of the eight tri-state transceivers has one terminal connected to one of the eight bit communication paths of the bus 42A. The other terminal of each of the tri-state transceivers 54A is connected to electronic elements within the module.
The tri-state transceivers 54A, as the name suggests, have three states. The transceivers 54A can provide data communication to the data bus 42A. The tri-state transceivers 54A can provide data communication from the data bus 42A. In addition, the tri-state transceivers 54A can be in the open position, in which case no communication occurs to or from the data bus 42A. As an example, the tri-state transceivers 54A are components manufactured by Texas Instruments designated as 74AS620. These tri-state transceivers 54A receive two inputs. If the inputs have the combination 0 and 1, they denote communication in one direction. If the tri-state transceivers receive the inputs 1 and 0, they denote communication in the opposite direction. If the tri-state transceivers 54A receive 0 and 0 on both input lines, then the tri-state transceivers 54A are in the open position. Since the tri-state transceivers 54A0...54A7 are all switched in the same manner, i.e. either all eight lines are connected to the data bus 42A or they are not, the outputs of the flipflops 64A and 64B are used to control all eight transceivers to interconnect one of the data buses. The logic unit 52 can also comprise other flipflops and control gates to control other tri-state transceivers which are grouped in groups of eight to gang the switching of the selection of connection to one or more of the other data buses 42.
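As an editorial illustration only (the pairing of the two flipflop outputs to bus states is an assumption, not taken from the patent), the three transceiver states can be modeled as:

```python
# Hypothetical decode of the two flipflop outputs that steer one group of
# eight transceivers; the bit-pair-to-direction mapping is an assumption.
def transceiver_state(q_a, q_b):
    return {
        (0, 1): "drive onto data bus 42A",    # module -> bus
        (1, 0): "receive from data bus 42A",  # bus -> module
        (0, 0): "open",                       # isolated; no communication
    }.get((q_a, q_b), "combination not used in the text")

print(transceiver_state(0, 0))  # open
```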
Because the interconnection of one or more of the data buses 42 to one or more of the plurality of modules (34, 36, 38 and 40) is under the control of the control bus 32, the data paths for the connection of the data buses 42 (A-I) can be dynamically reconfigured.
Referring to Figure 5a, there is shown one possible configuration with the dynamically reconfigurable data buses 42. Since each data bus 42 is 8 bits wide, the plurality of modules (34, 36, 38 and 40) can be connected to receive data from two data buses (e.g. 42A and 42B) simultaneously. This is data processing in the parallel mode, in which 16 bits of data are simultaneously processed along the data bus.
Thus, the data buses 42 can be ganged together to increase the bandwidth of data transmission.
Referring to Figure 5b, there is another possible configuration for the data buses 42. In this mode of operation, module 34 can transmit data on data bus 42A to module 36. Module 36 can communicate data with module 38 along the data bus 42B. Finally, module 38 can communicate with module 40 along the data bus 42C. In this mode, which is termed pipeline processing, data can flow from one module to another sequentially or simultaneously, since data is flowing on separate and unique data buses.
Referring to Figure 5c, there is shown yet another possible configuration for the data buses 42. In this mode the operation is known as macro interleaving. If, for example, the module 34 is able to process or transmit data faster than the modules 36 or 38 can receive it, module 34 can send every odd data byte to module 36 along data bus 42A and every even data byte along bus 42B to the module 38. In this manner, data can be stored or processed at the rate of the fastest module. This is unlike the prior art, where a plurality of modules must be operated at the speed of the slowest module.
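A minimal sketch of this macro interleave mode; the function and the odd/even split convention are illustrative, not taken from the patent:

```python
# A fast producer (module 34) alternates bytes between two slower
# consumers: odd-numbered bytes to module 36 on bus 42A, even-numbered
# bytes to module 38 on bus 42B.
def macro_interleave(data):
    bus_42a = data[0::2]  # 1st, 3rd, 5th ... byte to module 36
    bus_42b = data[1::2]  # 2nd, 4th, 6th ... byte to module 38
    return bus_42a, bus_42b

a, b = macro_interleave(bytes(range(8)))
# interleaving the two streams back together restores the original order
assert bytes(v for pair in zip(a, b) for v in pair) == bytes(range(8))
```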
Thus, as can be seen by the examples shown in Figures 5a-5c, with a dynamically reconfigurable data bus structure, a variety of data transmission paths, including but not limited to those shown in Figures 5(a-c), can be dynamically and electronically reconfigured.
Referring to Figure 6, there is shown yet another embodiment of a video image processor 110. The video image processor 110, similar to the video image processor 10, comprises a master controller 130 and a plurality of digital modules 134, 136 (not shown), 138 (A-B) and 140. These modules, similar to the modules 34, 36, 38 and 40, perform the respective tasks of image processing and image storing. The master controller 130 communicates with each one of the modules via a control bus 132. Each one of the modules 134-140 is also connected to one another by a plurality of data buses 42A-42I. Similar to the video image processor 10, there are nine data buses, each bus being 8 bits wide.
The only difference between the video image processor 110 and the video image processor 10 is that along each of the data buses 42 is interposed a switch means 154 controlled by a logic unit 152 which is activated by an address decode circuit 150. This is shown in greater detail in Figures 7 and 9. As can be seen in Figure 6, the switch means 154A...154I are interposed between the image memory module 138A and the image memory module 138B. That is, the switch means 154A...154I divide the data buses 42A...42I into two sections: the first section comprising the video processor module 134 and the image memory module 138A; the second part comprising the morphological processor 140 and the second image memory module 138B. The switch means 154 provide the capability of either connecting one part of the data bus 42A to the other part or leaving the data bus open, i.e. the data bus severed.
Referring to Figures 8a-8c, there are shown various configurations of the possible data bus structure that result from using the switch means 154A-154I.
Figure 8a shows nine data buses 42A-42I, wherein the switch means 154A, 154B and 154C connect the data buses 42A, 42B and 42C into one continuous data bus. However, the switch means 154D...154I are left in the open position, thereby severing the data buses 42D...42I into two portions. In this mode of operation, parallel processing can occur simultaneously using the data buses 42D...42I by the modules 134 and 138 and by the modules 138 and 140. In addition, serial or pipeline processing can occur along the data buses 42A...42C.
As before, with the switch means 154A...154I dynamically selectable, total parallel processing as shown in Figure 8b or total pipeline processing as shown in Figure 8c is also possible. In addition, of course, other configurations, including but not limited to the macro interleave configuration of Figure 5c, are also possible.
Referring to Figure 7, there is shown a schematic block diagram of the electronic circuits used to control the data buses 42A...42I of the video image processor 110. As previously stated, a switch means 154 is interposed between two halves of each data bus 42. Shown in Figure 7 are the switch means 154A interposed in the data bus 42A and the switch means 154I interposed in the data bus 42I. Each one of the switch means 154 is controlled by the logic unit 152 which is activated by the address decode circuit 150. Similar to the address decode circuit 50, the address decode circuit 150 is connected to the eight address lines of the control bus 132. If the correct address is detected, the control signal 156 is sent to the logic unit 152. The control signal 156 activates the logic unit 152 which in turn activates one or more of the switch means 154.
Referring to Figure 9, there is shown a detailed simplistic schematic circuit diagram of the logic unit 152 and the switch means 154A. As can be seen, the logic unit 152 is identical to the logic unit 52. The switch means 154 (a tri-state transceiver) interconnects one half of one of the bus lines to the other half of the bus line 42. In all other respects, the operation of the switch means 154, logic unit 152, and the address decode circuit 150 is identical to that shown and described for the address decode circuit 50, logic unit 52, and switch means 54.
As previously stated, the reconfigurable data buses 42 interconnect the plurality of modules (34, 36, 38 and 40) to one another. The modules comprise a plurality of processor modules and a plurality of memory modules. With the exception of the communication means, logic unit and address decode circuit, the rest of the electronic circuits of each module for processing or storing data can be of conventional design. One of the processor modules, 34, is the video processor module.
The video processor module 34 is shown in block diagram form in Figure 10. The video processor 34 receives three analog video signals from the color camera 12. The three analog video signals, comprising signals representative of the red, green, and blue images, are processed by a DC restoration analog circuit 60. Each of the resultant signals is then digitized by a digitizer 62. Each of the three digitized video signals is the analog video signal from the color camera 12, segmented to form a plurality of image pixels and with each image pixel digitized to form a greyscale value of 8 bits. The digitized video signals are supplied to a 6x6 cross-point matrix switch 64 which outputs the three digitized video signals onto three of the six data buses (42A-42F).
From the data buses 42A-42F, the digitized video signals can be stored in one or more of the image memory modules 38A-38C. The selection of a particular image memory module 38A-38C to store the digitized video signals is accomplished by the address decode circuit 50 connected to the logic unit 52 which activates the particular tri-state transceivers 54, all as previously described. The selection of which data bus 42 the digitized video images are sent to is based upon registers in the logic unit 52 which are set by the control bus 32.
Each of the memory modules 38 contains three megabytes of memory. The three megabytes of memory are further divided into three memory planes: an upper plane, a middle plane, and a lower plane. Each plane of memory comprises 512 x 2048 bytes of memory. Thus, there is approximately one megabyte of memory per memory plane.
Since each digitized video image is stored in a memory space of 256 x 256 bytes, each memory plane has room for 16 video images. In total, a memory module has room for the storage of 48 video images. The address for the selection of the particular video image from the particular memory plane within each memory module is supplied along the control bus 32. As the data is supplied to or received from each memory module 38, via the data buses 42, it is supplied to or from the locations specified by the address set on the control bus 32. The three digitized video images from the video processor 34 are stored, in general, in the same address location within each one of the memory planes of each memory module.
Thus, the digital video signal representative of the red video image may be stored at the starting address location of x=256, y=0 of the upper memory plane; the digitized signal representative of the blue video image may be stored at x=256, y=0 of the middle memory plane; and the digital video signal representative of the green video image may be stored at x=256, y=0 of the lower memory plane.
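The plane arithmetic above can be checked with a short sketch; the helper that maps an image slot to a starting (x, y) address, and the raster ordering of the 16 slots, are assumptions for illustration:

```python
PLANE_W, PLANE_H = 512, 2048      # bytes per plane: exactly one megabyte
IMG_W, IMG_H = 256, 256           # one digitized image occupies 64 KB

assert PLANE_W * PLANE_H == 1024 * 1024
assert (PLANE_W // IMG_W) * (PLANE_H // IMG_H) == 16   # images per plane
assert 3 * 16 == 48                                    # images per module

def slot_origin(slot):
    cols = PLANE_W // IMG_W       # two image slots across each plane
    return ((slot % cols) * IMG_W, (slot // cols) * IMG_H)

print(slot_origin(1))             # (256, 0), as in the example above
```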
Once the digital video signals representative of the digitized video images are stored in the memory planes of one or more memory modules 38, the digitized video images are operated upon by the morphological processor 40.
The morphological processor 40 receives data from the data buses 42A-42D and outputs data to the data buses 42E-42G. Further, the morphological processor 40 can receive input or output data to and from the data buses 42H and 42I. Referring to Figure 12, there is shown a schematic block diagram of the morphological processor 40. The morphological processor 40 receives data from data buses 42A and 42B which are supplied to a multiplexer/logarithmic unit 70. The output of the multiplexer/logarithmic unit 70 (16 bits) is either the data from the data buses 42A and 42B or the logarithm thereof. The output of the multiplexer/logarithmic unit 70 is supplied as the input to the ALU 72, on the input port designated as b. The ALU 72 has two input ports: a and b.
The morphological processor 40 also comprises a multiplier accumulator 74. The multiplier accumulator 74 receives data from the data buses 42C and 42D and from the data buses 42H and 42I, respectively, and performs the operation of multiply and accumulate thereon. The multiplier accumulator 74 can perform the functions of 1) multiplying the data from (data bus 42C or data bus 42D) by the data from (data bus 42H or data bus 42I); or 2) multiplying the data from (data bus 42C or data bus 42D) by a constant as supplied from the master controller. The result of that calculation is outputted onto the data buses 42I, 42H and 42G. One result of the multiply accumulate unit 74 is that it calculates a Green's function kernel in realtime. The Green's function kernel is a summation of all the pixel values from the start of the horizontal sync to the then current pixel. This is used subsequently in the calculation of other properties of the image.
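A minimal software sketch of this running-sum kernel (the morphological processor computes it in hardware in realtime; this model is illustrative only):

```python
# Each output pixel is the sum of all pixel values from the start of its
# scan line (horizontal sync) up to and including the current pixel.
def greens_kernel(image):
    out = []
    for row in image:
        acc, line = 0, []
        for value in row:
            acc += value
            line.append(acc)
        out.append(line)
    return out

print(greens_kernel([[1, 2, 3], [4, 5, 6]]))  # [[1, 3, 6], [4, 9, 15]]
```

This per-line running sum is the same construction later shown in Figure 18 as the integrated optical density representation.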
A portion of the result of the multiplier accumulator 74 (16 bits) is also inputted into the ALU 72, on the input port designated as a. The multiplier accumulator 74 can perform calculations of multiply and accumulate that are 32 bits in precision. The result of the multiplier accumulator 74 can be switched to be the most significant 16 bits or the least significant 16 bits, and is supplied to the a input of the ALU 72.
The output of the ALU 72 is supplied to a barrel shifter 76, which in turn feeds a look-up table 78, and the result is placed back on the data buses 42E and 42F. The output of the ALU 72 is also supplied to a prime generator 80, whose output can likewise be placed back onto the data buses 42E and 42F. The function of the prime generator 80 is to determine the boundary pixels, as described in U.S. Patent No. 4,538,299.
The ALU 72 can also perform the function of subtracting data on the input port a from data on the input port b. The result of the subtraction is an overflow or underflow condition, which determines whether a>b or a<b. Thus, the pixel-by-pixel maximum and minimum for two images can be calculated.
Finally, the ALU 72 can perform histogram calculations. There are two types of histogram calculation. In the first type, the value of a pixel (a pixel value is 8 bits, i.e. between 0-255) selects the address of the memory 73. The memory location at the selected address is incremented by 1. In the second type, two pixel values are provided: a first pixel value at the current pixel location and a second pixel value at the pixel location of the previous line to the immediate left or to the immediate right (i.e. a diagonal neighbor). The pairs of pixel values are used to address a 64K memory (256 x 256) and the memory location of the selected pair is incremented. Thus, this histogram is texture related.
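A sketch of the two histogram modes, with Python lists standing in for the hardware memories; the choice of left versus right diagonal neighbor is a parameter, as in the text:

```python
# First type: the 8-bit pixel value selects the memory address to increment.
def intensity_histogram(image):
    hist = [0] * 256
    for row in image:
        for v in row:
            hist[v] += 1
    return hist

# Second type: the (current pixel, diagonal neighbor on the previous line)
# pair addresses a 256 x 256 memory; dx = -1 is the upper-left neighbor,
# dx = +1 the upper-right.
def texture_histogram(image, dx=-1):
    hist = [[0] * 256 for _ in range(256)]
    for y in range(1, len(image)):
        for x in range(len(image[y])):
            if 0 <= x + dx < len(image[y - 1]):
                hist[image[y][x]][image[y - 1][x + dx]] += 1
    return hist
```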
In summary, the morphological processor 40 can perform the functions of addition, multiplication, multiplication with a constant, summation of a line, finding the pixel-by-pixel minimum and maximum for two images, prime generation, and also histogram calculation. The results of the morphological processor 40 are sent along the data buses 42 and stored in the image memory modules 38. The ALU 72 can be a standard 181 type, e.g. Texas Instruments part # ALS181. The multiplier accumulator 74 can be of conventional design, such as the Weitek WTL2245.
Referring to Figure 13, there is shown the graphic controller processor 36 in schematic block diagram form. The function of the graphic controller 36 is to receive processed digitized video images from the memory modules 38, graphic data, and alphanumeric data, and combine them for output. The data from the control bus 32 is supplied to an advanced CRT controller 84. The CRT controller is a part made by Hitachi, part number HD63484. The output of the advanced CRT controller 84 controls a frame buffer 80. Stored within the frame buffer 80 are the graphics and alphanumeric data. The video images from the data buses 42A-42F are also supplied to the graphics controller processor 36. One of the data buses 42 is selected, and that, combined with the output of the frame buffer 80, is supplied to a look-up table 82. The output of the look-up table 82 is then supplied as the output to one of the data buses 42G, 42H or 42I. The function of the graphics control processor 36 is to overlay video, alpha and graphics information, which is then supplied through a D to A converter 86 to the monitor 26. In addition, the digital overlayed image can also be stored in one of the image memory modules 38.
The image which is received by the graphics control processor 36 from one of the image memory modules 38 comes through one of the data buses 42A-42F. The control signals along the control bus 32 specify to the image memory module 38 the starting address, and the x and y offsets with regard to vertical sync, as to when the data from the image memory within that memory module 38 is to be outputted onto the data buses 42A-42F. Thus, split screen images can be displayed on the display monitor 26.
The master controller 30, as previously stated, communicates with the host computer 22 via a Q-bus. The master controller 30 receives address and data information from the host computer 22 and produces a 64 bit microcode. The 64 bit microcode can be from the writable control store location of the host computer 22 and is stored in WCS 90, or it can be from the proxy prom 92. The control program within the proxy prom 92 is used upon power up, as WCS 90 contains volatile RAM. The 64 bit microcode is processed by the 29116 ALU 94 of the master controller 30. The master controller 30 is of the Harvard architecture in that separate memory exists for instructions as well as for data. Thus, the processor 94 can get instructions and data simultaneously. In addition, the master controller 30 comprises a background sequencer 96 and a foreground sequencer 98 to sequence series of program instructions stored in the writable control store 90 or the proxy prom 92. The Q-bus memory map from which the master controller 30 receives its writable control store and its program memory is shown below:
ADDRESS (HEXADECIMAL)    Use

3FFFFF - 3FE000          BS7 (Block 7, conventional Digital
                         Equipment Corp. nomenclature)
3FDFFF - 3FA000          Scratch Pad
387FFF - 380000          Writable Control Store
37FFFF - 280000          Image Memory Window
1FFFFF - 0               Host Computer Program Memory
In addition, the control signals ADAV, CMD and WRT
have the following uses.
CONTROL SIGNALS          Use
ADAV  CMD  WRT
 0     X    X            Quiescent Bus
 1     1    0            Read Register
 1     1    1            Write Register
 1     0    0            Read Image Memory
 1     0    1            Write Image Memory

The master controller 30 operates synchronously with each one of the modules 34, 36, 38 and 40 and asynchronously with the host computer 22. The clock signal is generated by the master controller 30 and is sent to every one of the modules 34, 36, 38 and 40. In addition, the master controller 30 starts the operation of the entire sequence of video image processing and video image storing upon the start of vertical sync.
Thus, one of the signals to each of the logic units 52 is a vertical sync signal. In addition, horizontal sync signals may be supplied to each one of the logic units.
The logic units may also contain logic memory elements that switch their respective tri-state transceivers at prescribed times with respect to the horizontal sync and the vertical sync signals.
Referring to Figure 15, there is shown a schematic diagram of another embodiment of a logic unit 252. The logic unit 252 is connected to a first address decode circuit 250 and a second address decode circuit 251. The logic unit 252 comprises a first AND gate 254, a second AND gate 256, a counter 258 and a vertical sync register 260.
Prior to the operation of the logic unit 252, the first address decode circuit 250 is activated, loading the data from the data lines of the control bus 32 into the counter 258.
Thereafter, when the second address decode circuit 251 is activated and a vertical sync signal is received, the counter 258 counts down on each clock pulse received. When the counter 258 reaches zero, the tri-state registers 64a and 64b are activated.
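A minimal model of this countdown behavior, with a generator standing in for the clocked hardware (illustrative only; names are hypothetical):

```python
# The counter is preloaded over the control bus; after vertical sync it
# decrements once per clock, and activation fires when it reaches zero.
def delayed_activation(preload):
    count = preload
    while count > 0:
        yield False        # transceivers still inactive
        count -= 1
    yield True             # counter reached zero: activate transceivers

print(list(delayed_activation(3)))  # [False, False, False, True]
```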
It should be emphasized that the master controller 30, each one of the processing modules 34, 36, 38 and 40, and each one of the image memory modules 38 can be of conventional design. The master controller 30 controls the operation of each one of the modules along a separate control bus 32. Further, each of the modules communicates with one another by a plurality of data buses 42. The interconnection of each one of the modules (34-40) with one or more of the data buses 42 is accomplished by means within the module (34-40) which is controlled by the control signals along the control bus 32. The interconnection of the data buses 42 to the electronic functions within each of the modules is as previously described. However, the electronic functions within each of the modules, such as memory storage or processing, can be of conventional architecture and design.
In the apparatus 8 of the present invention, an image of the field of view as seen through the microscope 16 is captured by the color camera 12. The color camera 12 converts the image in the field of view into an electrical image of the field of view. In reality, three electrical images are converted. The electrical images from the color camera 12 are processed by the image processor 10 to form a plurality of different representations of the electrical image. Each different representation is a representation of a different parameter of the field of view. One representation is the area of interest. Another representation is the integrated optical density.
Referring to Figure 16, there is shown an example of the digitized electrical signal representative of the electrical image of the field of view. The digitized image shown in Figure 16 is the result of the output of the video processor module 34, which segments and digitizes the analog signal from the color camera 12. (For the purpose of this discussion, only one electrical image of the field of view will be discussed. However, it is readily understood that there are three video images, one for each color component of the field of view.) As shown in Figure 16, each pixel point has a certain amplitude representing the greyscale value. The object in the field of view is located within the area identified by the line 200. Line 200 encloses the object in the field of view.
As previously stated, the image processor 10, and more particularly the morphological processor module 40, processes the digitized video image to form a plurality of different processed digitized video images, with each different processed digitized video image being a different representation of the digitized video image.
One representation of the electrical image shown in Figure 16 is shown in Figure 17. This is the representation which represents a Green's function kernel for the area of the image in the field of view.
In this representation, a number is assigned to each pixel location, with the numbers being sequentially numbered from left to right. While Figure 17 shows the pixel at the location X=0, Y=0 (as shown in Figure 16) as being replaced by the number 1 and the numbers being sequential therefrom, any other number can also be used. In addition, the number assigned to the beginning pixel in each line can be any number, so long as each successive pixel in the same line differs from the preceding pixel by the number 1.
The vldeo lmage processor 10 also receives the electrl-cal lmage from the color camera 12 and generates posltional lnfor-matlon that represents the boundary of the ob~ect contained in the fleld of vlew. One method of calculatlng the posltlonal lnforma-tlon that represents the boundary of the ob~ect vlew ls dlsclosedln U.S. Patent No. 4,538,299. As dlsclosed ln the '299 patent, the dlgltlzed greyscale value (e.g. the lmage ln Flgure 16) ls compared to a pre-set threshold value such that as a , ., ',, ! ~ . .
~r L~ :
.
:;`
result of the co~parison, if the greyscale at the pixel location of interest exceeds the pre-set threshold value, then the value "1" is assigned to that pixel location. At all other locations if the greyscale value of the pixel of interest is below the pre-set threshold value, then a "O" is assiqned at that ~?~; location. As a result, the digital video image is converted to a representation where a value of "1" is assigned where there is an object and a value of "O" is assigned at locations which is outside the boundary of ~;
the object. An example of the conversion of the image hown in Figure 16 by this ~ethod i6 the representation ehown in Figure 19.
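A one-line sketch of this threshold segmentation (illustrative):

```python
# Greyscale values exceeding the pre-set threshold become 1 (object);
# all other locations become 0 (background).
def threshold(image, level):
    return [[1 if v > level else 0 for v in row] for row in image]
```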
Thereafter, and in accordance with the '299 patent, the representation as shown in Figure 19 is converted to a third representation by assigning a value to a pixel with a location (X,Y) in accordance with

P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h

where a, b, c, d, e, f, g, h are the values of the eight nearest neighbors surrounding pixel (X,Y) in accordance with

    g         d          h
    c     pixel (X,Y)    a
    f         b          e

This can be done by the prime generator 80 portion of the morphological processor 40.
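A sketch of this neighborhood encoding, including the zeroing rules recited in claim 20; the coordinate convention (y increasing downward) is an assumption:

```python
# Bit weights follow P(X,Y) = a*2^7 + b*2^6 + ... + h, using the layout
#   g d h
#   c P a
#   f b e
WEIGHTS = [
    (128, ( 1,  0)),  # a, east
    ( 64, ( 0,  1)),  # b, south
    ( 32, (-1,  0)),  # c, west
    ( 16, ( 0, -1)),  # d, north
    (  8, ( 1,  1)),  # e, south-east
    (  4, (-1,  1)),  # f, south-west
    (  2, (-1, -1)),  # g, north-west
    (  1, ( 1, -1)),  # h, north-east
]

def prime_value(binary, x, y):
    """P(X,Y) of the third representation for one pixel of the binary image."""
    if binary[y][x] == 0:
        return 0                      # rule (1): background pixel
    height, width = len(binary), len(binary[0])
    code = 0
    for weight, (dx, dy) in WEIGHTS:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height and binary[ny][nx]:
            code += weight
    if code == 0:
        return 0                      # rule (2): isolated pixel
    if code & 0xF0 == 0xF0:           # a, b, c and d all set
        return 0                      # rule (3): interior pixel
    return code                       # rule (4): boundary pixel code
```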
Finally, in accordance with the '299 patent, this third representation is scanned until a first non-zero P(X,Y) value is reached. The P(X,Y) value is compared, along with an input direction value, to a look-up table to determine the next location of a non-zero value of P(X,Y), forming a chaining code. In accordance with the teaching of the '299 patent, positional information showing the location of the next pixel which is on the boundary of the object in the field of view is then generated. This positional information takes the form of Delta X = +1, 0, or -1 and Delta Y = +1, 0, or -1.
This generated positional information is also supplied to trace the locations in each of the other different representations.
For example, if the first value of the boundary scanned out is at X=4, Y=1 (as shown in Figure 19), that positional information is supplied to mark the locations in the representations shown in Figures 17 and 18, thereby marking the start of the boundary of the object in those representations. Thus, in Figure 17, the pixel location having the value 13 is initially chosen. In Figure 18, the pixel location having the value 44 is initially chosen.
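A sketch of driving several representations from one stream of boundary deltas; the kernel values reproduce the assumed Figure 17 numbering, and the single (Delta X=+1, Delta Y=+1) step matches the trace discussed next:

```python
# 8-pixel-wide area kernel as in the assumed Figure 17 numbering.
area = [[y * 8 + x + 1 for x in range(8)] for y in range(4)]

def trace(kernels, start, deltas):
    x, y = start
    visited = {name: [img[y][x]] for name, img in kernels.items()}
    for dx, dy in deltas:             # deltas from the '299-style trace
        x, y = x + dx, y + dy
        for name, img in kernels.items():
            visited[name].append(img[y][x])
    return visited

print(trace({"area": area}, (4, 1), [(1, 1)]))  # {'area': [13, 22]}
```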
In accordance with the teaching of the '299 patent, the next positional information generated, which denotes the next pixel location that is on the boundary of the object in the field of view, would be Delta X=+1, Delta Y=+1. This would bring the trace to the location X=5, Y=2. That positional information is also supplied to the representation for the area, shown in Figure 17, and to the representation denoting the integrated optical density, as shown in Figure 18. The trace caused by the positional information would cause the representation in Figure 17 to move to the pixel location X=5, Y=2, where the pixel has the value 22. Similarly, in Figure 18, the trace would cause the pixel location X=5, Y=2, i.e. the pixel having the value 76, to be traced. As the boundary of the object is traced, the same positional information is supplied to the other representations denoting other parameters of the images of the field of view - which inherently do not have information on the boundary of the object in the field of view.
It should be emphasized that although the method and the apparatus heretofore describe the positional information as being supplied by the teaching disclosed in the '299 patent, the present invention is not necessarily limited to positional information based upon the '299 patent teaching. In fact, any source of positional information can be used with the method and apparatus of the present invention, so long as that information denotes the position of the boundary of the object in the field of view.
As the boundary of the object in the field of view is traced out in each of the different representations that represent the different parameters of the object in the field of view, the different parameters are calculated.
For example, to calculate the area of the object in the field of view, one takes the positional information and determines the value of the pixel at that location. Thus, the first pixel would have the value 13. Except for the first pixel, the location of the current pixel (Xi,Yi) is compared to the location of the previously traced pixel (Xj,Yj), such that if Yi is less than Yj, then the present value at Pi(Xi,Yi) is added to the value A. If Yi is greater than Yj, then the present value at Pi(Xi-1,Yi) is added to B. B is subtracted from A to derive the area of the object in view. The calculation is shown in Figure 20.
Similarly, for the calculation of the integrated optical density, if the present pixel location (Xi,Yi) compared to the previously traced pixel location (Xj,Yj) is such that Yi is less than Yj, then Pi(Xi,Yi) is added to A. If Yi is greater than Yj, then Pi(Xi-1,Yi) is added to B. B is subtracted from A to derive the integrated optical density of the object. This is shown in Figure 21.
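Both calculations share one accumulation rule, sketched below for any kernel representation; treating the traced boundary as a closed pixel sequence is an assumption of this sketch:

```python
# `kernel` is the Figure 17 area kernel or the Figure 18 running-sum
# kernel; `boundary` is the ordered sequence of traced pixel locations.
def trace_integral(kernel, boundary):
    a = b = 0
    for i in range(1, len(boundary)):
        (xj, yj), (xi, yi) = boundary[i - 1], boundary[i]
        if yi < yj:                   # moved up: add P(Xi, Yi) to A
            a += kernel[yi][xi]
        elif yi > yj:                 # moved down: add P(Xi-1, Yi) to B
            b += kernel[yi][xi - 1]
    return a - b
```

With the area kernel this difference yields the number of pixels enclosed by the boundary; with the running-sum kernel it yields the integrated optical density of the same region.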
There are many advantages to the method and apparatus of the present invention. First and foremost is that once the positional information regarding the boundary of an object in view is provided, multiple parameters of that object can be calculated based upon different representations of the image of the field of view containing the object - all of which representations do not inherently contain any positional information regarding the location of the boundary of the object in the field of view. Further, with the video image processor described, such different parameters can be calculated simultaneously, thereby greatly increasing image processing throughput.
Claims (20)
1. A method for generating a plurality of parameters of an object in a field of view, said method comprising the steps of:
(a) forming an electrical image of said field of view;
(b) processing said electrical image to form a plurality of different representations of said electrical image; wherein each different representation is a representation of a different parameter of said field of view;
(c) generating positional information that represents the boundary of said object;
(d) tracing locations in each of said different representations in response to the positional information generated; and (e) calculating the different parameters from each of said different representations based upon locations traced in each of said different representations.
2. The method of claim 1 where said step (b) further comprises the steps of:
(b)(1) segmenting said electrical image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
(b)(2) processing said electrical signals to form a plurality of different representations of said electrical image; wherein each different representation is a representation of a different parameter of said field of view.
3. The method of claim 2 wherein one of said plurality of parameters of said object is the area of said object.
4. The method of claim 3 wherein said step (b)(2) further comprises the steps of:
assigning a number to each pixel location with said numbers being sequential starting from left to right for the representation that is a representation of the area of said field of view.
5. The method of claim 4 wherein said step (e) further comprises the steps of:
(i) if present pixel location (Xi, Yi) compared to previously traced pixel location (Xj, Yj ) is such that Yi<Yj, then adding present pixel value Pi(Xi, Yi) to A;
if Yi>Yj, then adding present pixel value Pi(Xi-1, Yi) to B;
(ii) subtracting B from A to derive the area of said object.
6. The method of claim 2 wherein one of said plurality of parameters of said object is the integrated optical density of said object.
7. The method of claim 6 wherein said step (b)(2) further comprises the steps of:
assigning a number to each pixel location (Xm, Yn) with said number calculated as follows:

P(Xm, Yn) = Σ(i=1 to m) P(Xi, Yn)

where P(Xi, Yn) is the greyscale value at the pixel location (Xi, Yn), for the representation that is a representation of the integrated optical density of said field of view.
8. The method of claim 7 wherein said step (e) further comprises the steps of:
(i) if present pixel location (Xi,Yi) compared to the previously traced pixel location (Xj,Yj) is such that Yi<Yj, then adding present pixel value Pi(Xi,Yi) to A
Yi>Yj, then adding present pixel value Pi(Xi-1,Yi) to B
(ii) subtracting B from A to derive the integrated optical density of said object.
9. The method of claim 1 wherein said step (c) further comprises the steps of:
(c)(1) segmenting said electrical signal into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value to form a first representation of said image;
(c)(2) processing the electrical signal of each of said greyscale value of said first representation to form a second representation of said image by comparing the greyscale value of each pixel to a pre-set threshold value such that as a result a "O" is assigned at each pixel location which is outside the boundary of said object and a "1" is assigned everywhere else;
(c)(3) converting said second representation into a third representation by assigning a value to a pixel (X,Y) in accordance with

P(X,Y) = a*2^7 + b*2^6 + c*2^5 + d*2^4 + e*2^3 + f*2^2 + g*2 + h

where a, b, c, d, e, f, g, h are the values of the eight nearest neighbors surrounding pixel (X,Y) in accordance with

    g         d          h
    c     pixel (X,Y)    a
    f         b          e
10. The method of claim 9 wherein said step (d) further comprises the steps of:
scanning said third representation until a first non-zero P(X,Y) value is reached;
comparing said P(X,Y) value and an input direction value to a look-up table to determine the next location of the non-zero value of P(X,Y) and forming a chaining code.
11. An apparatus for generating a plurality of parameters of an object in a field of view, said apparatus comprising:
imaging means for forming an electrical image of said field of view;
means for processing said electrical image to form a plurality of different representations of said electrical image; wherein each different representation is a representation of a different parameter of said field of view;
means for generating positional information that represent the boundary of said object;
means for tracing locations in each of said different representations in response to the positional information generated; and means for calculating the different parameters from each of said different representations based upon locations traced in each of said different representations.
12. The apparatus of claim 11 wherein said processing means further comprises:
means for segmenting said electrical image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
means for processing said electrical signals to form a plurality of different representations of said electrical image; wherein each different representation is a representation of a different parameter of said field of view.
13. The apparatus of claim 12 wherein one of said plurality of parameters of said object is the area of said object.
14. The apparatus of claim 13 wherein said processing means further comprises:
means for assigning a number to each pixel location with said numbers being sequential starting from left to right for the representation that is a representation of the area of said field of view.
15. The apparatus of claim 14 wherein said calculating means further comprises:
means for adding the value of the present pixel location Pi(Xi,Yi) to A if Yi<Yj and Pi(Xi-1,Yi) to B if Yi>Yj where Yj is the Y component of (Xj,Yj), the location of the immediately preceding pixel that was traced; and means for subtracting B from A to derive the area of said object.
16. The apparatus of claim 12 wherein one of said plurality of parameters of said object is the integrated optical density of said object.
17. The apparatus of claim 16 wherein said processing means further comprises:
means for assigning a number to each pixel location (Xm,Yn) with said number calculated as follows:
P(Xm, Yn) = Σ(i=1 to m) P(Xi, Yn), where P(Xi, Yn) is the greyscale value at the pixel location (Xi, Yn).
18. The apparatus of claim 17 wherein said calculating means further comprises:
means for adding the value of the present pixel location Pi(Xi,Yi) to A if Yi<Yj and Pi(Xi-1,Yi) to B if Yi>Yj where Yj is the Y component of (Xj,Yj), the location of the immediately preceding pixel that was traced; and means for subtracting B from A to derive the integrated optical density of said object.
19. The apparatus of claim 11 wherein said generating means further comprises:
means for forming a first representation of said image by segmenting said image into a plurality of pixels and digitizing the image intensity of each pixel into an electrical signal representing the greyscale value;
means for processing the electrical signal of each of said greyscale value to form a second representation of said image;
logic means for converting said second representation into a third representation whereby the value of a pixel at a location (hereinafter: pixel (X,Y)) in the second representation and the values of the nearest adjacent neighbors of said pixel at said location are converted into a single value at said corresponding location (hereinafter: P(X,Y)) in said third representation;
storage means for storing said third representation; and table means for storing various possible values of P(X,Y), said table means for receiving a value of P(X,Y) and an input direction value, and for producing an output direction value to indicate the next location of P(X,Y) having a non-zero value; said non-zero values of P(X,Y) form the boundary of said object.
20. The apparatus of claim 19 wherein said logic means is adapted to convert said second representation in accordance with the following rules:
(1) If pixel (X,Y)=0, then assign 0 to P(X,Y);
(2) If pixel (X,Y)=1 and all eight nearest neighbors of pixel (X,Y)=0, then assign 0 to P(X,Y);
(3) If pixel (X,Y)=1 and all four nearest neighbors of pixel (X,Y)=1, then assign 0 to P(X,Y);
(4) Otherwise assign a non-zero value to P(X,Y) wherein said value assigned to P(X,Y) is a number composed of the values of the eight nearest neighbors of pixel (X,Y).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US8598587A | 1987-08-14 | 1987-08-14 | |
US085,985 | 1987-08-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA1328019C true CA1328019C (en) | 1994-03-22 |
Family
ID=22195231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA 574289 Expired - Fee Related CA1328019C (en) | 1987-08-14 | 1988-08-10 | Method and apparatus for generating a plurality of parameters of an object in a field of view |
Country Status (6)
Country | Link |
---|---|
JP (1) | JPS6466778A (en) |
AU (1) | AU2063588A (en) |
CA (1) | CA1328019C (en) |
DE (1) | DE3827312A1 (en) |
FR (1) | FR2620545A1 (en) |
GB (1) | GB2208708A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0678895B2 (en) * | 1991-01-24 | 1994-10-05 | Hajime Sangyo Co., Ltd. | Defect discrimination method |
-
1988
- 1988-08-10 AU AU20635/88A patent/AU2063588A/en not_active Abandoned
- 1988-08-10 CA CA 574289 patent/CA1328019C/en not_active Expired - Fee Related
- 1988-08-11 DE DE19883827312 patent/DE3827312A1/en not_active Withdrawn
- 1988-08-11 GB GB8819082A patent/GB2208708A/en not_active Withdrawn
- 1988-08-12 FR FR8810864A patent/FR2620545A1/en not_active Withdrawn
- 1988-08-12 JP JP63201801A patent/JPS6466778A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
FR2620545A1 (en) | 1989-03-17 |
JPS6466778A (en) | 1989-03-13 |
GB2208708A (en) | 1989-04-12 |
GB8819082D0 (en) | 1988-09-14 |
DE3827312A1 (en) | 1989-02-23 |
AU2063588A (en) | 1989-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5432865A (en) | Method and apparatus for generating a plurality of parameters of an object in a field of view | |
US4229797A (en) | Method and system for whole picture image processing | |
EP0134976B1 (en) | Method of analyzing particles in a fluid sample | |
US6826311B2 (en) | Hough transform supporting methods and arrangements | |
US5051835A (en) | Digital processing of theatrical film | |
EP1355277A2 (en) | Three-dimensional computer modelling | |
EP0046988A2 (en) | Method for lightness imaging | |
CA2130336A1 (en) | Method and apparatus for rapidly processing data sequences | |
US4641356A (en) | Apparatus and method for implementing dilation and erosion transformations in grayscale image processing | |
CN108765333B (en) | Depth map perfecting method based on depth convolution neural network | |
CN116433559A (en) | Product appearance defect detection method, electronic equipment and storage medium | |
EP0069542B1 (en) | Data processing arrangement | |
CN110728666A (en) | Typing method and system for chronic nasosinusitis based on digital pathological slide | |
CN111915735A (en) | Depth optimization method for three-dimensional structure contour in video | |
CN114821274A (en) | Method and device for identifying state of split and combined indicator | |
CA1328019C (en) | Method and apparatus for generating a plurality of parameters of an object in a field of view | |
CN117710868A (en) | Optimized extraction system and method for real-time video target | |
GB2208728A (en) | Digital processing system with multi data buses | |
CN113283429B (en) | Liquid level meter reading method based on deep convolutional neural network | |
Auborn et al. | Target detection by co-occurrence matrix segmentation and its hardware implementation | |
Brunner et al. | VIPER: a general-purpose digital image-processing system applied to video microscopy | |
JPH0199174A (en) | Shape recognizing device | |
Fletcher et al. | Vidibus: a low-cost, modular bus system for real-time video processing | |
CN118314008A (en) | Dietary nutrient intake evaluation method | |
CN118734966A (en) | Heterogeneous fusion picture real-time reasoning method and device based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | MKLA | Lapsed | |