GB2086690A - Video image processing system - Google Patents


Info

Publication number
GB2086690A
GB2086690A (application GB8131096A)
Authority
GB
United Kingdom
Prior art keywords
data
output
frame
input
video
Prior art date
Legal status
Granted
Application number
GB8131096A
Other versions
GB2086690B (en)
Current Assignee
Micro Consultants Ltd
Original Assignee
Micro Consultants Ltd
Priority date
Filing date
Publication date
Application filed by Micro Consultants Ltd filed Critical Micro Consultants Ltd
Priority to GB8131096A
Publication of GB2086690A
Application granted
Publication of GB2086690B
Status: Expired

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/74Circuits for processing colour signals for obtaining special effects

Abstract

A video processing system includes frame stores 38-47 with common highways 56, 57. Input ports 1-6 together with a computer 58 have access to the input highway via buffers 30-37 and the output ports 1-6 together with the computer have access to the output highway via buffers 48-55. Each port together with the computer is allocated sequential time slots of short duration by means of a time slot control 64, 68. Thus to an operator it will seem that he has continuous access to the system. Image planes greater than normal frame size can be handled by means of the address control 65. Additional data paths and processing hardware may be provided to enhance the system capabilities.

Description

SPECIFICATION

Video image processing system

The invention relates to the manipulation of picture data by digital techniques.
It is known from British patent no. 1,568,378 (U.S. patent 4,148,070) to provide an arrangement for manipulating picture data by storing the data in a frame store and giving random access to this stored video data during the video blanking time to process this data with an algorithm. Fig. 1 shows such an arrangement.
In the video processing system of Fig. 1, a composite video signal is received at input 9 and applied to video input port 12 which separates the sync pulses present on the incoming video. The incoming video is converted from analogue to an 8 bit digital word within input port 12 and the digitized output 13 is applied to a frame store and control 15.
The detected sync pulses on the incoming signal are used within port 12 to provide timing information for clocking the analogue to digital converter and timing information is also provided at output 14 for the frame store and control 15. External syncs (genlock) may be provided at input 11 to provide the necessary timing information if required.
The digitized video samples (picture points) are stored in a large number of memory locations within the frame store and the addresses of these locations are accessed by the store control in time relation to the timing information from output 14 of the video input port.
The digital video held in the frame store is read continuously to the input 18 of a video output port 19 which converts the digitized video data to analogue form and adds sync pulses from an internal generator to provide a composite video signal at output 20.
The sync pulses generated also provide timing control for addressing the store locations to read out the stored data. External syncs (read genlock) may be applied to port 19 if required. The composite video can be displayed on a conventional T.V. monitor 22.
The basic conversion, storage and reconversion of the video signals can be modified by means of a computer 24 and computer address and control unit 25. The output 27 of control unit 25 is received by input port 12.
Control unit 25 under computer control can adjust the number of bits in a word to be used (i.e. up to 8 bits) and also decide whether the entire frame is stored. The computer 24 has access to the store 15 via control data line 27 of control unit 25. Computer address information from control 25 is received by input 26 of the store.
The computer therefore is capable of addressing any part of the store, can read out the data and modify the data and re-insert it via video input port 12.
The computer control data line 27 is also connected to output port 19, where control can select, for example, the field to be displayed and the number of bits to be used.
Any required computer peripheral 23 can be attached to the I/O buses of the computer 24.
Instead of using the computer to modify the data a video processor 28 is provided which contains processing hardware. The processor 28 receives the digitized video from port 12 at input 16 and the digitized video from the store at input 17. After processing, the data from output 29 can be applied to the video input port 12.
The above system is concerned therefore with storing video data in a digital form in a frame store, which data is basically raster in format. This data may be processed under software control or completely new data may be added. Instructions for the addition and processing of data come from the system computer via control 25. The asynchronous nature of the system allows operation over a very wide range of frame rates from standard T.V. through slow scan systems such as electron microscopes to line scan cameras. Non-raster type formats such as spiral and polar scans could be entered via the computer interface. The operation of the above system requires timing information which is extracted from the sync information (write genlock) contained in the composite video input. The video information is digitized by converting each picture point to an 8 bit word to give 256 possible levels (e.g. 256 shades of grey).
The digitized data is written into locations, specified by an address, within the frame store. The timing information extracted from the sync information is used to define the address. This timing information gives positional information (start of line, end of field etc) to enable each picture point to be written into the frame store in its correct position.
The frame store used in this known arrangement may be of the type disclosed in British patent no. 1,568,379 (U.S. patent no. 4,183,058) which comprises sector cards each made up of N channel dynamic MOS R.A.M. integrated circuits. The store structure is related to that of the T.V. raster and may be considered as two cubes. Each cube holds one of the two fields that make up a frame.
Each field consists of 256 lines, each line containing 512 picture points. Each picture point is stored as an 8 bit word, so the store has 8 bit planes. Alternate lines of the frame are stored in alternate fields. The two halves of the frame store may be used independently to store two separate frames, the resolution of which will be half the normal. Depending upon the resolution required, each field may also store separate pictures (up to 8 separate 1 bit resolution pictures). The frame store can accept video at 10 MHz (15 MHz max) sampling frequency and reproduce it for display at the same rate.
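As a rough numerical check, the store geometry described above can be sketched in software (the names below are illustrative only; the patent describes hardware, not a software interface):

```python
# Sketch of the frame store geometry described above.
FIELDS_PER_FRAME = 2    # two fields (cubes) per frame
LINES_PER_FIELD = 256
POINTS_PER_LINE = 512
BITS_PER_POINT = 8      # giving 8 bit planes per field

points_per_frame = FIELDS_PER_FRAME * LINES_PER_FIELD * POINTS_PER_LINE
store_bits = points_per_frame * BITS_PER_POINT

print(points_per_frame)   # picture points held by one frame store
print(store_bits // 8)    # equivalent storage in bytes
# Each field's 8 bit planes can instead hold up to 8 separate
# 1 bit resolution pictures, as noted above.
```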
The reading process from the store to the display is uninterrupted by any computer demands. For the purpose of reading from or writing into the store, the access time of the memory is 67 ns, therefore enabling a standard T.V. picture to be easily accommodated from the 512 samples in a line. The computer can only gain access to the frame store during the line blanking period. The computer has random access and specifies its address in an array form. The selected array may be of any size from a single point to the whole store. The array selected may also be in any position within the store. Thus by merely identifying the top left hand corner of the rectangle and the length of the two sides any address area is accessible. Computer data is fed in at a slow rate (the typical cycling frequency of the computer is around 500 kHz depending upon whether an array is being addressed or individual picture points) and is buffered to be fed into the frame store at the transfer rate of the system, typically 10 MHz. Thus data is speeded up for writing into the store and slowed down to read it back to the computer.
The above system could be expanded as explained in the aforementioned patent to provide full RGB colour by the addition of two more frame stores. Such colour pictures could be either entered into the system on a frame sequential basis or alternatively three input video ports could be provided.
The present invention is concerned with a modified processing system capable of providing greater processing power and flexibility than heretofore. The invention is also concerned with a processing system which can modify data at times other than during the blanking interval.
According to the invention there is provided a video image processing system comprising at least one of a data input port and a data output port; a plurality of frame stores each capable of storing data equivalent to a frame of video information; a common input highway for the frame stores and adapted to receive data for storage in the frame stores; a common output highway for the frame stores and adapted to provide data from the frame stores; processing means adapted to have access to at least one of the common input and output highways respectively to effect processing of the data available therefrom; and control means for controlling the passage of the data to and from the common highways to allow the processing means and the at least one of the data input and output ports to independently gain access to the frame stores for a given period on a time multiplex basis.
The invention will now be described by way of example with reference to the accompanying drawings, in which:

Figure 1 shows the known video processing system disclosed in U.S. Patent No. 4,148,070;
Figure 2 shows one embodiment of the system of the present invention;
Figure 3 shows the time slot arrangement associated with the Fig. 2 system;
Figure 4 shows an input buffer with the time slot control in greater detail;
Figure 5 shows examples of possible image plane configurations;
Figure 6 shows a mechanism for calculating the relevant frame address for a particular location within the image plane;
Figure 7 shows the various calculations achieved by the Fig. 6 arrangement;
Figure 8 shows the construction of a variable digital delay;
Figure 9 shows additional facilities for monitoring an entire image plane;
Figure 10 shows the Fig. 2 arrangement expanded to include additional processing capabilities;
Figure 11 shows the system timing interrelationships;
Figure 12 shows an example of a kernel matrix used for image enhancement;
Figure 13 shows in detail the operation of those elements of Fig. 10 associated with this spatial filtering enhancement;
Figure 14 shows additional processing units between the early and late buses;
Figure 15 shows the operation of the bit droppers of Fig. 8 via the control bus; and
Figure 16 shows the configuration of the output LUTs to achieve multiple output transfer functions.
The Fig. 2 arrangement shows one configuration for realising the invention using a common highway arrangement.
The incoming video is received from separate input ports 1, 2, 3, 4, 5 and 6 by respective input buffers 30-35. The incoming video can come from any suitable picture sources such as six cameras, synchronized with each other, using Genlock techniques for example. The incoming video information to each port is received by the buffers as 8 bit digital data (having been converted from analogue to digital form as necessary). The buffers operate as input multiplexers by assembling the video data (already in 8 bit digital form) into blocks of 8 picture points (pixels) so as to be in a 64 bit format and passing this data onto the video data highway 56 on a time sharing basis.
The time sharing capability is provided by allocating time slots to the buffers so that each has sequential access to the highway 56 for a period corresponding to 8 picture points in length in this system. In addition to the input buffers 30-35, further buffers 36 and 37 are provided for use by computer 59 via computer interface 58 so that this also has rapid and frequent access to the input highway 56, each buffer 36 and 37 allowing the computer data to be input onto the highway in every period equivalent to 8 picture points.
In practice, although buffers 36 and 37 have the capability of dealing with blocks of data of 64 bits, due to the relatively slow data handling capabilities of the computer the information actually passed to input buffers 36 or 37 will be of 1 byte. Thus a single picture point of up to 16 bit definition can be dealt with, for example, in this joint period. The allocation of the time slots is illustrated in Fig. 3. The video data from ports 1 to 6 respectively is allocated time slots 1 to 6 and the computer has the slots 7 and 8 (i.e. for most significant byte C1 and least significant byte C2). The time slot allocation then repeats. Because the 8 picture points pass as a single block of data onto the input highway, although a time slot is equivalent to 8 picture points (data wise) the actual time allocation need only be the equivalent of one picture point, although this could be varied if desired. The time slots are determined by time slot control 64. A plurality of frame stores, 10 in the example, are also connected to the common input highway 56.
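The repeating eight-slot cycle described above can be sketched as a simple round-robin (a minimal model; the slot names are illustrative, not hardware identifiers):

```python
# Repeating 8-slot allocation: ports 1-6 followed by the two
# computer byte slots C1 and C2, then the cycle repeats.
SLOTS = ["port1", "port2", "port3", "port4", "port5", "port6", "C1", "C2"]

def slot_for(tick):
    """Which source owns the input highway at a given time-slot tick."""
    return SLOTS[tick % len(SLOTS)]

assert slot_for(0) == "port1"
assert slot_for(6) == "C1"     # computer's most significant byte slot
assert slot_for(8) == "port1"  # allocation then repeats
```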
Each frame store would typically have sufficient storage for 512 x 512 picture points (pixels) each of 8 bit resolution. Data on this highway can be stored in any of the frame stores 38-47, the store address control 65 allocating a particular store and an address (i.e. the X and Y address) within the store via address highway 66 for a given time slot as illustrated in Fig. 3. In practice, each of the 8 picture points entering a given frame store is allocated to one of 8 store sectors within that frame store to allow the data to be at a rate compatible with frame store handling capabilities as described in U.S. Patent nos.
4,148,070 and 4,183,058. The writing in of data into the system is determined by timing block 61 which produces picture point clocks for time slot and address controls 64 and 65.
The timing is synchronised to incoming data by means of the reference information received by sync separator 60. These syncs can be provided directly from the picture source as Genlock signals, or derived from the sync information available from the incoming composite video in normal manner.
Although 10 frame stores are shown for simplicity, they can be expanded as required and accommodated on the common highway 56. Because the highway is common to all inputs there need not be a specific relationship between the number of inputs and the number of stores. Up to 256 frame stores could be accommodated with the arrangement shown.
The data held in the frame stores 38-47 is available for read out for a given time slot onto the common output highway 57 under the influence of the read address given from control 65 via highway 66 shared by the stores. The output buffers 48-55 have access to this data dependent on their allotted time slot determined by control 68. Thus the first time slot is allocated to buffer 48 to hold data on 8 picture points and reformat the data. In other words this buffer operates in the reverse manner to the input buffer. The data from the buffer 48 is provided at output port 1 for display (reconverted into analogue form as required) or for external manipulation. Similar operation takes place for ports 2 to 6 via the remaining buffers. Output buffers 54 and 55 are provided to allow the computer to gain access to stored information via the output highway. These buffers can be given access to stored information originally supplied either from the input ports 1-6 or previously entered from the computer 59 via interface 58 along the common input highway 56. The data fed into the system via computer 59 may have been received from an external peripheral 67, such as a magnetic tape device with slow scan video data thereon. As the data captured by the output buffers 54 and 55 may comprise blocks of 64 bit data, these may be sequentially read out to the computer via interface 58, so as to pass on all of the 64 bit data in that block.
The system of Fig. 2 operates in a synchronous manner, the read and write timing blocks 63 and 61 receiving timing information from separator 60. Delay 62 is provided to introduce a fixed differential in the frame timing between read and write, this being typically equivalent to a period of several picture points, for reasons discussed in more detail below.
The computer interface 58 has timing information supplied to it from blocks 61 and 63 respectively so that it knows when to pass data to the buffers 36 and 37 and when to expect data from output buffers 54 and 55, the interface 58 itself providing a buffering function. Such computer interfacing is well known, see U.S. Patent No. 4,148,070 referred to above for example.
The system shown in Fig. 2 thus allows access on a time sharing basis from a number of sources connected to ports 1 to 6 to allow real time storage in store locations allocated by control 65 and passage of stored data to the output ports. In addition manipulation of video derived data by the computer is also possible, which data may have been entered via the ports 1 to 6 or via the computer. The output ports may also gain access to stored data previously entered or manipulated by the computer so as to allow display of this information. Because of the time slot allocation, a number of independent operations can be produced effectively simultaneously and the data handled in these time slots can be from the same or different pictures.
Such an arrangement allows great flexibility to be achieved as data need not be pre-assigned to any particular store because of the common highways, but typically can be assigned by the computer 59 under program control via input line 69. The data need not be accessed by the computer for processing only during the blanking interval, as was the operating arrangement in U.S. Patent No. 4,148,070 referred to above, because of the time slot allocation. A specific application would be to use input ports 1 to 3 to handle the Red, Green and Blue data from a colour video source. The Red data for example could be passed to more than one of the stores as desired. Further, the system allows the possibility of expanding the data bit-wise so that 512 x 512 pixels each of 16 bits can be used in place of 8 bit resolution by sharing between two stores each handling 8 of the 16 bits. By using 6 stores it would be possible to handle 3 x 16 bit RGB pictures.
Rather than increasing the number of bits/pixel it is possible to expand the number of picture elements within the normal frame.
Thus using 4 of the frame stores it would be possible to include 1024 x 1024 (instead of 512 x 512) and then choose the desired area within that store for display. The computer itself can have access to the entire 1024 x 1024 locations even if all these are not used for the actual display.
The maximum output frequency for display is 15 MHz, which equals 768 pixels/line in the 64 µs line of PAL or NTSC T.V. This allows 585 x 768 picture points to be displayed rather than the 512 x 512 normally expected (if the normal picture point and line counters were replaced with counters of 768 and 585 capacity as desired). If a lower frame rate can be accommodated (such as slow scan T.V. with a display having phosphor persistence) then it could display the entire 1024 picture points without exceeding the 15 MHz rate.
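A minimal arithmetic sketch of these display figures, assuming the standard 64 µs television line period:

```python
# Rough check: 768 samples per line at the 15 MHz maximum rate
# occupy 51.2 microseconds of active line, which fits within a
# standard 64 microsecond PAL/NTSC line with room for blanking.
sample_rate_hz = 15e6
active_line_s = 768 / sample_rate_hz

assert abs(active_line_s - 51.2e-6) < 1e-12
assert active_line_s < 64e-6   # active samples fit inside one TV line
```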
The construction of the input buffers will now be described in more detail with reference to Fig. 4, which shows one of the input buffers 30 together with the time slot control 64. The buffer includes 8 shift registers 70-77 and 8 registers 78-85. The shift registers each receive 1 bit of the 8 bit data for an incoming pixel, register 70 receiving the most significant through to register 77 receiving the least significant bit. Each shift register is itself of 8 bit storage capability.
Thus 1 bit of 8 consecutive picture points is stored in each shift register under the control of the picture point clocks which are provided in standard manner from the write timing block 61 of Fig. 2. After the receipt of 8 picture points a pulse is provided from the divide-by-8 device 87 within control 64. This pulse clocks the data held in the respective shift registers into the respective 8 bit registers 78-85. Further incoming picture points then pass to the shift registers as before. The data in the registers 78-85 constitutes the 64 bit block illustrated in Fig. 3. Each time slot is provided by decoding the binary output of device 87 into 1 of 8 lines so that the generation of time slot 1 therefrom causes a tri-state output enable to allow the data within registers 78 to 85 to be available to the common input highway 56 of Fig. 2. Dependent on operational requirements, the registers 78 to 85 may have more than one 8 bit storage location so that one location can be available to receive data from the shift registers whilst another already holding data can be available for exit to the input highway.
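The shift-register assembly just described can be modelled in software as follows (a hedged sketch: the function name and bit ordering are illustrative, not taken from the hardware):

```python
# Software model of the input buffer: shift register i collects bit i
# of eight consecutive 8-bit pixels, and all eight registers are then
# latched together as one 64-bit block.
def assemble_block(pixels):
    """pixels: eight 8-bit values -> eight 8-bit shift-register words."""
    assert len(pixels) == 8
    registers = []
    for bit in range(7, -1, -1):      # MSB register first (cf. 70..77)
        word = 0
        for p in pixels:
            word = (word << 1) | ((p >> bit) & 1)
        registers.append(word)
    return registers                  # 8 words x 8 bits = 64-bit block

# Every bit of every pixel is 1, so each register latches 0xFF:
assert assemble_block([0xFF] * 8) == [0xFF] * 8
```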
The picture point clocks, the output from divider 87 and one of the respective outputs from decoder 88 are made available to the other buffers 31-37 of Fig. 2. The output buffers 48-55 and the time slot control 68 are of similar construction, although the operation is reversed in that data passes to the registers as a block of 64 bits which is then entered into the shift registers and shifted out picture point by picture point under picture point clock control. The input and output buffers which are dedicated to computer use can be simplified if desired by omitting the shift registers so that input data for example from the computer is passed directly to one of the registers equivalent to 78 to 85 by using a 1 of 8 decoder driven directly by the three least significant bits of the computer address.
In such a configuration interface 58 would also provide a signal to indicate which one of the 8 frame store sectors should be "write enabled". Similar comments apply to data passing from the output highway 57 to the computer.
Image Planes

As already explained, the system of the above form could be constructed to incorporate up to 256 frame stores sharing the common highway using appropriate time slots and addressing.
By producing a system incorporating common highways and frame stores it is possible, by manipulating the addressing provided by control 65, to provide additional facilities for varying the storage allocations of data and the portions of that data retrieved for further processing or display. Areas which make up a given picture of interest will be referred to as image planes. It is possible to define the image plane to be used, as now described, to allow areas within specific pictures to be looked at dependent on manipulations carried out within address control 65.
Fig. 5 shows three examples of image planes selected from a possible 256 frame area which could be handled by increasing the number of frame stores from 10 to 256.
These additional frame stores would simply be added to share the input and output highways 56 and 57 and the write/read address output 66 from control 65. Even with the 10 stores shown, image planes of a size up to 10 frames can be constructed. A plane could be equivalent to all ten frames down to single frame size.
A large selection of predefined image planes could be stored within address control 65, for example, each identified by a number; entering the identification number would cause the system to select that image plane with the predetermined dimensions. In this example the first image plane comprises a group of 20 stored frames (frames 1 through 20) of the possible 256. In image plane no.
2, the image plane comprises 16 stored frames (frames 21 through 36). In image plane no. 3, 6 stored frames are present (frames 37 through 42). Although the three examples have image planes constituted by data from dissimilar frames, this is not necessary and another image plane could be constructed from say frames 19-22. As already explained, because of the eight time slots, a different selected image can be processed during these respective time slots. The actual frame area accessed for a write or read operation from within the available image plane is shown by the broken lines and in practice would typically be under joystick control.
Movement of the joystick will give rise to an X offset and a Y offset as shown. Thus, considering a read operation, for image plane 1 with the frame area accessed being as shown in the example, then because of the common output highway, data can be retrieved from frame stores 7, 8, 12 and 13 without problem and the frame so constructed will look like a normal frame (i.e. although the frame is constituted from information from several frame stores, it will not be degraded where information from one frame store ceases and another commences). Thus the boundaries between frames 7, 8, 12 and 13 need not be present in the displayed image. For the system to operate automatically it is necessary to include a mechanism (e.g. within address control 65) capable of determining from the selected joystick window position the relevant frames and more specifically the desired picture points within the frames to be accessed to construct the desired frame. A way in which the correct addressing can be computed is shown in Fig. 6, which now constitutes the address control 65 of Fig. 2. Its operation will be considered during a read time slot during which it receives read clocks from read timing block 63. The arrangement would operate in a similar manner in a write operation, in which case clocks are received from timing block 61.
The display address counting (512 picture points, although this could be expanded if desired for reasons described above) is provided by X and Y counters 160 and 161 respectively. These counters respectively receive the normal picture point and line clocks from the read timing block 63. To allow for possible scrolling an X and Y offset selector 162, 163 respectively is also provided to generate the desired offset dependent on joystick position for that particular time slot. Thus these selectors may comprise RAMs with offset values derived from joystick positions stored therein and one of which is output in dependence on the time slot number (1 of 8).
The maximum offsets for a 256 frame store system (16 x 16) would be 8192 pixels. Thus, returning to the Fig. 5 example, the X offset position has been selected to be equivalent to 620 pixels for image plane no. 1, chosen for the first time slot say. The Y offset selected is equivalent to 800 pixels for the joystick position shown in the first image plane. The X and Y counters 160 and 161 are incremented picture point by picture point such that the resultant address for X and Y respectively is provided by adders 166 and 167, which take into account the actual count plus the offset.
In this example the X and Y address are 108 and 288 respectively and are identified by J and K in the image plane no. 1 of Fig. 5.
Thus this address for X and Y is made available to the address highway 66 for that time slot. In addition the frame number is required for which that particular address is to be used and its computation is now explained, with reference to Fig. 6 and Fig. 7.
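The counter-plus-offset computation just described can be sketched as follows (a minimal model assuming 512 picture points per frame store dimension; the function name is illustrative, and the letters J, K follow Fig. 7):

```python
# Model of adders 166/167: counter value plus joystick offset, split
# into a whole-frame count and a remainder picture-point address.
FRAME_SIZE = 512   # picture points per frame store dimension

def split(count, offset):
    total = count + offset
    return total // FRAME_SIZE, total % FRAME_SIZE  # (frames, remainder)

# Image plane no. 1 example: X offset 620, Y offset 800, counters at 0.
H, J = split(0, 620)
I, K = split(0, 800)
assert (H, J) == (1, 108)   # X: 1 whole frame, address J = 108
assert (I, K) == (1, 288)   # Y: 1 whole frame, address K = 288
```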
As already mentioned, the operator can define a list of image plane configurations, examples of which are shown in Fig. 5, and he can choose to use a particular image plane for a given time slot, which plane is identified by its image plane number. The number of the image plane selected for each time slot is entered into plane store 165, which number could simply be selected using standard digital thumbwheel switches for example or conveniently using the computer keyboard. The identity of each image plane for each of the time slots will be output in turn under the control of block 68 of Fig. 2. This data is made available from the image plane number store 165 and, dependent on the image plane selected, will be used to access frame data from X length store 168, Y length store 174 and offset store 172, each comprising a look up table. These look up tables may constitute a pre-programmed ROM or a RAM loaded with image plane data via computer 59.
Frame quantity X length memory 168 holds data on each X length for the respective image planes available in the system and in dependence on the identity data received will output the desired length (identified as output B in the list in Fig. 7 and also in Fig. 6). For image plane no. 1 it equals 5 frame stores.
Similarly the Y length memory 174 provides the number of frames for that particular image plane. Thus for image plane no. 1 this equals 4 frames (see output C in Figs. 6 and 7).
In practice the outputs A to C of Fig. 7 are fixed for a given image plane, with the offsets (D and E) being dependent on the joystick position or some other means for generating the relative quantity used to access the selectors 162 and 163. The computation for the X and Y address (see J and K) will be made for each picture point following increments of the counters but in the table of Fig. 7, only the address start for image plane no. 1 is shown with counters 160 and 161 being zero (see F and G). Similarly with image planes no. 2 and 3 the start is shown, but for plane no. 3 the end of the frame calculation is also shown to illustrate the blanking aspect described later.
Returning to image plane no. 1, the desired frame number is identified using adders 171 and 173 and multiplier 169. The adder 166 produces two outputs. The output H produces the quantity of whole frames resulting from the summation whilst output J is the 'remainder' of picture points from this sum. In this example the number of whole frames is 1 and the remainder is 108 pixels, which defines the X address as already explained. A similar situation occurs for the Y adder 167, giving a resulting 1 for I and 288 for K.
The frame quantity output (I) from adder 167 is received by multiplier 169, which multiplies this number by the output from store 168, and the result is used in adder 171, to which the whole frame X quantity (H) is added before receipt by further adder 173.
In adder 173 the frame number offset (A) read out from memory 172 is added to the output from adder 171. This addition step gives the frame store number (L) with which the address is to be associated (in this case frame store number 7 as shown in Fig. 5). This computation is effected for each time slot and the address is incremented accordingly.
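The arithmetic performed by adders 166, 167, 171 and 173 and multiplier 169 can be sketched in software as follows. This is an illustrative model only; the frame dimensions (576 x 576 picture points per store) and the function and parameter names are assumptions, not part of the disclosure:

```python
# Assumed frame-store dimensions, chosen so that the worked example of
# Fig. 7 (remainders 108 and 288, frame store 7) comes out consistently.
FRAME_W, FRAME_H = 576, 576

def frame_address(x, y, x_len, frame_offset):
    """Map an image-plane coordinate to (frame store number, in-frame address).

    x, y         -- pixel position within the image plane (counter plus offset)
    x_len        -- image-plane width in frame stores (output B)
    frame_offset -- frame number offset of this plane (output A)
    """
    frames_x, rem_x = divmod(x, FRAME_W)   # adder 166: outputs H and J
    frames_y, rem_y = divmod(y, FRAME_H)   # adder 167: outputs I and K
    # multiplier 169 and adders 171, 173: L = A + I * B + H
    store_no = frame_offset + frames_y * x_len + frames_x
    return store_no, (rem_x, rem_y)

# Worked example matching the text: 1 whole frame plus 108 pixels in X,
# 1 whole frame plus 288 lines in Y, plane width 5, offset 1 -> store 7.
print(frame_address(684, 864, 5, 1))
```

Under these assumed dimensions the call reproduces the Fig. 7 column for image plane no. 1: frame store number 7 with in-frame address (108, 288).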
Whilst the above blocks are all that are needed to calculate the desired locations, where the joystick or equivalent control is allowed to move partially outside the image plane, additional elements are used to provide blanking as necessary (see Y length memory 174, X limit block 175, Y limit block 176 and OR gate 177, with outputs C, M, N and P respectively). The output (C) of Y length memory 174 is used for comparison with the frame quantity output (I) of adder 167, to detect in checker 176 whether this former value remains larger than I, with I greater than or equal to zero, using standard logic techniques.
Similarly the output (B) of X length memory 168 is compared with the H output of adder 166 to determine in checker 175 whether the value of B remains greater than H, with H greater than or equal to zero.
As shown in the last column of Fig. 7, the computation of frame 45 (see L) is achieved, but due to the presence of OR gate 177, the output P provides blanking during this interval. (It is clear from the illustration in Fig. 5 that blanking will also occur during frame 44, resulting from calculations on the pertinent pixels within the desired frame.)
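The limit checks feeding OR gate 177 amount to two range comparisons on the whole-frame counts. A minimal sketch, assuming the checkers simply compare the counts against the plane dimensions (names are illustrative):

```python
def blanked(frames_x, frames_y, x_len, y_len):
    """Checkers 175/176 and OR gate 177 (sketch): blank when the computed
    whole-frame counts fall outside the selected image plane."""
    x_ok = 0 <= frames_x < x_len   # checker 175: B remains greater than H
    y_ok = 0 <= frames_y < y_len   # checker 176: C remains greater than I
    return not (x_ok and y_ok)     # output P: blank if either check fails
```

For image plane no. 1 (5 frames wide, 4 high), a position inside the plane such as (1, 1) is not blanked, whereas one column beyond the right edge, (5, 3), is.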
Thus having chosen the desired image plane, the relevant data is accessed from the frame stores and made available on the common output highway 57 for receipt by the relevant output buffer 48 through 55. Similar comments apply to the write operation using highway 56. The Fig. 6 arrangement would then use clocks from write timing block 61 and time slots from control 64. Such an arrangement is still flexible enough when required to allow single frames to be dealt with.
For example, by defining image plane numbers 50 to 57 (say) to relate to single image stores 1 to 10 (i.e. stores 38 to 47 of Fig. 2) then by choosing image plane number 50 for the first time slot for example then input buffer 30 would have access to store 38 alone if no offset is introduced via generators 162 and 163. For computer time slots 7 and 8 the display counters 160 and 161 are expediently disabled as shown so that the X and Y offsets themselves pass without modification through adders 1 66 and 167 so that these offsets defined by the computer in known manner effectively form the particular location to be accessed and in this manner the computer accesses the frame stores via the image plane mapping control 65.
As each time slot is independent, up to 6 operators (using the embodiment of Fig. 2) could be accommodated. Not only do they have the ability to share the frame storage facilities but they can use the same data as well if required. If colour operation is required then two operators could use 3 ports each on a a RGB basis.
The system described in relation to Fig. 6 has the capability of choosing the window under joystick control to an accuracy of a single picture point. In practice, because the data being passed to and from the stores is in blocks of 64 bits, this limits the window in the X direction to the accuracy of the block (i.e. 8 picture points). Where there is a requirement to achieve offset control to a single picture point, it is necessary to provide a variable delay, dependent on the offset required, in order to cause the correct initial picture point to be the first in a given block of 8. A device capable of providing this operation is shown in Fig. 8. This device, comprising 8 x 8 bit RAM 178, N bit counter 179 and register 180, forms a digital delay which would be provided in each of the input buffers and output buffers, for example. Thus, considering the buffer 30 of Fig. 4, the RAM 178 would be connected between input port 1 and shift registers 70-77, so that 8 bit incoming video data passes to the buffer shift registers via the RAM. The RAM has 8 possible address locations, each storing one 8 bit word for a given picture point. The address is defined by the output of N bit counter 179, which output depends on the picture point clocks received. In the case where N = 8, considering eight Read/Write operations, in the first operation no data is available for read out, but the first picture point is written into the first RAM location during the write portion of the cycle. This process continues under clock control until all 8 locations hold data on one picture point each; the counter 179 then resets and the RAM is addressed at its first location once again. This time location 1 provides the first picture point for read out and receives the 9th incoming picture point for storage during this Read/Write operation, and the process continues until picture points 9-16 have all been sequentially stored, points 1 to 8 having been sequentially read out.
Thus it can be seen that the RAM provides a delay period equivalent to 8 picture points. Varying the value of N causes the delay to be varied.
When N = 3, the counter only counts 3 incoming picture point clocks before resetting, so that the delay period before the addressing returns to a given location is reduced from 8 to 3 picture point clocks. Thus the first picture point goes into location 1, the second and third into their respective locations, but then the counter resets and the first picture point is read out, the first location now receiving the fourth picture point, and so on. Choosing a value of N defines the delay required, and for a given time slot this value effectively controls the offset for that buffer. The value of N for the given time slot associated with that buffer may be stored in register 180. In practice it is convenient to make use of the output of X offset block 162 to define the value of N for a given time slot. The 3 least significant bits of the offset provided by block 162 correspond directly to the value of N up to 8, and the separate register 180 can be omitted if desired.
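The read-before-write circular addressing of RAM 178 and counter 179 can be modelled as follows. This is a behavioural sketch only; the class and method names are assumptions:

```python
class PixelDelay:
    """Model of the Fig. 8 variable delay: an N-deep circular RAM that
    delays 8-bit picture points by N picture-point clocks (N = 1..8)."""
    def __init__(self, n):
        assert 1 <= n <= 8
        self.ram = [0] * n   # RAM 178 (only N of the 8 locations are cycled)
        self.addr = 0        # N bit counter 179

    def clock(self, pixel_in):
        out = self.ram[self.addr]        # read the old pixel first...
        self.ram[self.addr] = pixel_in   # ...then write the new one
        self.addr = (self.addr + 1) % len(self.ram)  # counter wraps at N
        return out

# With N = 3, the first three reads are empty, then pixels emerge
# delayed by exactly three picture-point clocks.
d = PixelDelay(3)
outs = [d.clock(p) for p in [1, 2, 3, 4, 5, 6]]
print(outs)  # [0, 0, 0, 1, 2, 3]
```

Varying N between 1 and 8 varies the delay correspondingly, which is the behaviour used to achieve single-picture-point offset control.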
A similar process operates for the output buffers; the delay would be provided between the shift registers (within the buffer) and the output port, for example.
The powerful and flexible handling capabilities of the Fig. 2 arrangement using time slots and common highways will now be illustrated by way of example with reference to the Fig. 9 configuration, which shows a digital video compressor 181 connected between the output port 1 and the input port 2. A first monitor 182 is connected to output port 2 and a second monitor 183 is connected to output port 3. The address control 65 is influenced by the joystick 184 as described above. Although a joystick has been used, this may be substituted by an equivalent manually controlled device, e.g. tracker-ball or touch tablet. Only part of the highways and part of the total number of frame stores are included for simplicity. Assume that the system stores hold data previously entered, say via computer 59, which relates to slow scan video information and encompasses an area equivalent to a number of frames. Choosing a given image plane, illustrated as constituting frames 1 to 9, this entire image plane can be systematically accessed and passed via the output buffer 48 and the first output port to the compressor 181 during the first output time slots.
The compressor 181 is effectively a zoom down device which electronically reduces the size of the image plane, which in this example is shown as 3 x 3 frames. Such techniques of size reduction are already known from U.S. Patent No. 4,163,249, and in this instance a fixed degree of compression will be required for a particular selected image plane. Larger image planes (see image plane no. 1 of Fig. 5) will have to receive a greater degree of compression than smaller image planes. The required degree can be selected from a look up table, in the same way as the elements within the Fig. 6 arrangement respond to the output of block 165 dependent on the selected image plane number. The resultant frame, reduced from the 9 frame matrix as it is generated, is passed into an empty frame store via input port 2 during the second input time slots.
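As a toy stand-in for compressor 181, the zoom-down operation can be sketched as plain subsampling. This is purely illustrative: a real compressor such as that of U.S. Patent No. 4,163,249 filters the image before decimating to avoid aliasing, and the function name is an assumption:

```python
def compress(plane, factor):
    """Crude zoom-down by taking every `factor`-th pixel and line.
    `plane` is a list of pixel rows spanning the whole image plane;
    a 3 x 3 frame plane would use factor 3 to fit one frame store."""
    return [row[::factor] for row in plane[::factor]]

# A 4 x 4 toy "image plane" reduced by a factor of 2.
plane = [[r * 4 + c for c in range(4)] for r in range(4)]
print(compress(plane, 2))  # [[0, 2], [8, 10]]
```

The degree of compression (the `factor` here) would be looked up per image plane, exactly as the text describes for the other Fig. 6 elements.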
The stored compressed image can be monitored by gaining access to this data during the second output time slots and passing it via buffer 49 and the second output port to the monitor 182.
On viewing the compressed image of that image plane on monitor A, this is seen to contain a tree. By moving the joystick so as to encompass this portion of the image plane of interest it is possible to monitor this tree at full size using the monitor 183. This is possible because the output buffer 50 for the third time slots can be given access to the same data available in the first time slots, as data from the frame stores is not destructively read out and remains in store unless written over.
Although the frame is constructed from data from parts of frames 4, 5, 7 and 8 from different stores the resultant image is not degraded due to the common highway configuration.
Although the inputs from the buffers are shown as being directly received by the monitors, in practice these will typically pass via digital to analogue converters and via processor amplifiers where the necessary synchronising signals will be added. Normally the frame numbers within the image plane and the frame boundaries need not be displayed on monitor A, as the system makes these transparent to the operator. In practice the frame numbers and the frame boundaries on monitor B would likewise typically not be shown, but purely an image transparent to any frame edges. As the joystick is moved, the image on monitor B will pan or tilt whilst that on monitor A will be fixed. If a border defining the selected window is provided, this will change in sympathy with the image on monitor B under joystick control. Such a border can be generated using standard cursor generator techniques. Where an operator is dealing with colour information, additional ports can be brought into play to assemble the desired image.
Additional Capabilities
Although the processing provided by compressor 181 is shown routed externally from an output port and back to an input port of the system, this and other processing can be accommodated within the system itself when the Fig. 2 arrangement is suitably modified.
To illustrate this Fig. 10 shows an arrangement based on Fig. 2 but expanded to include additional manipulation capabilities. Only one of the input and output ports together with associated components is shown for simplicity as is only a single frame store associated with the common highways 56 and 57.
Composite analogue video information is received at the first input port and is passed to analogue to digital converter 201. The incoming video is also received by sync separator 60, which passes the sync information for use by the store address and control block formed by elements 61-68 of Fig. 2, which provides store addressing as described above for each of the frame stores (only frame store 38 is shown). The addressing can be controlled when desired both by the image plane mapping selected for a given time slot and by the joystick control 184 in the manner already described.
The timing highway available from control 61-68 to computer interface 56, high speed processor 232 and digital to analogue converter 221 synchronises the system operation.
The digital video from converter 201 instead of passing direct to input multiplexing buffer 30 passes via recursive processor 209 constituted by look up table 210, arithmetic and logic unit 211 and look up table 212. However, when desired, by suitable setting of the LUT and ALU the path will be transparent to the data to correspond to the Fig. 2 situation.
Data from multiplexer 30 passes to the common input highway 56. Data read out from the frame stores passes from the common highway 57 to output multiplexer (buffer) 48 and thence via LUT 220 to the digital to analogue converter (DAC) 221. Syncs etc will be added after conversion as necessary to provide composite video.
In practice ADC 201, processor 209, LUT 220 and converter 221 would typically be repeated six times to provide the 6 input and output ports. Each of the six ADCs 201 can provide data onto any of the six early buses 238.
The computer has access to enter and retrieve video data as before under the control of the multiplexers 36 and 54, providing input and output buffering respectively. A number of LUT map blocks 225-230 are provided, as is control bit dropper 231, for reasons described below. A high speed processor 232 is also shown. The system includes several buses, viz. late bus 237, early bus 238, control bus 236 and M-bus 235, which can be bi-directional in nature to allow interaction between the various system blocks to which they are linked.
M-bus
This bidirectional bus is used to allow data to pass to and from the various blocks at computer rate (e.g. 50 kHz per picture point) or high speed processor rate (e.g. 1 MHz per picture point). Thus data manipulated by the computer 59 can pass to and from this bus via interface 56. Non-video data from the computer can also use this bus, for example to select a given image plane or store address by feeding the instructions to control 61-68, and for controlling the LUTs. Thus this manipulation bus (M-bus) allows additional system flexibility.
Control Bus
This bus is provided to allow data stored within the frame stores to be used as control data to influence system operation. This is achieved by passing data to control bit dropper 231 to generate the necessary instructions for the LUT map blocks 225-230.
Early and Late Buses
Although the system incorporates multiple input and output video highways 56 and 57, which allows interaction between channels when the data is in digital form, by adding two other video buses 237, 238 as shown, still greater manipulating power is achieved.
These buses have a fixed time relationship respectively to the store input and output timings and can be used for recursive video functions. These buses are bi-directional allowing both feedback and feedforward operation.
As already mentioned, there is a fixed delay between the commencement of a store read cycle and the commencement of a frame store write cycle, determined by the delay block 62 of Fig. 2. This situation is illustrated in the timing diagram of Fig. 11, where the start of a read cycle is represented by waveform (f) and the write cycle by waveform (e). This fixed delay may be the equivalent of 15 picture point clocks, for example. The start of the incoming video is represented by waveform (a), and this will be delayed by its passage through ADC 201 as represented by waveform (b). This waveform also represents the timing on the early bus, so that both incoming and previously stored data can have the same timing relationship. The late bus timing is represented by waveform (c), and the processor 209 and LUT 220 will each be provided with an inherent delay equivalent to the difference between the early and late bus timing, so as to allow data passage without interference or degradation. The delay produced by the passage of data through DAC 221 is represented by waveform (d).
The provision of the early bus allows data to be passed back and handled by the processor 209 as though it was incoming data from an input port. The processor is shown as a recursive processor, but could comprise other types of hardware providing fixed function video rate processing (e.g. 10 MHz per picture point).
An example of recursive operation is now described with respect to spatial filtering.
a) Spatial filtering
The requirement of spatial filtering is to use data from adjacent picture points to compute the picture point of interest so as to result in an enhanced picture. Such filtering can be effected on part or all of the frame. In the Fig. 12 example the spatial filtering is shown as being selected to comprise a matrix of 9 picture points P1 through P9. To enhance point P5, data is required from the 8 surrounding picture points within the kernel matrix. Because the system has access to several frame stores simultaneously with independent addresses, it is possible to use recursive processor 209 of Fig. 10 to evaluate the matrix multiplication very rapidly, as shown by Fig. 13, which constitutes the relevant circuit portions transposed from Fig. 10. Thus, by scrolling the frame by 1 picture point relative to another, it is possible to read out data from the relevant frame store and pass it via the early bus to the processor 209, so that it receives picture point N to effect the desired manipulation, where it is multiplied by coefficient KN selected via the M-bus from the computer.
The result passes via buffer 37 to the stores for storage in an available location and for subsequent read out during later steps. So the steps continue until the entire frame (if required) is enhanced.
For the particular matrix shown the computation would proceed as 9 steps as follows, where A = original picture data and B = computed data at each step:

Step 1: B1 = K1 x A(x-1,y-1)
Step 2: B2 = B1 + K2 x A(x,y-1)
Step 3: B3 = B2 + K3 x A(x+1,y-1)
Step 4: B4 = B3 + K4 x A(x-1,y)
Step 5: B5 = B4 + K5 x A(x,y)
Step 6: B6 = B5 + K6 x A(x+1,y)
Step 7: B7 = B6 + K7 x A(x-1,y+1)
Step 8: B8 = B7 + K8 x A(x,y+1)
Step 9: B9 = B8 + K9 x A(x+1,y+1) = result

Thus the first step in processor 209 uses only data at input A, but each subsequent step also uses the result of the previous calculation, received at input B for summation with the newly accessed data. Central picture point P5 in the matrix is designated as being at location x,y.
The computation of an entire frame will take 9 frame time periods (0.36 sec). Similarly a 5 x 5 selected matrix will take 1 second and a 9 X 9 matrix will require 3.24 secs. This is a considerable improvement on known computer controlled systems which can take several minutes to compute such enhancement.
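The nine-step recursive accumulation above can be modelled in software. This sketch applies each coefficient in a separate pass over the whole image, accumulating into a result array, just as the processor accumulates one scrolled frame per frame period; edge pixels are treated as zero purely to keep the sketch short (the hardware's edge behaviour is not specified here), and the function names are assumptions:

```python
def spatial_filter(img, kernel):
    """Nine recursive passes of a 3 x 3 kernel over `img` (rows of pixels).
    `kernel` lists K1..K9 in the raster order of the steps above."""
    h, w = len(img), len(img[0])
    acc = [[0.0] * w for _ in range(h)]   # input B: running accumulation
    step = 0
    for dy in (-1, 0, 1):                 # one scroll position per pass
        for dx in (-1, 0, 1):
            k = kernel[step]
            step += 1
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:   # zero outside the frame
                        acc[y][x] += k * img[sy][sx]  # input A times KN
    return acc
```

With the identity kernel (K5 = 1, all other coefficients 0) the nine passes reproduce the original image, which is a convenient sanity check of the step ordering.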
By inclusion of late and early bus structure, any desired processing function can be placed between the two as illustrated by the recursive processor 209.
In practice the processor 209 can be placed anywhere between the early and late buses, as now shown in Fig. 14. A bypass block 239 is then needed to cope with the fixed delay period between the early and late timing. The bypass can simply use RAM techniques for delaying the data by a fixed period equivalent to the picture point clocks between the two bus timings; RAM delay techniques have already been described in relation to Fig. 8. Other hardware processors can also be coupled between the buses, as illustrated by the real time spatial filter 240 and the multiplier/accumulator 241. A given processor will have an inherent delay dependent on how it manipulates the data received. If the inherent delay is less than the delay relative to the buses, additional delay using a RAM can be provided for that processor so that it is compatible with the system. This also applies to LUT 220.
It can thus be seen that it is expedient to have a fixed time delay between store read and write cycles of sufficient magnitude to allow the early and late bus timing to accommodate any desired processor whilst maintaining rapid operation.
b) Feedforward to display
By using the early bus as a feedforward device it is possible (see Fig. 14) to take the incoming data and make it available directly to display without intermediate frame storage.
The LUT 220 if required can effect output processing on the data prior to receipt by the DAC and such processing could include gamma correction, transformation, graphical overlay and windows. If the processed data is required for retention it can be passed via the late bus back to the frame stores after receipt by the input multiplexer 30 for example.
The late bus allows the results from the recursive processor 209 to be fed forward to monitor the results. The processor 209 is capable of effecting other calculations such as subtraction to operate as a movement detector for example and the result displayed via the late bus. Any data required for storage can be retained by means of the late bus operating in reverse to pass data back to input multiplexer 30.
c) Self-Test
The buses can also be used to provide a self test capability. A selected programme generates data for storage in one frame store, commands the output processor 220, and allows the resultant data to be fed back via the late bus to another frame store. The actual result is compared with the expected result to see if any errors have arisen. In a similar manner the early bus can be used to simulate the ADC output to check the integrity of the input processor 209 and associated elements on the input side of the system.
Instruction Data using the Control Bus
The system as described in Fig. 10 has the capability of using the integral frame storage to accommodate system instructions as though they were normal data. The instruction data may switch the entire system into different states on a pixel by pixel basis using the control bus. The data from the output buffer 48 is available to the control bus via bit dropper 231. The bit dropper selects which bits from a particular buffer are made available to this bus.
The instruction data is used to select a particular map function from map LUT 225-230 dependent on the data content on the control bus.
In practice, as 8 bits are required for the control bus, 8 droppers will be used, as now shown in Fig. 15. The 8 bit data from each of the output buffers 48-53 is received by each bit dropper 231A-231H. With 6 output buffers the total number of bits available at each dropper is 48. The bits selected are under the control of the computer via the M-bus 235.
The single bit from each dropper contributes to an 8 bit output for passing to the video rate control bus 236, where it is received by map blocks 225-230, which provide the selected function in dependence on the bus data. Examples of the control functions are input write protect (via map block 227), recursive video processing (via map block 226), output window, cursor or overlay (via map block 230), video path switching (via map blocks 227 and 229) or transformation control (via LUT block 220). A given function can change on a picture point by picture point basis if required.
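The bit-selection performed by droppers 231A-231H can be sketched as follows. The representation is an assumption: each dropper's selection is modelled as a (buffer, bit) pair chosen by the computer, and the eight selected bits are assembled into the control-bus byte:

```python
def control_byte(buffers, picks):
    """Model of the eight bit droppers: `buffers` holds the six 8-bit output
    buffer values (48 bits in total presented to each dropper); `picks` holds
    eight (buffer_index, bit_index) selections, one per dropper. The eight
    selected bits form the byte placed on the control bus."""
    byte = 0
    for pos, (buf, bit) in enumerate(picks):
        byte |= ((buffers[buf] >> bit) & 1) << pos
    return byte

# Example: pick the set bit from each of six one-hot buffer values, plus
# two zero bits, giving the control byte 0b00111111.
buffers = [1, 2, 4, 8, 16, 32]
picks = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (0, 1), (1, 0)]
print(control_byte(buffers, picks))  # 63
```

Because the buffers change every picture point, the resulting control byte, and hence the selected map function, can also change on a picture point by picture point basis.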
Area Switching for Output Transformation
As already mentioned, one of the frame stores could be used to define particular areas on the image as displayed on the monitor, and this data can be used as controlling information rather than normal video data. This stored data will itself be used to control the look up tables, so that different functions will be achieved by the look up tables dependent on the particular area being shown (see Fig. 16).
In practice up to 16 different transfer functions could be selected by using 4096 x 8 LUTs and 4 control bus bits.
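The 4096 x 8 arrangement follows from the addressing: 4 control bus bits select one of 16 transfer functions and the 8 video bits index within the selected function. A software model under that assumption (the function names are illustrative):

```python
def build_lut(functions):
    """Build a 4096-entry LUT from 16 transfer functions, each a callable
    mapping an 8-bit video value to an 8-bit output. Address layout assumed:
    4 control bits (high) concatenated with 8 video bits (low)."""
    assert len(functions) == 16
    return [functions[addr >> 8](addr & 0xFF) & 0xFF for addr in range(4096)]

def lut_out(lut, ctrl4, video8):
    """One look-up: the control-bus nibble picks the transfer function."""
    return lut[((ctrl4 & 0xF) << 8) | (video8 & 0xFF)]

# Function 0 is the identity; functions 1..15 add a constant (mod 256),
# standing in for 16 different per-area transfer curves.
funcs = [lambda v: v] + [lambda v, k=k: (v + k) % 256 for k in range(1, 16)]
lut = build_lut(funcs)
```

Switching the 4 control bits per picture point then switches the effective transfer function per area, with no change to the video data path itself.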
Graphic Store
By expanding the look up table 220 at the output as described above and loading constants into all the transfer functions except 0, any combination of desired graphics overlay can be accommodated. The output will be switched to a particular colour as defined by the LUT whenever the graphics overlay is required. Since the control bus may receive data from any video store, the video store effectively becomes a graphic store using the control bus. Thus there is no longer any need for a dedicated graphic store.
High Speed Processor
The high speed processor 232 of Fig. 10 is included to allow real or near real time processing. Such a processor may comprise a programmable bit slice microprocessor. Unlike previously disclosed processors in the video art, this processor is programmable and can be used to manipulate (image process) the data in the stores and also control the system in a similar manner to the host computer 59, but much faster (typically 1 MHz per picture point). It can be programmed by means of the host computer 59. This high speed processor can be used, for example, for rapidly effecting fast Fourier transforms, image rotation or intensity histograms.
It can be seen that the system described allows great flexibility and power of processing. Because of the time slot allocation, the system is not restricted to a single in/out port with access by the computer only during the blanking interval. As the time slots are of short duration this allows rapid sequencing and the system can thus handle real time data from a number of ports and effectively continual access by the computer if desired. Thus to an operator it will seem that he has continuous access to the system even though in practice this access is sequential rapid access as far as the highways are concerned.

Claims (30)

1. A video image processing system comprising: at least one of a data input port and a data output port; a plurality of frame stores each capable of storing data equivalent to a frame of video information; a common input highway for the frame stores and adapted to receive data for storage in the frame stores; a common output highway for the frame stores and adapted to provide data from the frame stores; processing means adapted to have access to at least one of the common input and output highways respectively to effect processing of the data available therefrom; and control means for controlling the passage of the data to and from the common highways to allow the processing means and the at least one of the data input and output port to independently gain access to the frame stores for a given period on a time multiplex basis.
2. A system as claimed in claim 1, wherein the control means is adapted to allow independent access to the highways for a time period equivalent to at least one picture point.
3. A system as claimed in claim 1 or 2, including multiple input ports, multiple output ports and wherein said control means is adapted to effect time multiplexing of the processing means and each of the input and output ports respectively to the input and output data highways.
4. A system as claimed in claim 3, wherein the input ports each include a data buffer for holding input data prior to receipt by the common input highway during a designated time period and the output ports each include a data buffer for holding output data following receipt from the common output highway during a designated time period.
5. A system as claimed in claim 4, wherein the input data buffers are adapted to assemble incoming data into data blocks equivalent to a predetermined number of picture points prior to receipt by the input highway.
6. A system as claimed in any one of claims 1 to 5, including accessing means for allowing data to or from the frame stores to be handled so as to define at least one image plane comprised of a plurality of picture points in excess of that available from a single video frame.
7. A system as claimed in claim 6, wherein the accessing means includes a memory for holding positional information on an image plane so as to identify the one or more frame stores required to be accessed and their relative frame positions for a chosen image plane and for determining the locations of the desired picture points within a given frame on a picture point by picture point basis.
8. A system as claimed in claim 7, wherein the accessing means includes an offset generator for producing an offset to modify the area selected from within the image plane.
9. A system as claimed in claim 8, wherein the offset generator is controlled by manually operable means.
10. A system as claimed in claim 8 or 9 wherein adjustable delay means are provided to delay the passage of data for a period dependent on the offset generated.
11. A system as claimed in any one of claims 7 to 10, wherein a blanking device is provided to prevent degradation of the video data whenever the offset causes the accessed image to move outside the selected image plane.
12. A system as claimed in any one of claims 7 to 11, including compression means adapted to receive data available from the common output highway during first multiplexed periods for allowing a reduced size image representative of a selected first image plane to be reproduced into a second image plane for subsequent display during second time multiplexed periods.
13. A system as claimed in claim 12, wherein the accessing means is adapted to route data corresponding to an accessed area within the selected image plane for display separately but concurrently with the reduced size image of the entire image plane and for incorporating a border within the compressed image corresponding to the accessed area.
14. A system as claimed in any one of claims 1 to 13, wherein the control means includes a fixed delay device to provide a fixed delay between the commencement of the store reading and writing operations whereby the system operates synchronously but with a fixed time differential.
15. A system as claimed in claim 14, wherein the delay device is adapted to provide a delay equivalent to a fixed number of picture point clock periods.
16. A system as claimed in claim 14 or 15, wherein a first and second bidirectional bus is provided having a fixed time relationship relative to the store system write cycles and read cycles respectively to allow additional access to the data for further manipulation, for feeding backwards for additional storage or for feeding forwards without intermediate frame storage.
17. A system as claimed in claim 16, wherein the number of first and second bidirectional buses provided corresponds to the number of input or output ports.
18. A system as claimed in claim 16 or 17, including at least one picture point processor connected between the first and second buses so as to gain access to both incoming data and previously stored data to produce a compatibly timed output dependent on data received from at least one of the buses.
19. A system as claimed in claim 18, wherein the processor is adapted to have access to the system output via the buses to allow the processed output to be passed to the system output without intermediate frame storage.
20. A system as claimed in claim 18 or 19, wherein the at least one processor is adapted to provide recursive processing of previously stored data with incoming data having the same timing relationship to effect temporal filtering.
21. A system as claimed in claim 18, wherein the at least one processor is adapted to manipulate data from adjacent picture points within a field to effect spatial filtering.
22. A system as claimed in claim 18 or 19, wherein the at least one processor is adapted to operate as a multiplier.
23. A system as claimed in claim 18, wherein the at least one processor comprises a look up table for effecting a transfer function dependent on the data received at its input.
24. A system as claimed in claim 23, wherein the look up table is adapted to allow any frame store to operate as a graphics store.
25. A system as claimed in any one of claims 16 to 24, wherein a bypass device is provided between the first and second buses to allow retimed data to pass therebetween.
26. A system as claimed in any one of claims 1 to 25, including a control bus for receiving control data and control selector means for receiving video or instructional data previously stored in the frame stores to provide data so as to effect control of the system processing via the control bus in dependence on the data received.
27. A system as claimed in claim 26, wherein the control selector means include a plurality of bit selectors and a look up table for selecting a specified function.
28. A system as claimed in any one of claims 1 to 27, including a high speed processor and wherein a manipulating data bus is provided to allow the processing means and the high speed processor to independently gain access to data on a time multiplex basis.
29. A system as claimed in claim 28, wherein the high speed processor comprises a programmable bit slice microprocessor.
30. A video image processing system substantially as described herein with reference to the accompanying drawings.
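The recursive temporal filtering of claim 20 — combining previously stored data with incoming data having the same timing relationship — can be sketched as a per-pixel first-order recursive blend. This is an illustrative Python model only, not the patented hardware; the coefficient `k` and the function name are hypothetical choices for the sketch.

```python
def temporal_filter(stored_frame, incoming_frame, k=0.5):
    """Blend an incoming frame into the stored frame pixel by pixel.

    Each stored pixel is combined only with the incoming pixel at the
    same position (the same timing relationship), giving temporal
    noise reduction over successive frames.
    """
    return [
        [round(k * new + (1.0 - k) * old)
         for old, new in zip(stored_row, incoming_row)]
        for stored_row, incoming_row in zip(stored_frame, incoming_frame)
    ]

stored = [[100, 100], [100, 100]]
incoming = [[200, 100], [0, 100]]
print(temporal_filter(stored, incoming, k=0.5))  # → [[150, 100], [50, 100]]
```

Static areas of the picture converge to their mean value while frame-to-frame noise is attenuated, which is the usual motivation for this kind of recursive store-and-blend arrangement.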
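The spatial filtering of claim 21 manipulates data from adjacent picture points within a field. A minimal sketch, assuming a 3x3 box average over interior points (the kernel size and edge handling are assumptions, not specified by the claim):

```python
def spatial_filter(field):
    """3x3 box average over interior picture points of one field.

    Edge points are passed through unchanged; interior points become
    the rounded mean of themselves and their eight neighbours.
    """
    h, w = len(field), len(field[0])
    out = [row[:] for row in field]  # copy, so edges pass through
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(field[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = round(total / 9)
    return out

field = [[9, 9, 9], [9, 0, 9], [9, 9, 9]]
print(spatial_filter(field))  # → [[9, 9, 9], [9, 8, 9], [9, 9, 9]]
```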
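Claims 23 and 24 describe a processor built around a look-up table that effects a transfer function on the data at its input. For 8-bit video samples this amounts to precomputing a 256-entry table and indexing it per sample; the function names below are hypothetical, and the inversion transfer function is just one example of an arbitrary mapping:

```python
def make_lut(transfer):
    """Precompute a 256-entry table for an arbitrary 8-bit transfer function."""
    return [transfer(v) for v in range(256)]

def apply_lut(lut, samples):
    """Map each input sample through the table, one lookup per sample."""
    return [lut[s] for s in samples]

invert = make_lut(lambda v: 255 - v)  # example: picture inversion
print(apply_lut(invert, [0, 128, 255]))  # → [255, 127, 0]
```

Because the table contents are arbitrary, the same hardware path can implement gamma correction, thresholding, or the colour-mapping that lets a frame store behave as a graphics store, as claim 24 suggests.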
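The time-multiplexed sharing of the manipulating data bus in claim 28 can be modelled as alternating bus grants between the processing means and the high speed processor. This is a behavioural sketch only; the slot ordering and the requester labels are assumptions for illustration:

```python
import itertools

def multiplex_access(processing_requests, high_speed_requests):
    """Interleave bus grants so two requesters share one data bus.

    Alternating time slots are offered to each requester; a requester
    with no pending request simply yields its slot.
    """
    grants = []
    for a, b in itertools.zip_longest(processing_requests, high_speed_requests):
        if a is not None:
            grants.append(('processing', a))
        if b is not None:
            grants.append(('high_speed', b))
    return grants

print(multiplex_access(['r0', 'r1'], ['w0']))
# → [('processing', 'r0'), ('high_speed', 'w0'), ('processing', 'r1')]
```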
GB8131096A 1980-10-17 1981-10-15 Video image processing system Expired GB2086690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB8131096A GB2086690B (en) 1980-10-17 1981-10-15 Video image processing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB8033536 1980-10-17
GB8131096A GB2086690B (en) 1980-10-17 1981-10-15 Video image processing system

Publications (2)

Publication Number Publication Date
GB2086690A true GB2086690A (en) 1982-05-12
GB2086690B GB2086690B (en) 1984-04-26

Family

ID=26277244

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8131096A Expired GB2086690B (en) 1980-10-17 1981-10-15 Video image processing system

Country Status (1)

Country Link
GB (1) GB2086690B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0114203A2 (en) * 1982-10-29 1984-08-01 Kabushiki Kaisha Toshiba An image processor
EP0114203A3 (en) * 1982-10-29 1986-08-13 Kabushiki Kaisha Toshiba Processor and method of processing image data
US4713789A (en) * 1982-10-29 1987-12-15 Tokyo Shibaura Denki Kabushiki Kaisha Processor and method of processing image data
GB2139846A (en) * 1983-05-09 1984-11-14 Dainippon Screen Mfg Magnifying or otherwise distorting selected portions of scanned image
US4672464A (en) * 1983-05-09 1987-06-09 Dainippon Screen Mfg. Co., Ltd. Method and system for recording a partially distorted image
EP0154300A3 (en) * 1984-02-28 1988-10-12 Mitsubishi Denki Kabushiki Kaisha Signal processing device for digital signal processing and multiplexing
EP0154300A2 (en) * 1984-02-28 1985-09-11 Mitsubishi Denki Kabushiki Kaisha Signal processing device for digital signal processing and multiplexing
EP0169709A2 (en) * 1984-07-20 1986-01-29 Nec Corporation Real time processor for video signals
EP0169709A3 (en) * 1984-07-20 1987-05-13 Nec Corporation Real time processor for video signals
GB2169170A (en) * 1984-12-20 1986-07-02 Racal Data Communications Inc Video data transfer between a real-time video controller and a digital processor
FR2603122A1 (en) * 1986-08-25 1988-02-26 Intertechnique Sa Digital processing of images for earth observation satellites - uses large capacity semiconductor memory to replace hard disc devices or bulk storage in real-time processing of satellite images
FR2637143A1 (en) * 1988-09-27 1990-03-30 Allen Bradley Co VIDEO IMAGE STORAGE DEVICE
FR2641658A1 (en) * 1989-01-10 1990-07-13 Broadcast Television Syst CONTROL SIGNAL GENERATOR FOR A VIDEO SIGNAL MIXER
DE4231158A1 (en) * 1991-09-17 1993-03-18 Hitachi Ltd Image combination and display system - stores video signals of still and moving images and combines them for display
US5519449A (en) * 1991-09-17 1996-05-21 Hitachi, Ltd. Image composing and displaying method and apparatus for displaying a composite image of video signals and computer graphics
DE4231158C5 (en) * 1991-09-17 2006-09-28 Hitachi, Ltd. Method and device for the composition and display of images

Also Published As

Publication number Publication date
GB2086690B (en) 1984-04-26

Similar Documents

Publication Publication Date Title
US4485402A (en) Video image processing system
US5642498A (en) System for simultaneous display of multiple video windows on a display device
US5805148A (en) Multistandard video and graphics, high definition display system and method
US5644364A (en) Media pipeline with multichannel video processing and playback
EP0264726B1 (en) Picture transformation memory
US4148070A (en) Video processing system
US5315692A (en) Multiple object pipeline display system
CA2167689C (en) A data display apparatus for displaying digital samples of signal data on a bit mapped display system
US5124692A (en) Method and apparatus for providing rotation of digital image data
US5313275A (en) Chroma processor including a look-up table or memory
US4831447A (en) Method and apparatus for anti-aliasing an image boundary during video special effects
JPH087567B2 (en) Image display device
US4689681A (en) Television special effects system
EP0558208A2 (en) Digital video editing processing unit
US6362854B1 (en) Effecting video transitions between video streams with a border
US4918526A (en) Apparatus and method for video signal image processing under control of a data processing system
US4339803A (en) Video frame store and real time processing system
GB2086690A (en) Video image processing system
US5818434A (en) Method and apparatus for controlling image display
KR970002146B1 (en) Video effects system with recirculation video combine and output combine
EP0122094B1 (en) Electronic still store with high speed sorting and method of operation
EP0705517B1 (en) Media pipeline with multichannel video processing and playback
US4682227A (en) Digital video signal processing apparatus for providing zoom and compressive images with soft focus
JP4083849B2 (en) Image processing method
GB2117209A (en) Video processing systems

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee
732 Registration of transactions, instruments or events in the register (sect. 32/1977)