EP2321819A1 - Digital video filter and image processing - Google Patents

Digital video filter and image processing

Info

Publication number
EP2321819A1
Authority
EP
European Patent Office
Prior art keywords
point
color
pixel
fifo
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP08815721A
Other languages
German (de)
English (en)
Other versions
EP2321819A4 (fr)
Inventor
Ned M. Ahdoot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP2321819A1 publication Critical patent/EP2321819A1/fr
Publication of EP2321819A4 publication Critical patent/EP2321819A4/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • H04N19/433Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Definitions

  • the present invention relates to digital video color filtering and image processing, consisting of hardware and software, and more particularly to the art of image recognition, image identification, and image tracking.
  • the present invention relates to the efficient filtering of colored video images, eliminating the need for complex Fourier Transforms.
  • Fourier Transforms by their nature slow down digital image processing.
  • it utilizes a unique computer architecture, resembling a typical car assembly line, to identify the emergence, disappearance, and directional and rotational changes of multicolored objects in six degrees of freedom.
  • Gindele; Edward B. U.S. 20050089240 discloses a method of processing a digital image to improve tone scale, which includes the steps of: generating a multiresolution image representation of the digital image including a plurality of base digital images and a plurality of residual digital images; applying a texture reducing spatial filter to the base digital images to produce texture reduced base digital images; combining the texture reduced base digital images and the residual digital images to generate a texture reduced digital image; subtracting the texture reduced digital image from the digital image to produce a texture digital image; applying a compressive tone scale function to the texture reduced digital image to produce a tone scale adjusted digital image having a compressed tone scale in at least a portion of the image; and combining the texture digital image with the tone scale adjusted digital image to produce a processed digital image, whereby the contrast of the digital image is improved without compressing the contrast of the texture in the digital image.
  • Srinivasan; Sridhar U.S. 20030194009 discloses various techniques and tools for approximate bicubic filtering. For example, during motion estimation and compensation, a video encoder uses approximate bicubic filtering when computing pixel values at quarter-pixel positions in reference video frames. Or, during motion compensation, a video decoder uses approximate bicubic filtering when computing pixel values at quarter-pixel positions.
  • a graphics system comprises a graphics processor, a sample buffer, and a sample-to-pixel calculation unit.
  • the graphics processor generates samples in response to received stream of graphics data.
  • the sample buffer may be configured to store the samples.
  • the sample-to-pixel calculation unit is programmable to generate a plurality of output pixels by filtering the rendered samples using a filter. A filter having negative lobes may be used.
  • the graphics system computes a negativity value for a first frame.
  • the negativity value measures an amount of pixel negativity in the first frame.
  • the graphics system adjusts the filter function and/or filter support in order to reduce the negativity value for subsequent frames.
  • Debes; Eric U.S. 7,085,795 discloses an apparatus and method for efficient filtering and convolution of content data are described.
  • the method includes organizing, in response to executing a data shuffle instruction, a selected portion of data within a destination data storage device.
  • the portion of data is organized according to an arrangement of coefficients within a coefficient data storage device.
  • a plurality of summed-product pairs are generated in response to executing a multiply-accumulate instruction.
  • the pluralities of product pairs are formed by multiplying data within the destination data storage device and coefficients within the coefficient data storage device.
  • adjacent summed-product pairs are added in response to executing an adjacent-add instruction.
  • the adjacent summed-product pairs are added within the destination data storage device to form one or more data processing operation results. Once the one or more data processing operation results are formed, the results are stored within a memory device.
  • Lachine; Vladimir U.S. 20060050083 discloses a method and system for circularly symmetric anisotropic filtering over an extended elliptical or rectangular footprint in single-pass digital image warping.
  • the filtering is performed by first finding and adjusting an ellipse that approximates a non-uniform image scaling function in a mapped position of an output pixel in the input image space.
  • a linear transformation from this ellipse to a unit circle in the output image space is determined to calculate input pixel radii inside the footprint and corresponding filter coefficient as a function of the radius.
  • the shape of the footprint is determined as a trade-off between image quality and processing speed.
  • profiles of smoothing and warping components are combined to produce sharper or detail enhanced output image.
  • the method and system of the invention produce natural output image without jagging artifacts, while maintaining or enhancing the sharpness of the input image.
  • MacInnis; Alexander G. U.S. 20040181564 discloses a system and method of data unit management in a decoding system employing a decoding pipeline. Each incoming data unit is assigned a memory element and is stored in the assigned memory element.
  • Each decoding module gets the data to be operated on, as well as the control data, for a given data unit from the assigned memory element.
  • Each decoding module after performing its decoding operations on the data unit, deposits the newly processed data back into the same memory element.
  • the assigned memory locations comprise a header portion for holding the control data corresponding to the data unit and a data portion for holding the substantive data of the data unit.
  • the header information is written to the header portion of the assigned memory element once and accessed by the various decoding modules throughout the decoding pipeline as needed.
  • the data portion of memory is used/shared by multiple decoding modules.
  • Yu; Dahai U.S. 7,120,286 discloses a method and apparatus for tracing an edge contour of an object in three-dimensional space.
  • the method and apparatus is utilized in a computer vision system that is designed to obtain precise dimensional measurements of a scanned object.
  • multiple images may be collected and saved for a number of Z heights for a particular position of the XY stage. These saved images can later be used to calculate a focal position for each edge point trial location in the selected XY area rather than requiring a physical Z stage movement.
  • a Z height extrapolation based on the Z heights of previous edge points can significantly speed up the searching process, particularly for objects where the Z height change of a contour is gradual and predictable.
  • a filter that includes an analyzer, thresholding circuit, and synthesizer.
  • the analyzer generates a low- frequency component signal and a high-frequency component signal from an input signal.
  • the thresholding circuit generates a processed high-frequency signal from the high-frequency component signal, the processed high-frequency signal having an amplitude of zero in those regions in which the high-frequency component signal has an amplitude that is less than a threshold value.
  • the synthesizer generates a filtered signal from input signals that include the low-frequency component signal and the processed high-frequency signal. The filtered signal is identical to the input signal if the threshold value is zero.
  • the analyzer is preferably constructed from a plurality of
  • Kawano; Tsutomu; U.S. 20030095698 discloses a feature extracting method for a radiation image formed by radiation image signals each corresponding to an amount of radiation having passed through a radiographed subject, has plural different feature extracting steps, each of the plural different feature extracting steps having a respective feature extracting condition to extract a respective feature value; a feature value evaluating step of evaluating a combination of the plural different feature values; and a controlling step of selecting at least one feature extracting step from the plural different feature extracting steps based on an evaluation result by the feature value evaluating step, changing the feature extracting condition of the selected feature extracting step and conducting the selected feature extracting step so as to extract a feature value again based on the changed feature extracting condition from the radiation image.
  • U.S. 20030052886 discloses a video routing system including a plurality of video routers VR(O), VR(I), . . . , VR(N.sub.R-l) coupled in a linear series. Each video router in the linear series may successively operate on a digital video stream. Each video router provides a synchronous clock along with its output video stream so a link interface buffer in the next video router can capture values from the output video stream in response to the synchronous clock. A common clock signal is distributed to each of the video routers. Each video router buffers the common clock signal to generate an output clock. The output clock is used as a read clock to read data out of the corresponding link interface buffer. The output clock is also used to generate the synchronous clock that is transmitted downstream.
  • the image processing system comprises: a device profile storage section which stores ideal-environment-measurement data; a light separating section which derives output light data indicating output light from an image projecting section and ambient light data based on a difference between first and second viewing-environment-measurement data;
  • a projection-plane-reflectance estimating section which estimates a reflectance of a projection plane, based on the output light data and the ideal environment-measurement data
  • a sensing data generating section which generates viewing-environment-estimation data based on the reflectance, the ideal-environment-measurement data and the ambient light data
  • an LUT generating section which updates an LUT based on the viewing-environment-estimation data
  • a correcting section which corrects image information based on the updated LUT.
  • U.S. 7,205,520 discloses a ground-based launch detection system consisting of a sensor grid of electro-optical sensors for detecting the launch of a threat missile which targets commercial aircraft in proximity to a commercial airport or airfield.
  • the electro-optical sensors are configured in a wireless network which broadcasts threat lines to neighboring sensors with overlapping fields of view.
  • threat data is sent to a centrally located processing facility, which determines which aircraft in the vicinity are targets and sends a dispense-countermeasure signal to the aircraft.
  • Nefian; Ara V. U.S. 20040071338 discloses an image processing system useful for facial recognition and security identification obtains an array of observation vectors from a facial image to be identified.
  • a Viterbi algorithm is applied to the observation vectors given the parameters of a hierarchical statistical model for each object, and a face is identified by finding a highest matching score between an observation sequence and the hierarchical statistical model.
  • MacInnis; Alexander G. U.S. 20030187824 discloses a system and method of data unit management in a decoding system employing a decoding pipeline.
  • Each incoming data unit is assigned a memory element and is stored in the assigned memory element.
  • Each decoding module gets the data to be operated on, as well as the control data, for a given data unit from the assigned memory element.
  • Each decoding module after performing its decoding operations on the data unit, deposits the newly processed data back into the same memory element.
  • the assigned memory locations comprise a header portion for holding the control data corresponding to the data unit and a data portion for holding the substantive data of the data unit.
  • the header information is written to the header portion of the assigned memory element once and accessed by the various decoding modules throughout the decoding pipeline as needed.
  • the data portion of memory is used/shared by multiple decoding modules.
  • An apparatus consisting of hardware and software for converting input signals from a video camera or sensors into numerical data in real time, minimizing time latencies.
  • the derived data provides identification of objects, and directional as well as rotational parameters of moving objects, in six degrees of freedom.
  • the apparatus converts input signals from a video camera or sensors into numerical data in real time, to detect, identify, and track dynamically moving objects in 3D space.
  • the data also provides the 3D location coordinates of each target, and tracks the 3D motion vectors of each individual target.
  • the hardware and software architecture is intended to eliminate time latencies between detection, tracking, and reporting of multiple targets moving in six degrees of freedom.
  • the apparatus utilizes efficient video filtering hardware that identifies individual prime colors of electromagnetic waves, with the resolution of the least significant bit of the analog-to-digital (A/D) converter.
  • the filter also has the capability to filter out unwanted colors, including background colors, and substitute them with any desired color.
  • the major difference between this invention and other digital image processing systems is its capability to filter video spectrum pixel colors electronically.
  • the resolution of spectrum color filtering, or the number of individual colors that can be distinguished and filtered, is (2^D)^3, where D is the number of bits in the A/D converter used. For instance, a 10-bit A/D converter distinguishes (1024*1024*1024) roughly one billion individual colors within the color spectrum. This is a very powerful tool in digital image processing. It eliminates the need for the time-consuming Fourier Transforms used in almost all image processors.
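The color-count arithmetic above can be checked with a short sketch; the function name and the extra 8-bit example are illustrative, not part of the patent:

```python
# Number of distinguishable spectrum colors for a D-bit A/D converter.
# Each prime color channel (e.g. R, G, B) digitizes to 2**D intensity
# levels, so the three-channel filter can distinguish (2**D)**3 colors.

def spectrum_colors(d_bits: int) -> int:
    levels_per_channel = 2 ** d_bits     # resolution of one prime color
    return levels_per_channel ** 3       # all three prime colors combined

print(spectrum_colors(10))  # 1073741824 -- roughly one billion, as cited
print(spectrum_colors(8))   # 16777216 -- the familiar 24-bit RGB count
```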
  • the detection time for each one of the prime color pixels is the throughput delay, or access time, of electronic memory, usually on the order of tens of nanoseconds.
  • Another major advantage of this invention is that it does not require computation-intensive Fourier Transforms.
  • the architecture is intended to minimize the detection time of multiple moving targets in a real-time interactive scenario.
  • the architecture is that of distributed processing, acting like an assembly line processor (similar to a car manufacturing assembly line), in which processors work in conjunction with First-In-First-Out memories placed between each pair of processors.
  • the apparatus utilizes special distributed computer hardware that resembles that of a typical assembly line activities.
  • FIFO's are utilized to carry semi-processed data from one processor to another.
  • the FIFO's are also used in a unique manner in which identification of the objects is made much easier.
  • the activity of each individual processor is made simple enough that a simple state machine hardware implementation saves time.
  • each processor's individual task within the processing line provides a means to eliminate the processing bottlenecks that are common in most computer architectures.
  • Another characteristic of the invention is its capability to measure X, Y, Z distances as well as rotational vector parameters of moving objects. Another advantage of this invention is its usage of memory for a variety of image processing tasks, avoiding elaborate software programs.
  • Another advantage of this invention is the use of a unique distributed computer architecture, similar to a car manufacturing assembly line, wherein each computer performs simple image processing tasks by receiving semi-processed data from a FIFO and writing semi-processed data to the next FIFO in line.
  • Figure 1 is the block diagram of the color filter and identifier. It receives analog video signals from a camera or a sensor. It shows the two stages in which colors are filtered and identified. The first stage is for identification of prime colors and the second stage is for identification of colors in the color spectrum. Numbers identify prime colors and spectrum colors. As seen from the block diagram, the output is spectrum color numbers and the spectrum color group number.
  • Figure 2 is the Video Synchronization and Control Logic Block Diagram.
  • Figure 2A is the hardware method in which a gap is detected to distinguish one object from another.
  • Figure 3 is the block diagram of the Real Time Distributed Processor (Assembly Line Processor). It receives video data information from the Color Filter and Identification block diagram of Figure 1A. It provides object identification and motion tracking data of distances and rotational motions of moving objects.
  • Figure 4 is a drawing of a multicolored cube, showing its midpoints in the X and Y coordinates.
  • Figure 4A is a drawing of the multicolored cube of Figure 4, showing its motion in the z axis and the area covered by each of its colors in the X and Y coordinates.
  • Figure 5 is a drawing of a multicolored half globe, showing its midpoints in the X and Y coordinates.
  • Figure 6 is the flowchart diagram of the Pixel Group Identification Processor, part of the distributed processor shown in Figure 3.
  • The processor's function is to provide the reference x midpoint coordinate of objects, needed by the next-stage processor, the Midpoint x and y Coordinate Processor.
  • Figure 6A is a presentation of the midpoint x reference data generated by Figure 6.
  • the presentation aids understanding of the Figure 6 midpoint reference activities. It shows identified pixels of a group of objects in rows, and the definition of a Gap.
  • Figure 7 is the x (row) and y (column) Processor flowchart diagram. Its main function is to sort the reference midpoint coordinates of objects based upon their X and Y coordinates.
  • Figure 7A shows the input to the Midpoint x, and y (column) Coordinate Processor for two consecutive frames.
  • Figure 7B shows output of the Midpoint x, and y (column) Coordinate Processor for two consecutive frames.
  • Figure 7C shows the detailed operation of the Midpoint x, and y (column) Coordinate Processor.
  • FIG. 8 is the Object Identification Processor flow diagram that identifies objects based upon their emergence on the screen and disappearance.
  • Figure 8A is the input to the Object Identification Processor for two consecutive frames.
  • Figure 9 is the block diagram of the Motion Vector Measurement hardware that provides distances as well as rotational parameters of moving objects.
  • An apparatus consisting of hardware and software for converting input signals from a video camera or sensors into numerical data representing motion characteristics of multiple moving targets, with minimal latencies.
  • the data provides identification of objects, distances (X, Y, Z) as well as rotational parameters of moving objects, in six degrees of freedom.
  • the apparatus consists of an efficient video filtering technique that identifies each individual prime color of electromagnetic waves and a color spectrum with the resolution of the relevant A/D converter to the power of three.
  • the filter has the capability to filter out unwanted colors, including background colors, and substitute any desired color for transmission.
  • In order to meet the stringent latency time requirements of real-time motion detection, the apparatus consists of special distributed processing computer hardware that resembles typical assembly line activities. FIFO's are utilized to carry semi-processed data from one processor to another. The FIFO's are also used in a unique manner in which identification of the objects is made much easier. The activity of each individual processor is made simple enough that a state machine controller/processor hardware implementation replaces typical CPU's. The individual processor's tasks, in conjunction with the use of FIFO's, provide a means to eliminate the bottlenecks that are common in most distributed processor computer architectures.
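A minimal software sketch of this assembly-line arrangement: each stage drains its input FIFO through one simple task and writes into the next FIFO. The three stage tasks here are hypothetical placeholders standing in for the pixel, grouping, and tracking processors, not the patent's actual hardware:

```python
# "Assembly line" processing: processors connected by FIFOs, each doing
# one simple task on semi-processed items before passing them downstream.
from collections import deque

def run_stage(task, fifo_in, fifo_out):
    """Drain the input FIFO through one simple per-item task."""
    while fifo_in:
        fifo_out.append(task(fifo_in.popleft()))

# Three toy stages standing in for filtering, grouping, and tracking.
stages = [lambda px: px | {"filtered": True},
          lambda px: px | {"group": px["color"] % 4},
          lambda px: px | {"tracked": True}]

# One FIFO feeds the line; one FIFO sits after each stage.
fifos = [deque([{"color": c} for c in range(8)])] + [deque() for _ in stages]
for task, f_in, f_out in zip(stages, fifos, fifos[1:]):
    run_stage(task, f_in, f_out)

print(len(fifos[-1]))  # all 8 items reach the end of the line
```

In hardware the stages run concurrently; this sequential loop only illustrates the data flow through the FIFOs.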
  • the digital prime color intensities are set as an address to an appropriate prime color memory.
  • the memory contains the prime color filtering and bandwidth information for each prime color, which has been pre- recorded by the CPU.
  • the pre-recorded data of the memory is organized to identify prime color numbers, and prime color's group number.
  • the pre-recorded data of the memory also identifies the particular group of any other prime colors.
  • the groupings can be from 1 to m, where m is the total number of groups of colors of different objects.
  • the memory indicates whether that prime color is to be replaced with another prime color, and provides the desired intensity to replace the detected intensity. Therefore the content of memory can contain pre-recorded information such as:
  • the prime color numbers of all prime colors (10), and their associated group numbers (11) are set to a color spectrum memory to identify color numbers within the color spectrum.
  • the prime color numbers from all three prime color memories are set as an address to a Color Spectrum Memory, wherein the data of the memory indicates identification, selected color number, and selected color group;
  • the center of the filter bandwidth of each color, and whether it is greater than, less than, or equal to the center of the color within a group of colors in the color spectrum;
  • the number of address locations in which the color is to be filtered determines the bandwidth of a color and its group identification.
  • the Color Spectrum Memory Filter also contains substitution of any incoming color with another color to be transmitted.
  • Identification is made by reading a "0" or a "1" from the data of the memory.
  • A"0" represents the prime color is not identified and "1" represent the prime colors intensity is identified.
  • the memory also contains prerecorded number associated with that particular prime color intensity.
  • Identified prime colors (point 10) are numbered from 1 to n, where n is the total number of filtered prime colors.
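A minimal Python sketch of this memory-as-filter idea, using the digitized intensity directly as the address of a pre-recorded lookup table; the 4-bit width, the passband edges, and the color number are hypothetical examples, not values from the patent:

```python
# One prime color memory: the A/D intensity is the address; the stored
# data is (identified flag, prime color number). The CPU pre-records a
# passband, so a lookup both filters and identifies in one memory access.
D_BITS = 4                                  # toy 4-bit A/D -> 16 levels
lut = [(0, 0)] * (1 << D_BITS)              # all intensities rejected

# Pre-record a filter band: intensities 5..9 identified as prime color 3.
for addr in range(5, 10):
    lut[addr] = (1, 3)

def filter_pixel(intensity: int):
    """Return the prime color number, or None if filtered out."""
    identified, color_no = lut[intensity]
    return color_no if identified else None

print(filter_pixel(7), filter_pixel(12))    # 3 None
```

The detection time is one table lookup per pixel, which is the memory-access-time claim made above.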
  • Figure 2 is the expansion of block 7 in Figure 1. It includes the video frame header detection 61, the frame's row and column counters 62, and the sub-pixel timing counter 63, which are inputs to the frame reference ROM 65, providing pixel prime color designation timing to the filter apparatus 20 and other logical controls.
  • FIG 3 is the architectural block diagram of a distributed processor for time-critical digital image processing. Since the architecture of this distributed processor resembles that of a typical assembly line, it is called a Distributed Assembly Line Processor.
  • the post-processor of each FIFO reads the semi-processed data from that FIFO and, after further processing, writes it into the next-stage FIFO.
  • the Pixel Processor interfaces with the Color Spectrum Filter and with both Video Data FIFO A and Video Data FIFO B. Filtered and identified pixels are read from the Spectrum Filter memory and then loaded into one of the FIFO's. It also interfaces with the Video Synchronization and Control Logic to read the relevant frame timing and write it to the Video Data FIFO's. It is also interfaced to the Gap signal (Figure 2A), receiving a Gap signal from the Gap Detector Hardware to append a gap mark announcing the end of detection of a group of colored pixels within a row.
  • the Assembly Line Processor's individual processors process the pixels based upon their color and group identifications, and then process and identify colors and objects based upon the x and y frame location coordinates in which they were found.
  • the coordinates of each pixel are ordered column first and row second.
  • the definition of tasks and functions of each processor and FIFO will become clear in the following sections.
  • Utilization of FIFO's gives each processor the advantage of reading and writing data at only two addresses, thus saving the time of updating pointers for data reads and writes. Since the functions of each processor are kept to a minimum, a memory-based state machine logic can change modes of operation within one clock period, compared with memory-based CPU's that take many clocks to complete an instruction.
  • An object is considered to be separated from another object if there are "n" consecutive undetected pixels of a color (or colors) in a row, and "m" consecutive columns in which the same color(s) are undetected, in between colored objects.
  • This separation is called a gap.
  • A gap is a span of a row and of columns in which pixels of a specified color are absent, separating one specified-color pixel from another pixel of the same or a different specified color in the same row.
  • When a gap is found, a separation between two objects is declared and the two objects are identified.
  • Two-dimensional detections of an object moving in a three-dimensional space are assumed to be in the vicinity of the location initially detected, for a given frame rate;
  • The shift register is loaded with "n", and it is reloaded whenever a detection signal is received from the Spectrum Color Memory. As long as there are consecutive detected pixels in a row, the gap-detect signal remains low; when the count of undetected pixels reaches "n", the signal goes high, indicating a separation between two objects.
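The reload-and-count-down behavior described above can be sketched in software. This is a model under the assumption that the counter reloads to "n" on every detected pixel and counts down by one on every undetected pixel; the function name is a hypothetical helper.

```python
def gap_signal(detected, n):
    """Emit True (gap) once "n" consecutive undetected pixels have
    passed since the last detected pixel; otherwise emit False.

    detected: per-pixel sequence of truthy (color detected) / falsy values.
    """
    out, count = [], n
    for d in detected:
        count = n if d else max(count - 1, 0)  # reload on detect, else count down
        out.append(count == 0)                 # goes high after n misses
    return out
```

For example, with n = 3, the signal goes high only on the third consecutive undetected pixel and drops again as soon as a pixel is detected.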
  • The x and y midpoint position of an object moving in a three-dimensional space is its two-dimensional focal-plane midpoint "x" (row) and midpoint "y" (column), as captured by the sensors of a camera.
  • The midpoint X coordinate of a multicolor object is the midpoint between the smallest (Figure 4A, point 202) and the largest (Figure 4A, point 203) pixel x coordinates of any one of its colors detected in any row.
  • The midpoint Y coordinate of a multicolor object is the midpoint row, between the first and the last row in which any one of its colors is detected (Figure 4A, points 204 and 205). Referring now to the diagram of Figure 4, the approximate midpoint coordinate of a multicolored cube is the point where the two lines (200) and (201) intersect each other, Point 209.
  • Figure 4A is another drawing of Figure 4, wherein the distances as well as the angles have changed from frame to frame compared to Figure 4.
  • Point (202) is the minimum x (the smallest x coordinate at which the object was detected), and (203) is the maximum x coordinate.
  • Point (204) is the first row in which the cube has been detected (minimum y), and point (205) is where the detection of the object ends (maximum y).
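The midpoint rules above (points 202-205) reduce to taking the extremes of the detected coordinates. A minimal sketch, with `object_midpoint` as a hypothetical helper name:

```python
def object_midpoint(coords):
    """coords: (x, y) pixel coordinates at which any of the object's
    colors were detected in one frame.  Returns the approximate object
    midpoint: halfway between min/max x and halfway between min/max y,
    i.e. the intersection point (209) of lines (200) and (201)."""
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
```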
  • Figure 5 is a rendition of a half globe, wherein the midpoint coordinates are identified.
  • Points (210, 211, 213, and 214) mark the areas in which each color is detected in the X-Y plane. The area under each color is the total pixel count of that color.
  • Points (210, 211, 213, and 214) are derived by counting the same-colored pixels detected in one particular frame.
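Since the area under a color is defined as its pixel count, the measurement amounts to a per-color tally. A sketch with invented group and color labels; the pair-per-pixel data layout is an assumption:

```python
from collections import Counter

def color_areas(detections):
    """detections: (group_number, color_number) pairs, one per pixel
    identified in one frame.  The 'area' under a color is its pixel
    count; a group's total area hints at its closeness in z."""
    per_color = Counter(detections)                       # area per (group, color)
    per_group = Counter(g for g, _ in detections)         # total area per group
    return per_color, per_group
```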
  • The Filter Processor, coupled to the Color Spectrum Filter, reads pixel information from the color spectrum memory whenever the "pixel detect mark" appears at the output of the spectrum color filter (at the appropriate pixel timing), denoting the detection of a pixel color during that pixel timing, and provides the following to the Video Data FIFO: a) the filtered color pixel data; b) the multicolored object's spectrum identity number.
  • The Pixel Group Identifier Processor receives the filtered color pixels and the related group number from the Video Data FIFO.
  • Figure 6 represents the flow chart of the Pixel Group Identifier Processor's activities. Its function is to identify colors, and groups of colors belonging to an object, within a row. It then provides the reference midpoint location of the group of colors within the row of the frame in which they were found.
  • This reference midpoint x serves only to locate a group of same-colored pixels belonging to a group of colors within a row. The actual midpoint x identification takes place in the next stage of processing.
  • At point 103 it adds up the number of detected pixels (in a row) belonging to the same color of the group.
  • The area under each color of an object is needed to detect its rotational vectors.
  • The total area under all colors of an object represents its closeness to the detector. This is explained under the Motion Vector Measurement Memory, below.
  • The algorithm also checks for, and retains, the smallest and the largest x coordinates (of any color in a group in a row). This measurement is later used to find the midpoint coordinate of the object across the following columns.
  • It checks to make sure that the detected color belongs to the group of colors associated with an object. This is a double check, in addition to the filter group checking and color identification of Figure 1. If the color belongs to the same group, it goes back to point (104) to get the next pixel and group color number.
  • Point (108) is reached when a different group of colors is detected. It does the following:
  • Retains the smallest and the largest x coordinates of a group of colors in a row.
  • If the newly detected pixel color is different (does not belong to the same group), it is assumed that a color in a different group of colors has been detected (this is the same as detecting a different object).
  • It checks for the gap tag that was appended per Figure 3. If there is a gap, it assumes correct spacing; if no gap is found, it provides an error signal.
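The flow of points 100-108 can be approximated as a single row scan that accumulates per-color pixel counts and min/max x per group, flushing one record whenever the group changes or the row ends. The data layout and function name below are assumptions for illustration, not the patent's exact record format:

```python
def scan_row(row):
    """row: (x, group, color) detections in x order, with None usable as
    a gap separator.  Emits one record per group encountered:
    (group, reference_midpoint_x, per-color pixel counts)."""
    records, cur = [], None
    for item in row + [None]:                 # trailing None flushes the last group
        if item is None or (cur and item[1] != cur["group"]):
            if cur:                           # group ended: emit its record
                mid = (cur["min_x"] + cur["max_x"]) / 2
                records.append((cur["group"], mid, cur["counts"]))
            cur = None
            if item is None:
                continue
        x, group, color = item
        if cur is None:                       # start a new group of colors
            cur = {"group": group, "min_x": x, "max_x": x, "counts": {}}
        cur["min_x"] = min(cur["min_x"], x)   # retain smallest x (point 202)
        cur["max_x"] = max(cur["max_x"], x)   # retain largest x (point 203)
        cur["counts"][color] = cur["counts"].get(color, 0) + 1
    return records
```

The emitted reference midpoint is only the within-row locator described above; the real object midpoint is computed in the next stage.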
  • FIG. 6A illustrates the concept of groups of colors that appear on the CCD, and the concept of a gap between two objects.
  • The Pixel Group Processor reads data from the FIFO row by row: from the first detected pixel in a row to the end of that row, and then on to the next row.
  • The output of the Pixel Group Identifier Processor is illustrated in Figure 7A for two consecutive frames.
  • the X (row) and Y (column) Coordinate Processor reads the reference midpoint coordinates from the Group Identifier FIFO, and sorts them with respect to their relative location coordinates.
  • At point 131 it looks for the first midpoint entry and keeps it, in order to check for other entries that are closely related to this first coordinate reading.
  • It reads the next entry and, in order to compare its position with the first entry, extends the search range of the second reading by a few (+/- n) pixels.
  • At point 133 it starts from the lowest extended number and checks it against the first entry received at point 131. If the midpoints are within +/- n pixel locations of each other, and within "m" columns of each other (point 134), it transitions to point 135.
  • Since the Pixel Identifier Processor scans from the start to the end of each row looking for reference midpoints, and repeats this for the remaining rows, the received midpoint reference coordinates are correlated with the objects within the same frame. For convenience, a number is assigned to each group of midpoint x values; the numbers run from the first group to the last group of x midpoints (Figures 4, 4A, and 5).
  • At point (136) it checks for the end of the list; if it is the end, it swaps the order of the new and old FIFOs and goes back to point (130) for the start of the next frame.
  • At point (138), if at the end of the +/- n range there is no match between the two midpoint reference x coordinate readings, this indicates that the second reading belongs to a different object, and it transitions to point (139).
  • At point (139) it assumes that the midpoint x coordinate identification has ended. It then calculates the real midpoint X and midpoint Y coordinates of the object and sends the result to the next-stage FIFO. It also marks the second reading as the first, and transitions to point (132) to look for a match for the second object.
  • The X and Y Coordinate Processor thus reduces the amount of data carried between rows belonging to a group of colors (an object).
  • Figure 7A illustrates the pixel grouping within a row, followed by the next row (next column), for two frames. It indicates the emergence of new objects and the disappearance of old ones, which are input to the X and Y Coordinate Processor.
  • Figure 7B presents the result of the processing by the X and Y Coordinate Processor, in which each object is represented by a single point: its midpoint x and midpoint y within the frame's coordinate reference.
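The logic of points 130-139 amounts to clustering successive per-row reference midpoints into objects. A sketch under the assumptions that entries arrive sorted by row and that a jump beyond +/- n pixels or more than m columns ends the current object; all names are illustrative:

```python
def cluster_midpoints(entries, n, m):
    """entries: (ref_midpoint_x, row) pairs sorted by row.  Successive
    entries whose x's agree within +/-n pixels and whose rows are within
    m columns are taken as one object; each finished object yields its
    real (midpoint x, midpoint y)."""
    objects, cur = [], []
    for x, row in entries:
        if cur and (abs(x - cur[-1][0]) > n or row - cur[-1][1] > m):
            objects.append(_finish(cur))      # mismatch: object has ended
            cur = []
        cur.append((x, row))
    if cur:
        objects.append(_finish(cur))          # flush the last object
    return objects

def _finish(group):
    xs = [x for x, _ in group]
    rows = [r for _, r in group]
    return ((min(xs) + max(xs)) / 2, (min(rows) + max(rows)) / 2)
```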
  • The Object Identification Processor reads the X and Y midpoint coordinate information from the X and Y Coordinate FIFO. It essentially compares the coordinates from the new FIFO (new frame) to those from the old FIFO (old frame) and decides whether the new coordinates in the new frame are equal to, smaller than, or larger than those of the old frame.
  • The processor starts with the new Y coordinates developed in the previous process and, after extending the search range of the new row coordinate by +/- m, starts the comparison of the rows.
  • The search range of +/- m ensures that small motion changes from one frame to the next are accounted for.
  • At point (152), if no match is made, it increases the search range by one and transitions to point 154; there, if the end of the y coordinate search range has not been reached, it goes back to point 152.
  • Multicolored three-dimensional objects moving in a three-dimensional space provide instantaneous vector measurements of the distances x, y, and z, as well as rotational values related to the object's motion in six degrees of freedom, as follows: a) A multicolored object moving in a three-dimensional space, when detected by a camera, registers a unique signature of different color areas in each frame; the area under each color, among all the colors in a group of predefined colors, represents a unique set of instantaneous angles of rotation in three-dimensional space, and the total area of the different colors provides an instantaneous magnitude of the distance in the z direction. b) The relative values of the three rotational positions of an object in motion are obtained by using the area of each related color, together with the color numbers, as an address to a memory in which the data for the three angular positions were registered during calibration.
  • Measurements of the instantaneous angular positions and the instantaneous location in the z direction for any set of multicolor objects are the result of comparing the instantaneous measurements to the empirical measurements performed during calibration.
  • The surface area of each color is measured by counting the number of pixels of that particular color, in a set of colors, detected in a frame during real-time motion detection.
  • Calibration of the z-direction motion values and the three instantaneous angular motion values is the result of empirically measuring the area under each color and recording the known vector motion values in a memory addressed by each color number and the detected area of each relevant color.
  • The Rotational Motion Detector Memory (41) receives each group's total pixel count for each color of an object (43, 45, 46), along with the associated group number, from the Object Identification Processor. It applies this information, together with the spectrum identification number, as an address to the Rotational Vector Motion Measurement Memory and receives the three-dimensional rotational values (47), as well as the motion in the Z direction.
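The memory-addressing scheme above can be illustrated with a dictionary standing in for the calibration memory. All keys and values below are invented for illustration; the point is that the per-color areas plus the spectrum identity form the address, and the calibration data stored there are the three rotation angles and the z distance, so no trigonometry is needed at run time.

```python
# Stand-in for the Rotational Vector Motion Measurement Memory.
# (spectrum_id, area_red, area_green, area_blue) -> (rx, ry, rz, z)
calibration_memory = {
    ("cube-1", 120, 80, 40): (0.0, 15.0, 0.0, 2.5),
    ("cube-1", 80, 80, 80):  (30.0, 30.0, 0.0, 2.5),
}

def lookup_rotation(spectrum_id, areas):
    """Translate instantaneous per-color areas into rotation/z values by
    addressing the calibration memory; returns None for an
    uncalibrated address."""
    return calibration_memory.get((spectrum_id, *areas))
```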
  • Figures 4, 4A, and 5 show the motions of a multicolored cube and a half globe moving in a three-dimensional space.
  • The Motion Change Detector Processor receives the rotational x, y, and z vectors and the Z coordinate values from the Vector Measurement RAM. It also calculates the elapsed time from the previous frame to this frame for each object, and from these measurements it calculates the velocities and accelerations of each object relative to the previous frame's measurement. The elapsed time is calculated as follows:
  • The elapsed time between detections in two frames is from the midpoint in time of the previous frame's detection to the midpoint in time of this frame's detection.
  • The midpoint time is the time difference between the first row and the last row of the frame in which the detection took place, divided by two.
  • The vector velocities are derived from the changes in the vector measurements over the two frames, divided by the elapsed time. This information is passed to the Motion Track FIFO.
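The elapsed-time and velocity rules above can be sketched as follows. This reads the patent's "midpoint time" as the first-row time plus half the span to the last row; the function names and the numeric timings are illustrative assumptions.

```python
def frame_midpoint_time(t_first_row, t_last_row):
    """Midpoint in time of a detection within one frame."""
    return t_first_row + (t_last_row - t_first_row) / 2

def vector_velocity(prev, cur, elapsed):
    """Component-wise velocity from two frames' vector measurements."""
    return tuple((c - p) / elapsed for p, c in zip(prev, cur))

# Elapsed time between frames = difference of the two midpoint times.
t0 = frame_midpoint_time(0.000, 0.004)   # previous frame's detection span
t1 = frame_midpoint_time(0.033, 0.037)   # this frame's detection span
v = vector_velocity((1.0, 2.0, 0.5), (1.33, 2.0, 0.5), t1 - t0)
```

Accelerations would follow the same pattern, dividing the change in velocity between two such measurements by the same elapsed time.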

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention concerns an apparatus for digital video color-pixel filtering and digital image processing that eliminates the need for Fourier transforms, thereby eliminating time-consuming multiplications and additions. It uses a novel distributed computing architecture operating in conjunction with first-in, first-out memories, with simple software for each processor designed to minimize the latency problems of real-time interactive digital image processing. The distributed processing architecture is configured to operate like a factory assembly line, with the FIFO memories carrying semi-processed data from one processor to the next. A single-memory system is used to measure motion vectors, comprising distance vectors and rotational vectors, of objects moving in six degrees of freedom.
EP08815721.9A 2008-09-08 2008-09-08 Filtre vidéo numérique et traitement d'image Withdrawn EP2321819A4 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2008/010484 WO2010027348A1 (fr) 2008-09-08 2008-09-08 Filtre vidéo numérique et traitement d'image

Publications (2)

Publication Number Publication Date
EP2321819A1 true EP2321819A1 (fr) 2011-05-18
EP2321819A4 EP2321819A4 (fr) 2014-03-12

Family

ID=41797344

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08815721.9A Withdrawn EP2321819A4 (fr) 2008-09-08 2008-09-08 Filtre vidéo numérique et traitement d'image

Country Status (8)

Country Link
EP (1) EP2321819A4 (fr)
JP (1) JP2012502275A (fr)
KR (1) KR20110053417A (fr)
CN (1) CN102077268A (fr)
CA (1) CA2725377A1 (fr)
GB (1) GB2475432B (fr)
IL (1) IL211130A0 (fr)
WO (1) WO2010027348A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8872969B1 (en) 2013-09-03 2014-10-28 Nvidia Corporation Dynamic relative adjustment of a color parameter of at least a portion of a video frame/image and/or a color parameter of at least a portion of a subtitle associated therewith prior to rendering thereof on a display unit

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537974B (zh) * 2015-01-04 2017-04-05 京东方科技集团股份有限公司 数据获取子模块及方法、数据处理单元、系统和显示装置
WO2016178643A1 (fr) * 2015-05-06 2016-11-10 Erlab Teknoloji Anonim Sirketi Procédé destiné à l'analyse des données d'une séquence nucléotidique par utilisation conjointe de multiples unités de calcul à différents emplacements
JP7303793B2 (ja) * 2017-08-07 2023-07-05 ザ ジャクソン ラボラトリー 長期間の継続的な動物行動モニタリング
CN111384963B (zh) * 2018-12-28 2022-07-12 上海寒武纪信息科技有限公司 数据压缩解压装置和数据解压方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003041393A2 (fr) * 2001-11-09 2003-05-15 Creative Frontier Inc. Systeme video interactif en temps reel
US20040071352A1 (en) * 2002-07-02 2004-04-15 Canon Kabushiki Kaisha Image area extraction method, image reconstruction method using the extraction result and apparatus thereof
US20040130546A1 (en) * 2003-01-06 2004-07-08 Porikli Fatih M. Region growing with adaptive thresholds and distance function parameters
US20040246336A1 (en) * 2003-06-04 2004-12-09 Model Software Corporation Video surveillance system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2845473B2 (ja) * 1989-02-13 1999-01-13 繁 安藤 動画像の運動、非運動領域検出装置
US6177922B1 (en) * 1997-04-15 2001-01-23 Genesis Microship, Inc. Multi-scan video timing generator for format conversion
JP4197844B2 (ja) * 1998-09-24 2008-12-17 キネティック リミテッド パターン認識に関する改良
EP1360833A1 (fr) * 2000-08-31 2003-11-12 Rytec Corporation Capteur et systeme d'imagerie
US6831653B2 (en) * 2001-07-31 2004-12-14 Sun Microsystems, Inc. Graphics pixel packing for improved fill rate performance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2010027348A1 *


Also Published As

Publication number Publication date
IL211130A0 (en) 2011-04-28
GB201021723D0 (en) 2011-02-02
CA2725377A1 (fr) 2010-03-11
CN102077268A (zh) 2011-05-25
EP2321819A4 (fr) 2014-03-12
WO2010027348A1 (fr) 2010-03-11
KR20110053417A (ko) 2011-05-23
GB2475432B (en) 2013-01-23
GB2475432A (en) 2011-05-18
JP2012502275A (ja) 2012-01-26

Similar Documents

Publication Publication Date Title
US7529404B2 (en) Digital video filter and image processing
US7925051B2 (en) Method for capturing images comprising a measurement of local motions
JP6493163B2 (ja) 粗密探索方法および画像処理装置
JP4613994B2 (ja) 動態推定装置、動態推定方法、プログラム
US10621446B2 (en) Handling perspective magnification in optical flow processing
Ishii et al. Development of high-speed and real-time vision platform, H 3 Vision
CN106155299B (zh) 一种对智能设备进行手势控制的方法及装置
EP2321819A1 (fr) Filtre vidéo numérique et traitement d'image
CN110637461A (zh) 计算机视觉系统中的致密光学流处理
CN116342894B (zh) 基于改进YOLOv5的GIS红外特征识别系统及方法
CN101572770B (zh) 一种可用于实时监控的运动检测方法与装置
CN1130077C (zh) 利用梯度模式匹配的运动补偿装置和方法
Cambuim et al. Hardware module for low-resource and real-time stereo vision engine using semi-global matching approach
CN110651475B (zh) 用于致密光学流的阶层式数据组织
CN117870659A (zh) 基于点线特征的视觉惯性组合导航算法
JP2001338280A (ja) 3次元空間情報入力装置
RU2767281C1 (ru) Способ интеллектуальной обработки массива неоднородных изображений
CN115190303A (zh) 一种云端桌面图像处理方法、系统和相关设备
JP2001167283A (ja) 顔面運動解析装置および顔面運動解析のためのプログラムを記憶した記憶媒体
Nover et al. ESPReSSo: efficient slanted PatchMatch for real-time spacetime stereo
Yang et al. A general line tracking algorithm based on computer vision
JP3016687B2 (ja) 画像処理装置
CN109788219B (zh) 一种用于人眼视线追踪的高速cmos图像传感器读出方法
WO2020196917A1 (fr) Dispositif de reconnaissance d'image et procédé de reconnaissance d'image
JP2709301B2 (ja) 線条光抽出回路

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20101221

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20140207

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/00 20060101AFI20140203BHEP

Ipc: G06T 7/40 20060101ALI20140203BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140401