US8526716B2 - Analysis of stereoscopic images - Google Patents

Analysis of stereoscopic images

Info

Publication number
US8526716B2
US8526716B2 · US 13/415,962 · US 201213415962 A
Authority
US
United States
Prior art keywords
pair
disparity
images
image
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US13/415,962
Other versions
US20120230580A1 (en)
Inventor
Michael James Knee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Grass Valley Ltd
Original Assignee
Snell Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Snell Ltd filed Critical Snell Ltd
Assigned to SNELL LIMITED: assignment of assignors' interest (see document for details). Assignor: KNEE, MICHAEL JAMES
Publication of US20120230580A1
Application granted
Publication of US8526716B2
Assigned to Snell Advanced Media Limited: change of name (see document for details). Assignor: SNELL LIMITED
Assigned to GRASS VALLEY LIMITED: change of name (see document for details). Assignor: Snell Advanced Media Limited
Assigned to MGG INVESTMENT GROUP LP, as collateral agent: grant of security interest in patents. Assignors: GRASS VALLEY CANADA; GRASS VALLEY LIMITED; Grass Valley USA, LLC
Assigned to Grass Valley USA, LLC, GRASS VALLEY LIMITED and GRASS VALLEY CANADA: termination and release of patent security agreement. Assignor: MGG INVESTMENT GROUP LP

Classifications

    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/128: Adjusting depth or disparity
    • H04N 13/398: Image reproducers; synchronisation or control thereof
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals
    • G06T 7/97: Determining parameters from multiple pictures
    • G06T 7/593: Depth or shape recovery from stereo images
    • G06T 7/85: Stereo camera calibration
    • G06T 2207/10021: Stereoscopic video; stereoscopic image sequence
    • G06T 2207/20021: Dividing image into blocks, subimages or windows

Abstract

A method of processing in an image processor a pair of images intended for stereoscopic presentation to identify left-eye and right-eye images of the pair. The method includes dividing both images of the pair into a plurality of like image regions, determining for each region a disparity value between the images of the pair to produce a set of disparity values, deriving for each region a confidence factor for the disparity value, determining a correlation parameter between the set of disparity values and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region, and identifying from said correlation parameter the left-eye and right-eye images of the pair, wherein the left-eye and right-eye images form a stereoscopic pair.

Description

FIELD OF INVENTION
This invention concerns the analysis of stereoscopic images and, in one example, the detection and correction of errors in stereoscopic images. It may be applied to stereoscopic motion-images.
BACKGROUND OF THE INVENTION
The presentation of ‘three-dimensional’ images by arranging for the viewer's left and right eyes to see different images of the same scene is well known. Such images are typically created by a ‘stereoscopic’ camera that comprises two cameras that view the scene from respective viewpoints that are horizontally spaced apart by a distance similar to that between the left and right eyes of a viewer.
This technique has been used for ‘still’ and ‘moving’ images. There is now great interest in using the electronic image acquisition, processing, storage and distribution techniques of high-definition television for stereoscopic motion-images.
Many ways of distributing stereoscopic image sequences have been proposed. One example is the use of separate image data streams or physical transport media for the left-eye and right-eye images. Another example is the ‘side-by-side’ representation of left-eye and right-eye images in a frame or raster originally intended for a single image. Other methods include dividing the pixels of an image into two interleaved groups and allocating one group to the left-eye image and the other group to the right-eye image; for example, alternate lines of pixels can be used for the two images.
To present the viewer with the correct illusion of depth, it is essential that his or her left eye sees the image from the left side viewpoint, and vice-versa. If the left-eye and right-eye images are transposed so that the left eye sees the view of the scene from the right and the right eye sees the view from the left, there is no realistic depth illusion and the viewer will feel discomfort. This is in marked contrast to the analogous case of stereophonic audio reproduction where transposition of the left and right audio channels produces a valid, equally-pleasing (but different) auditory experience.
The multiplicity of transmission formats for stereoscopic images leads to a significant probability of inadvertent transposition of the left and right images. The wholly unacceptable viewing experience that results from transposition gives rise to a need for a method of detecting, for a given ‘stereo-pair’ of images, which is the left-eye image, and which is the right-eye image. In this specification the term ‘stereo polarity’ will be used to denote the allocation of a stereo pair of images to the two image paths of a stereoscopic image processing or display system. If the stereo polarity is correct then the viewer's left and right eyes will be presented with the correct images for a valid illusion of depth.
In a stereo-pair of images depth is represented by the difference in horizontal position—the horizontal disparity—between the representations of a particular object in the two images of the pair. Objects intended to appear in the plane of the display device have no disparity; objects behind the display plane are moved to the left in the left image, and moved to the right in the right image; and, objects in front of the display plane are moved to the right in the left image, and moved to the left in the right image.
If it were known that all (or a majority of) portrayed objects were intended to be portrayed behind the display plane, then measurement of disparity would enable the left-eye and right-eye images to be identified: in the left-eye image objects would be further to the left than in the right-eye image; and, in the right-eye image objects would be further to the right than in the left-eye image.
However, it is common for objects to be portrayed either in front of or behind the display plane; and, a constant value may be added to, or subtracted from the disparity for a pair of images as part of the process of creating a stereoscopic motion-image sequence. For these reasons a simple measurement of horizontal disparity cannot be relied upon to identify left-eye and right-eye images of a stereo pair.
Some attempts have been made to overcome this problem by making statistical assumptions about image portrayal, specifically that an object appearing lower in an image is assumed to be to the front of an object appearing higher in the image. Reference is directed in this context to U.S. Pat. No. 6,268,881 and US 2010/0060720.
SUMMARY OF THE INVENTION
In one aspect, the present invention consists in a method of processing in an image processor a pair of images intended for stereoscopic presentation to identify the left-eye and right-eye images of the pair, comprising the steps of: dividing both images of the pair into a plurality of like image regions; determining for each region a disparity value between the images of the pair to produce a set of disparity values for the image pair; deriving for each region a confidence factor for the disparity value; determining a correlation parameter between the set of disparity values for the pair of images and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region; and identifying from said correlation parameter the left-eye and right-eye images of the pair.
In another aspect, the present invention consists in a method of processing images intended for stereoscopic presentation, comprising the steps of determining the spatial variation of disparity between images of a stereoscopic pair, comparing the determined spatial variation of disparity with a disparity model and identifying therefrom the left-eye and right-eye images of the pair; in which the disparity model represents objects at the sides of the frame as closer to the camera than objects nearer the horizontal centre of the frame.
BRIEF DESCRIPTION OF THE DRAWINGS
An example of the invention will now be described with reference to the drawings in which:
FIG. 1 shows a block diagram of an embodiment of the invention.
FIG. 2 shows the allocation of pixels of a horizontally subsampled image to horizontally overlapping blocks.
FIG. 3 shows the allocation of blocks to regions.
DETAILED DESCRIPTION OF AN EMBODIMENT
A method of identifying the stereo polarity of a pair of images intended for stereoscopic presentation will now be described. The method is based on evaluating the correlation between: the measured spatial distribution of horizontal disparity between the pair of images; and an expected model of that spatial distribution.
A block diagram of one possible implementation of this method is shown in FIG. 1. Pixel values for the respective images of a stereo pair of images are input at terminal (401) for image A and terminal (402) for image B. Typically these values will be luminance values of pixels. The two sets of pixel values are optionally filtered and down-sampled at (403) for image A and (404) for image B. In this example horizontal down-sampling by a factor of two is used, and input images having 720 pixels per line are down-sampled in the well-known manner so that they are represented by 360 pixels per line. In the description that follows, references to pixels correspond to these sub-sampled pixels.
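By way of illustration, a minimal numpy sketch of this pre-processing step follows. The [1, 2, 1]/4 pre-filter and the function name downsample_horizontal are assumptions made for the example; the patent only calls for filtering and down-sampling ‘in the well-known manner’.

```python
import numpy as np

def downsample_horizontal(luma: np.ndarray) -> np.ndarray:
    """Half-band filter and 2:1 horizontal decimation of a luminance image.

    `luma` is a (lines, pixels) array, e.g. 576x720, returned as 576x360.
    The [1, 2, 1]/4 pre-filter is an assumption; the text only calls for
    down-sampling 'in the well-known manner'.
    """
    padded = np.pad(luma, ((0, 0), (1, 1)), mode="edge")
    filtered = (padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:]) / 4
    return filtered[:, ::2]  # keep every second pixel: 720 -> 360
```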
The two images are then each identically divided, at (405) and (406), into horizontally overlapping blocks of pixels. A suitable block structure is shown in FIG. 2; it comprises a rectangular array, nine blocks high by 10 blocks wide. In this example, the input image has 576 lines and so each block is 64 lines high. The line numbers at the top and bottom of each block are shown in the Figure at (51); the top row of blocks includes parts of lines 1 to 64 and the bottom row includes parts of lines 513 to 576.
The blocks are 64 pixels wide, and overlap each other by 50%, and so the total width of the ten blocks is 352 pixels. The vertical grid of FIG. 2 corresponds to the block edges; the horizontal centre of each block corresponds with opposite edges of its neighbours. The positions of the horizontal centres of each column of blocks are shown by the numerals (52). A horizontal count of pixels is shown at (53); the first column of blocks includes pixels 5 to 68 of the relevant lines, and the last column includes pixels 293 to 356. Thus the total width of all ten columns is eight pixels less than the 360 pixels of each down-sampled line; four pixels at the start of each line, and four pixels at the end of each line, are therefore discarded.
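The block geometry just described can be expressed compactly. The sketch below (a hypothetical helper, not part of the patent) gathers the 9 by 10 grid of 64 by 64 blocks, using a 32-pixel horizontal stride and a 4-pixel offset so that pixels 5 to 68 form the first column and pixels 293 to 356 the last.

```python
import numpy as np

def extract_blocks(img: np.ndarray) -> np.ndarray:
    """Divide a 576x360 down-sampled image into the 9x10 grid of FIG. 2.

    Blocks are 64 lines high and 64 pixels wide, overlapping horizontally
    by 50% (32-pixel stride). Four pixels at each end of every line are
    discarded, so column j starts at pixel 4 + 32*j (0-based).
    Returns an array of shape (9, 10, 64, 64).
    """
    blocks = np.empty((9, 10, 64, 64), dtype=img.dtype)
    for row in range(9):          # 9 block rows of 64 lines each
        for col in range(10):     # 10 overlapping block columns
            y, x = 64 * row, 4 + 32 * col
            blocks[row, col] = img[y:y + 64, x:x + 64]
    return blocks
```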
Returning to FIG. 1, each block is vertically averaged to obtain a ‘one-dimensional block’ of 64 average pixel values, where each average value is the average of one of the 64 vertical columns of 64 pixels that comprise the block. The blocks representing image A are averaged at (407), and the blocks of image B are averaged at (408). The vertical averaging may be expressed as follows:
Let the set of 4,096 pixel values for a block be Y_2D(i,j)
Where: (i,j) are the Cartesian coordinates of the pixels of the block, with (1,1) at the top left corner of the block, and (64,64) at the bottom right corner of the block.
Then, the set of 64 pixel values for the corresponding one-dimensional block Y_1D(i) is given by:
Y_1D(i) = [ Σ_j Y_2D(i,j) ] ÷ 64    [1]
Where the range of the summation is all 64 values of j.
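Equation [1] amounts to a mean over the 64 lines of each block column. A one-line numpy sketch, assuming the (9, 10, 64, 64) block array of the previous sketch:

```python
import numpy as np

def vertical_average(blocks: np.ndarray) -> np.ndarray:
    """Equation [1]: collapse each 64x64 block to a one-dimensional block
    of 64 column averages. Input (9, 10, 64, 64) -> output (9, 10, 64)."""
    return blocks.mean(axis=2)  # average over the 64 lines of each column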
Respective pairs of co-located one-dimensional blocks from input image A and input image B are compared in a horizontal disparity estimation process (409). The result is a measure of the horizontal displacement of the picture content of image A with respect to the content of image B at each block position. The horizontal disparity estimation process (409) can use any of the known methods of image motion estimation, for example phase-correlation or block matching. Because the process is one-dimensional, the required processing resource is modest. The disparity estimation process finds the shifted positions of one-dimensional blocks from image A that best match the corresponding one-dimensional blocks from image B. The disparity output for a block is the signed magnitude of the horizontal shift in units of the (horizontally sub-sampled) pixel pitch. The sign is positive when image A has to be shifted to the right in order to obtain a match; and, the sign is negative when image A has to be shifted to the left in order to obtain a match.
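As one concrete realisation of process (409), the sketch below uses exhaustive one-dimensional block matching with a mean-absolute-difference criterion, following the sign convention of the text. The ±16-sample search range and the helper name are assumptions; phase correlation would serve equally.

```python
import numpy as np

def block_disparity(a: np.ndarray, b: np.ndarray, max_shift: int = 16):
    """Block-matching disparity for a pair of co-located one-dimensional
    blocks (64 samples each). Returns (disparity, match_error).

    Sign convention as in the text: positive when image A must be shifted
    to the RIGHT to match image B. The +/-16 sample search range is an
    assumption; the patent does not specify one.
    """
    best_d, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        # shifting A right by d aligns sample a[i] with sample b[i + d]
        if d >= 0:
            err = np.abs(a[:len(a) - d] - b[d:]).mean()
        else:
            err = np.abs(a[-d:] - b[:len(b) + d]).mean()
        if err < best_err:
            best_d, best_err = d, err
    return best_d, best_err
```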
Preferably, the horizontal disparity estimation process (409) also outputs a confidence measure for each measured block disparity value. In the case of phase correlation, where the measured disparity corresponds to the position of a peak in a correlation function, the confidence measure can be a measure of the height of that peak. Where block matching is used, the confidence measure can be derived from the match error, for example the reciprocal, or negative exponential, of a sum of pixel value differences between respective displaced pixel values (displaced by the disparity value) of the one-dimensional blocks.
It is also possible to use data describing image feature points, for example the method of describing image features described in GB-A-2474281, to determine the disparity. The disparity value for a block is then the average horizontal position difference between equivalent feature points in corresponding blocks from image A and image B. In this case it will usually be preferable to use the block data prior to vertical averaging in order to identify feature points for measurement. A confidence measure for disparity measured in this way can be derived from the number of matching feature points found in the relevant block, so that where many matching feature points are found, the confidence is high.
An alternative confidence value for a block, which may be simpler to evaluate in some embodiments, is the standard deviation of pixel values evaluated for the combination of the pixels in image A and image B for the relevant block.
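The confidence measures described above might be sketched as follows; the scaling constant alpha in the negative-exponential form is an assumed parameter.

```python
import numpy as np

def match_confidence(match_error: float, alpha: float = 0.1) -> float:
    """Confidence from the block-matching error: the negative exponential
    mentioned in the text. `alpha` is an assumed scaling constant."""
    return float(np.exp(-alpha * match_error))

def stddev_confidence(a: np.ndarray, b: np.ndarray) -> float:
    """Alternative confidence: standard deviation of the pixel values of
    the combined block data from image A and image B."""
    return float(np.std(np.concatenate([a, b])))
```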
The disparity values from the disparity estimation process (409) are vertically filtered in a low-pass filter (410). A suitable filter is given by:
F(D_I,J) = [ D_I,(J−1) + 2·D_I,J + D_I,(J+1) ] ÷ 4    [2]
Where: F(D_I,J) is the filtered disparity; and,
D_I,J is the disparity value for the block having Cartesian coordinates (I,J), such that block (1,1) is the top left block and block (10,9) is the bottom right block of the image.
Note that it is usually helpful to ‘edge pad’ the disparity values in the well-known manner prior to applying the above equation, so that the top and bottom rows of values are respectively duplicated outside the image area in order to minimise the influence of the top and bottom edges of the frame.
In some cases the vertical low pass filtering can be applied as part of the disparity estimation process, for example by filtering the correlation functions or match errors prior to the determination of the disparity values for blocks.
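A sketch of the vertical [1, 2, 1]/4 filter of equation [2], with the edge padding described above (top and bottom rows of values duplicated outside the image area):

```python
import numpy as np

def filter_disparities(disp: np.ndarray) -> np.ndarray:
    """Equation [2]: vertical [1, 2, 1]/4 low-pass filter of the 9x10
    block-disparity field, with the top and bottom rows edge-padded."""
    padded = np.pad(disp, ((1, 1), (0, 0)), mode="edge")  # duplicate rows
    return (padded[:-2, :] + 2 * padded[1:-1, :] + padded[2:, :]) / 4
```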
The vertically filtered disparity values for blocks are then assigned to image regions at (412). Preferably this allocation process is weighted according to confidence so that disparity values with high confidence contribute more, and disparity values with low confidence contribute less, to the disparity values for their respective regions. The output (415) of the assignment process (412) is a disparity value for each region. The confidence values for blocks are also averaged over each image region at (411) to obtain an average confidence value for each region.
A suitable arrangement of regions is shown in FIG. 3, which illustrates how each region is made up of a set of blocks. For simplicity, this figure does not show the overlap of the blocks, it merely shows their relative positions. There are three central regions, each comprising a contiguous block of three rows and four columns of blocks: region 1 (61) corresponds to the top central region; region 2 (62) corresponds to the middle central region; and region 3 (63) corresponds to the bottom central region.
There are also three image-edge regions, each comprising two separate sets of six blocks. (In FIG. 3, the two parts of each region have ‘a’ or ‘b’ added to the respective region number.) Region 4 comprises the combination of a top left region (64) and a top right region (65). Region 5 comprises the combination of a left central region (66) and a right central region (67). Region 6 comprises the combination of a bottom left region (68) and a bottom right region (69). As can be seen from FIG. 3, the image edge regions do not include the blocks adjacent to the left and right edges of the frame. This is to reduce the effect of ambiguous disparity values from these blocks.
Because of the horizontal overlap of the blocks, some pixels will contribute to more than one region. For example, pixels towards the top of the frame at horizontal positions 101 to 132 (as shown at (53) in FIG. 2) will contribute to both region 4 a and region 1.
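Reading FIG. 3 as described (the two outermost block columns discarded from regions, the four central columns forming regions 1 to 3, and the two flanking pairs of columns forming the split regions 4 to 6), the regional assignment with confidence weighting might look like the sketch below. The exact column indices are inferred from the text rather than taken from the figure.

```python
import numpy as np

# Region membership for the 9x10 block grid, one reading of FIG. 3:
# columns 0 and 9 (0-based) belong to no region, columns 1-2 and 7-8
# form the edge regions (4, 5, 6) and columns 3-6 the central regions
# (1, 2, 3), with rows split 0-2 / 3-5 / 6-8 from top to bottom.
REGION_ROWS = {1: range(0, 3), 2: range(3, 6), 3: range(6, 9),
               4: range(0, 3), 5: range(3, 6), 6: range(6, 9)}
REGION_COLS = {1: [3, 4, 5, 6], 2: [3, 4, 5, 6], 3: [3, 4, 5, 6],
               4: [1, 2, 7, 8], 5: [1, 2, 7, 8], 6: [1, 2, 7, 8]}

def regional_values(disp: np.ndarray, conf: np.ndarray):
    """Confidence-weighted disparity and mean confidence per region.

    `disp` and `conf` are the (9, 10) filtered block disparities and
    block confidences. Returns two dicts keyed by region number 1..6.
    """
    deltas, weights = {}, {}
    for k in range(1, 7):
        rows, cols = np.ix_(list(REGION_ROWS[k]), REGION_COLS[k])
        d, c = disp[rows, cols], conf[rows, cols]
        deltas[k] = float((d * c).sum() / max(c.sum(), 1e-9))
        weights[k] = float(c.mean())
    return deltas, weights
```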
The assessment of stereo polarity is made by calculating the correlation between: the set of disparity values for the regions; and, a reference model that comprises a typical set of disparity values for a pair of images where image A is the left-eye image and image B is the right-eye image. Preferably this correlation is a weighted correlation where contributions from regions of high confidence contribute more to the result than contributions from regions of low confidence. In the system of FIG. 1, a weighted correlation is made at (413) between a reference model input (414) and the regional confidence values (415).
A suitable correlation process is as follows:
C = [ Σ_k { w_k (δ_k − δ_Mean) R_k } ] ÷ N    [3]
Where: C is the computed correlation factor;
The summation is made over all regions;
w_k is the confidence value for region k;
δ_k is the disparity for region k;
δ_Mean is the mean disparity for all regions (see equation 4 below);
R_k is the disparity for region k of the reference model; and,
N is a normalisation factor (see equation 5 below).
A suitable expression for the mean disparity is:
δ_Mean = [ Σ_k δ_k ] ÷ 6    [4]
Where the summation is made over the six regions.
A suitable expression for the normalisation factor is:
N = √[ Σ_k w_k (δ_k − δ_Mean)² ] × √[ Σ_k R_k² ]    [5]
Where the summations are all made over the six regions.
Note that typically, the reference model itself is normalised so that Σ_k R_k is unity.
Note also that the regional confidence weighting is optional, so that all the values w_k may be set to unity if this weighting is not used.
The correlation factor C is a number in the range ±1. The larger the positive result the greater the similarity of the spatial distribution of measured disparity values with the distribution of the reference model. Large positive results are therefore evidence that image A is the left-eye image, and image B is the right-eye image. Large negative results indicate similarity with an inverted version of the reference model, where the directions of the disparity values are reversed. Large negative results are therefore evidence of the opposite stereo polarity.
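Equations [3] to [5] combine into a few lines. The sketch below also embeds the reference model values listed later in the text; the form of the normalisation factor N follows one reading of the source equation [5] and is therefore an assumption.

```python
import numpy as np

# Reference model disparities for regions 1..6, as given in the text.
REFERENCE_MODEL = {1: +10, 2: 0, 3: -20, 4: 0, 5: -5, 6: -25}

def correlation_factor(deltas, weights, model=REFERENCE_MODEL) -> float:
    """Equations [3]-[5]: weighted correlation of the six regional
    disparities against the reference model, normalised towards +/-1.
    Positive results are evidence that image A is the left-eye image."""
    d = np.array([deltas[k] for k in sorted(model)])
    w = np.array([weights[k] for k in sorted(model)])
    r = np.array([model[k] for k in sorted(model)], dtype=float)
    d_mean = d.mean()                                    # equation [4]
    n = np.sqrt((w * (d - d_mean) ** 2).sum()) * np.sqrt((r ** 2).sum())
    return float((w * (d - d_mean) * r).sum() / max(n, 1e-9))
```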
When processing sequences of related images, for example a motion image sequence, the correlation result C from the weighted correlation (413) can be improved by ‘temporal’ filtering, where results from related image pairs are combined. This is shown in FIG. 1 by the temporal low-pass filter (416), which could, for example, be a running average of correlation values. The output from the temporal filter (416) is a stereo polarity measure (417). An output exceeding a positive threshold gives confirmation that the stereo polarity is the same as that used to determine the reference model (414); an output below a negative threshold indicates the opposite polarity; and, a value close to zero indicates that the polarity cannot be reliably determined. A suitable threshold value is of the order of 0.05.
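A sketch of the temporal smoothing and threshold decision of (416) and (417); the 50-pair running-average window is an assumed parameter.

```python
class StereoPolarityDetector:
    """Temporal smoothing of per-pair correlation factors and a
    thresholded three-way decision, as in FIG. 1 (416)-(417).
    The running-average window length is an assumption."""

    def __init__(self, window: int = 50, threshold: float = 0.05):
        self.history: list[float] = []
        self.window, self.threshold = window, threshold

    def update(self, c: float) -> str:
        # running average of the most recent correlation values
        self.history = (self.history + [c])[-self.window:]
        measure = sum(self.history) / len(self.history)
        if measure > self.threshold:
            return "same polarity as reference model (A = left eye)"
        if measure < -self.threshold:
            return "opposite polarity (A = right eye)"
        return "polarity undetermined"
```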
A suitable reference model input for the regional structure shown in FIG. 3 is:
  • Region 1: +10
  • Region 2: 0
  • Region 3: −20
  • Region 4: 0
  • Region 5: −5
  • Region 6: −25
The rationale for choosing this model is that, for typical images, the lower part of the frame is more likely to be close to the camera (e.g. the ground), and the upper part of the frame is more likely to be distant (e.g. the sky). And, objects at the sides of the frame are often closer to the camera than objects nearer the horizontal centre of the frame.
Objects at infinity have a disparity equal to the inter-ocular (or inter-camera) distance, such that the left-eye image is displaced to the left relative to the right-eye image; and, the right-eye image is displaced to the right with respect to the left-eye image. Objects in the plane of the display screen have no disparity. Objects in front of the display screen have disparity opposite to distant objects so that the left-eye image is displaced to the right with respect to the right-eye image, and the right-eye image is displaced to the left with respect to the left-eye image. Thus if the inter-image disparity is expressed as the leftward shift of the left-eye image with respect to the right-eye image, then positive disparity is indicative of distant objects, and negative disparity is indicative of very near objects.
The reference model may also be obtained from analysis of typical stereo images; or the model may be optimised in an iterative process that compares the result of using the model on material having known stereo polarity with that known polarity.
Provided that three or more regions are used, the average of the disparity values of the model, and the range of the values of the model have no effect on the correlation result. Thus these aspects of the model can be chosen to simplify implementation. This also has the advantage that the method is unaffected by the range of disparity values present, which may differ for different types of image content, and the maximum disparity value, which may vary for different intended display sizes.
The described method can be implemented in different ways. Block structures different from that described above can be used, and the blocks can overlap horizontally, vertically or not at all. The disparity measurements for blocks may use horizontal or two-dimensional measures of inter-block match position. Where a two-dimensional measure is used, the horizontal component of the measured match position is used as the disparity value.
Regional structures differing from that shown in FIG. 3 can be used. Fewer regions simplify implementation of the process but are more likely to give inconclusive, or incorrect results. Regions may be contiguous, or may comprise separate parts of the image for which disparity data is combined.
Images may be spatially subsampled or oversampled in either one or two dimensions prior to disparity measurement, and the disparity measures may be spatially filtered in one or two dimensions prior to comparison with the model of disparity distribution.
Data from related images may be combined prior to analysis, for example by temporal filtering of a motion image sequence.
The model of disparity distribution may have any spatial resolution or any numerical resolution of the disparity values.

Claims (20)

The invention claimed is:
1. A method of processing in an image processor a pair of images intended for stereoscopic presentation to identify left-eye and right-eye images of the pair, the method comprising: dividing both images of the pair into a plurality of like image regions; determining for each region a disparity value between the images of the pair to produce a set of disparity values; deriving for each region a confidence factor for the disparity value; determining a correlation parameter between the set of disparity values and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region; and identifying from said correlation parameter the left-eye and right-eye images of the pair, wherein the left eye and right eye images form a stereoscopic pair.
2. A method according to claim 1 in which a horizontal disparity is evaluated for a plurality of respective pairs of co-located image blocks in the stereoscopic pair.
3. A method according to claim 2 in which image blocks overlap.
4. A method according to claim 1 in which image blocks are vertically averaged prior to determining the disparity value between the images of the pair.
5. A method according to claim 1 in which determining the disparity value between the images of the pair comprises phase correlation.
6. A method according to claim 1 in which determining the disparity value between the images of the pair comprises block matching.
7. A method according to claim 1 in which determining the disparity value between the images of the pair comprises identifying corresponding feature points in the pair of images and comparing their horizontal positions.
8. A method according to claim 1 in which a confidence value for an image block is determined from a standard deviation of values of pixels of that image block.
9. A method according to claim 1 in which a confidence value for an image block is determined from a height of a peak in a correlation function.
10. A method according to claim 1 in which a confidence value for an image block is determined from a match error comprising a sum of value differences between pixels in one image of the pair and respective shifted pixels in the other image of the pair.
11. A method according to claim 1 in which a confidence value for an image block is determined from the number of feature points in one image of the pair that match corresponding feature points in the other image of the pair.
12. A method of processing in an image processor images intended for stereoscopic presentation, the method comprising determining a spatial variation of disparity between images of a stereoscopic pair, comparing the determined spatial variation of disparity with a disparity model and identifying therefrom left-eye and right-eye images of the pair; in which the disparity model represents objects at sides of a frame as closer to the camera than objects nearer the horizontal center of the frame.
13. A non-transitory computer program product adapted to implement a method of processing in an image processor a pair of images intended for stereoscopic presentation to identify left-eye and right-eye images of the pair, the method comprising dividing both images of the pair into a plurality of like image regions; determining for each region a disparity value between the images of the pair to produce a set of disparity values for the image pair; deriving for each region a confidence factor for the measure of disparity; determining a correlation parameter between the set of disparity values and a corresponding set of disparity values from a disparity model, in which the contribution of the disparity value for a region to the said correlation parameter is weighted in dependence on the confidence factor for that region; and identifying from said correlation parameter the left-eye and right-eye images of the pair, wherein the left eye and right eye images form a stereoscopic pair.
14. A method according to claim 13 in which image blocks overlap.
15. A method according to claim 13 in which the image blocks are vertically averaged prior to determining the disparity value between the images of the pair.
16. A method according to claim 13 in which determining the disparity value between images of the pair comprises phase correlation and in which a confidence value for an image block is determined from a height of a peak in a correlation function.
17. A method according to claim 13 in which determining the disparity value between images of the pair comprises block matching and in which the confidence value for an image block is determined from a match error comprising a sum of value differences between pixels in one image of the pair and respective shifted pixels in the other image of the pair.
18. A method according to claim 17 in which a confidence value for an image block is determined from the number of feature points in one image of the pair that match corresponding feature points in the other image of the pair.
19. A method according to claim 13 in which determining the disparity value comprises identifying corresponding feature points in the pair of images and comparing their horizontal positions.
20. A method according to claim 13 in which a confidence value for an image block is determined from a standard deviation of the values of the pixels of that block.
US13/415,962 2011-03-11 2012-03-09 Analysis of stereoscopic images Expired - Fee Related US8526716B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1104159.7A GB2489202B (en) 2011-03-11 2011-03-11 Analysis of stereoscopic images
GB1104159.7 2011-03-11

Publications (2)

Publication Number Publication Date
US20120230580A1 US20120230580A1 (en) 2012-09-13
US8526716B2 true US8526716B2 (en) 2013-09-03

Family

ID=43980847

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/415,962 Expired - Fee Related US8526716B2 (en) 2011-03-11 2012-03-09 Analysis of stereoscopic images

Country Status (4)

Country Link
US (1) US8526716B2 (en)
EP (1) EP2498502A2 (en)
BR (1) BR102012005515A2 (en)
GB (2) GB2489202B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150035828A1 (en) * 2013-07-31 2015-02-05 Thomson Licensing Method for processing a current image of an image sequence, and corresponding computer program and processing device
US20150279017A1 (en) * 2014-03-28 2015-10-01 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle
US9264688B2 (en) 2011-05-13 2016-02-16 Snell Limited Video processing method and apparatus for use with a sequence of stereoscopic images

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008106185A (en) * 2006-10-27 2008-05-08 Shin Etsu Chem Co Ltd Method for adhering thermally conductive silicone composition, primer for adhesion of thermally conductive silicone composition and method for production of adhesion composite of thermally conductive silicone composition
US10491915B2 (en) 2011-07-05 2019-11-26 Texas Instruments Incorporated Method, system and computer program product for encoding disparities between views of a stereoscopic image
WO2014037603A1 (en) * 2012-09-06 2014-03-13 Nokia Corporation An apparatus, a method and a computer program for image processing
US9275468B2 (en) * 2014-04-15 2016-03-01 Intel Corporation Fallback detection in motion estimation
US9713982B2 (en) 2014-05-22 2017-07-25 Brain Corporation Apparatus and methods for robotic operation using video imagery
US10194163B2 (en) 2014-05-22 2019-01-29 Brain Corporation Apparatus and methods for real time estimation of differential motion in live video
US9939253B2 (en) * 2014-05-22 2018-04-10 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
US9870617B2 (en) 2014-09-19 2018-01-16 Brain Corporation Apparatus and methods for saliency detection based on color occurrence analysis
US10197664B2 (en) 2015-07-20 2019-02-05 Brain Corporation Apparatus and methods for detection of objects using broadband signals
US10636110B2 (en) * 2016-06-28 2020-04-28 Intel Corporation Architecture for interleaved rasterization and pixel shading for virtual reality and multi-view systems

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856843A (en) 1994-04-26 1999-01-05 Canon Kabushiki Kaisha Stereoscopic display method and apparatus
US20090219283A1 (en) * 2008-02-29 2009-09-03 Disney Enterprises, Inc. Non-linear depth rendering of stereoscopic animated images
US20100060720A1 (en) 2008-09-09 2010-03-11 Yasutaka Hirasawa Apparatus, method, and computer program for analyzing image data
US20110050857A1 (en) 2009-09-03 2011-03-03 Electronics And Telecommunications Research Institute Apparatus and method for displaying 3d image in 3d image system
US20110169818A1 (en) * 2010-01-13 2011-07-14 Sharp Laboratories Of America, Inc. Reducing viewing discomfort
US20120113093A1 (en) * 2010-11-09 2012-05-10 Sharp Laboratories Of America, Inc. Modification of perceived depth by stereo image synthesis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2474281A (en) 2009-10-09 2011-04-13 Snell Ltd Defining image features using local spatial maxima and minima
US8842114B1 (en) * 2011-04-29 2014-09-23 Nvidia Corporation System, method, and computer program product for adjusting a depth of displayed objects within a region of a display

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856843A (en) 1994-04-26 1999-01-05 Canon Kabushiki Kaisha Stereoscopic display method and apparatus
US6268881B1 (en) 1994-04-26 2001-07-31 Canon Kabushiki Kaisha Stereoscopic display method and apparatus
US20090219283A1 (en) * 2008-02-29 2009-09-03 Disney Enterprises, Inc. Non-linear depth rendering of stereoscopic animated images
US8228327B2 (en) * 2008-02-29 2012-07-24 Disney Enterprises, Inc. Non-linear depth rendering of stereoscopic animated images
US20100060720A1 (en) 2008-09-09 2010-03-11 Yasutaka Hirasawa Apparatus, method, and computer program for analyzing image data
US20110050857A1 (en) 2009-09-03 2011-03-03 Electronics And Telecommunications Research Institute Apparatus and method for displaying 3d image in 3d image system
US20110169818A1 (en) * 2010-01-13 2011-07-14 Sharp Laboratories Of America, Inc. Reducing viewing discomfort
US20120113093A1 (en) * 2010-11-09 2012-05-10 Sharp Laboratories Of America, Inc. Modification of perceived depth by stereo image synthesis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Search Report from the United Kingdom Intellectual Property Office for Application No. 1104159.7 dated Jun. 28, 2011 (3 pages).

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9264688B2 (en) 2011-05-13 2016-02-16 Snell Limited Video processing method and apparatus for use with a sequence of stereoscopic images
US10154240B2 (en) 2011-05-13 2018-12-11 Snell Advanced Media Limited Video processing method and apparatus for use with a sequence of stereoscopic images
US10728511B2 (en) 2011-05-13 2020-07-28 Grass Valley Limited Video processing method and apparatus for use with a sequence of stereoscopic images
US20150035828A1 (en) * 2013-07-31 2015-02-05 Thomson Licensing Method for processing a current image of an image sequence, and corresponding computer program and processing device
US10074209B2 (en) * 2013-07-31 2018-09-11 Thomson Licensing Method for processing a current image of an image sequence, and corresponding computer program and processing device
US20150279017A1 (en) * 2014-03-28 2015-10-01 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle
US9378553B2 (en) * 2014-03-28 2016-06-28 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle

Also Published As

Publication number Publication date
US20120230580A1 (en) 2012-09-13
GB2534504B (en) 2016-12-28
BR102012005515A2 (en) 2022-08-02
GB2534504A (en) 2016-07-27
EP2498502A2 (en) 2012-09-12
GB2489202A (en) 2012-09-26
GB201104159D0 (en) 2011-04-27
GB2489202B (en) 2016-06-29


Legal Events

Date Code Title Description
AS Assignment

Owner name: SNELL LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KNEE, MICHAEL JAMES;REEL/FRAME:028240/0601

Effective date: 20120321

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: GRASS VALLEY LIMITED, GREAT BRITAIN

Free format text: CHANGE OF NAME;ASSIGNOR:SNELL ADVANCED MEDIA LIMITED;REEL/FRAME:052127/0795

Effective date: 20181101

Owner name: SNELL ADVANCED MEDIA LIMITED, GREAT BRITAIN

Free format text: CHANGE OF NAME;ASSIGNOR:SNELL LIMITED;REEL/FRAME:052127/0941

Effective date: 20160622

AS Assignment

Owner name: MGG INVESTMENT GROUP LP, AS COLLATERAL AGENT, NEW YORK

Free format text: GRANT OF SECURITY INTEREST - PATENTS;ASSIGNORS:GRASS VALLEY USA, LLC;GRASS VALLEY CANADA;GRASS VALLEY LIMITED;REEL/FRAME:053122/0666

Effective date: 20200702

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210903

AS Assignment

Owner name: GRASS VALLEY LIMITED, UNITED KINGDOM

Free format text: TERMINATION AND RELEASE OF PATENT SECURITY AGREEMENT;ASSIGNOR:MGG INVESTMENT GROUP LP;REEL/FRAME:066867/0336

Effective date: 20240320

Owner name: GRASS VALLEY CANADA, CANADA

Free format text: TERMINATION AND RELEASE OF PATENT SECURITY AGREEMENT;ASSIGNOR:MGG INVESTMENT GROUP LP;REEL/FRAME:066867/0336

Effective date: 20240320

Owner name: GRASS VALLEY USA, LLC, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF PATENT SECURITY AGREEMENT;ASSIGNOR:MGG INVESTMENT GROUP LP;REEL/FRAME:066867/0336

Effective date: 20240320