WO2014114601A1 - Interpolation method and corresponding device


Publication number
WO2014114601A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
block
values
pixel
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2014/051051
Other languages
English (en)
French (fr)
Inventor
Guillaume Boisson
Paul Kerbiriou
David NEBOUY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to US14/762,462 (granted as US9652826B2)
Priority to JP2015554119A (published as JP2016511962A)
Priority to KR1020157019890A (published as KR20150110541A)
Priority to EP14700931.0A (granted as EP2948921B1)
Priority to CN201480006020.8A (published as CN104969258A)
Publication of WO2014114601A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/15 Processing image signals for colour aspects of image signals

Definitions

  • The invention relates to the field of image or video processing.
  • The invention also relates to the field of interpolation of pixel blocks and more specifically to the upsampling of a source pixel block or matrix.
  • Devices exist which make it possible to capture various information to be associated with the pixels of an image, such as for example the grey levels or disparity information between several views of a same scene, that is to say between several images of a same scene captured from several viewpoints.
  • The grey level information is for example captured using the CCD sensors of a still camera or of a video camera, and the inter-view disparity information is for example captured using a depth sensor (for example of Kinect® type) or calculated using disparity estimation software.
  • This information is stored in the maps associated with the images, for example a grey level map corresponding to a pixel matrix comprising as many pixels as the image with which it is associated, a grey level value being associated with each pixel, or a disparity map corresponding to a pixel matrix comprising as many pixels as the image with which it is associated, a disparity value being associated with each pixel.
  • Capture or estimation errors during the capture or estimation of this information cause holes to appear in the maps associated with the images, that is to say certain pixels have no associated (grey level or disparity) information or have erroneous associated information.
  • The maps sometimes have limited resolutions due to hardware limitations intrinsic to the sensors used for capturing this information, or due to real-time capture constraints which prevent high-resolution capture.
  • The purpose of the invention is to overcome at least one of these disadvantages of the prior art. More specifically, the purpose of the invention is notably to provide an information map associated with an image that is complete and/or of better resolution.
  • The invention relates to a method of interpolating a first block of pixels into a second block of pixels, the second block comprising a number of pixels greater than the number of pixels of the first block, the first block of pixels comprising four pixels arranged in two adjacent rows and two adjacent columns, first values being associated with the pixels of the first block, the first values being determinate for three of the four pixels and indeterminate for one of the four pixels.
  • The method comprises determining second values to be associated with at least a part of the interpolated pixels of the second block, from the coordinates of the three pixels having determinate first values in the first block, from the first values associated with these three pixels and from the coordinates of the interpolated pixels in the second block.
  • The method further comprises determining the equation of a plane passing through the three pixels having determinate first values, from the coordinates of these three pixels in the first block and the first values associated with them, the second values to be associated with the at least a part of the interpolated pixels being determined from the equation of the plane and the coordinates of the interpolated pixels in the second block.
  • At least a part of the determined second values are associated with interpolated pixels positioned outside a polygon having as vertices the three pixels of the first block copied into the second block.
  • An indeterminate first value is associated with at least one pixel of the second block.
  • The number of pixels of the second block with which the indeterminate first value is associated is a function of the parity of at least one of the horizontal and vertical upsampling factors used to obtain the second block of pixels by interpolation of the first block of pixels.
  • The second block corresponds to an upsampling of the first block using a horizontal upsampling factor and a vertical upsampling factor; the method further comprises determining the parity of said horizontal and vertical upsampling factors.
  • The interpolation rule is chosen from among a set of rules comprising the following rules:
  • The first values are values representative of disparity.
  • The first values are values representative of grey level.
  • The invention also relates to a module for interpolating a first block of pixels into a second block of pixels, the second block comprising a number of pixels greater than the number of pixels of the first block, the first block of pixels comprising four pixels arranged in two adjacent rows and two adjacent columns, first values being associated with the pixels of the first block, the first values being determinate for three of the four pixels and indeterminate for one of the four pixels, the module comprising at least one processor configured for determining second values to be associated with at least a part of the interpolated pixels of the second block, from the coordinates of the three pixels having determinate first values in the first block, from the first values associated with these three pixels and from the coordinates of the interpolated pixels in the second block.
  • The at least one processor is a graphics processing unit (GPU).
  • The at least one processor is further configured for determining the equation of a plane passing through the three pixels having determinate first values, from the coordinates of these three pixels in the first block and the first values associated with them, the second values to be associated with the at least a part of the interpolated pixels being determined from the equation of the plane and the coordinates of the interpolated pixels in the second block.
  • The invention also relates to a computer program product comprising program code instructions for executing the steps of the interpolation method when this program is executed on a computer.
  • Figure 1 shows the generation of a second pixel matrix from a first pixel matrix, according to a particular embodiment of the invention;
  • Figure 2 shows the interpolation of a group of pixels of the first pixel matrix of figure 1 to a second group of pixels, according to a particular implementation of the invention;
  • Figure 3 diagrammatically shows a device implementing a method for interpolating the first pixel matrix of figure 1, according to a particular implementation of the invention;
  • Figure 4 shows a method for interpolating the first pixel matrix of figure 1, implemented in the device of figure 3, according to a particular embodiment of the invention.
  • Figure 1 shows the generation of a second pixel matrix 13 (also called second pixel block) by interpolation of a first pixel matrix 10 (also called first pixel block), according to a particular and non-restrictive embodiment of the invention.
  • The second pixel matrix 13 advantageously comprises more pixels than the first pixel matrix 10.
  • The resolution of the second pixel matrix 13 is greater than the resolution of the first pixel matrix 10.
  • The resolution of the first pixel matrix 10 is for example 640*360 pixels (that is to say 640 columns and 360 rows) and the resolution of the second pixel matrix 13 is 1920*1080 (that is to say 1920 columns and 1080 rows).
  • The horizontal and vertical upsampling factors are then equal.
  • According to another example, the resolution of the first pixel matrix 10 is 640*360 pixels in interlaced mode and the resolution of the second pixel matrix 13 is 1920*1080.
  • The horizontal upsampling factor N and the vertical upsampling factor M are then different, M being equal to 6 and N being equal to 3.
  • The upsampling factors can be either even or odd.
  • The first matrix 10 advantageously corresponds to a disparity map associated with an image when the values associated with the pixels of the first matrix 10 are representative of disparity.
  • A disparity map associated with a first image comprises, for each pixel, the horizontal difference in pixels between a pixel of the first image (for example the left image) and the corresponding pixel of the second image (for example the right image), two corresponding pixels of the first image and the second image representing the same element of the scene.
  • The disparity values associated with the pixels of an image are for example captured by a suitable device.
  • The disparity values may also be estimated by comparing two images of a same scene, that is to say by matching each pixel of a first image of the scene with a corresponding pixel of the second image of the same scene (two corresponding pixels representing the same element of the scene, that is to say having a same grey level value, within a margin of error) and by determining the difference in horizontal position between the pixel of the first image and the corresponding pixel of the second image (expressed in number of pixels).
  • For pixels for which this capture or estimation fails, the disparity value stored in the first pixel matrix is indeterminate or unknown.
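The matching step described above can be sketched in a few lines. This is a minimal illustration only: the helper name, the sample grey levels and the search range `max_d` are assumptions, and practical estimators match blocks of pixels rather than single grey levels.

```python
def estimate_disparity_row(left_row, right_row, max_d=8):
    """For each left-image pixel, take as disparity the horizontal offset d
    whose right-image grey level is closest (naive single-pixel matching)."""
    disparities = []
    for x, grey in enumerate(left_row):
        best_d, best_err = 0, float("inf")
        for d in range(max_d + 1):
            if x - d < 0:
                break  # offset would fall outside the right image
            err = abs(grey - right_row[x - d])
            if err < best_err:
                best_d, best_err = d, err
        disparities.append(best_d)
    return disparities

# A row shifted right by one pixel yields a disparity of 1 wherever a match exists.
left = [10, 20, 30, 40]
right = [20, 30, 40, 50]  # right[x - 1] == left[x]
print(estimate_disparity_row(left, right))  # [0, 1, 1, 1]
```

Where no satisfactory match is found, a real implementation would store the predetermined "indeterminate" code instead of a disparity.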
  • An indeterminate or unknown value is for example identified via a predetermined code, for example a code equal to 11111111 (0xFF in hexadecimal) when the disparity value is coded on 8 bits or 11111111111 (0x7FF in hexadecimal) when the disparity value is coded on 11 bits.
  • The first matrix 10 advantageously corresponds to a grey level map associated with an image when the values associated with the pixels of the first matrix 10 are representative of grey levels.
  • The values representative of grey level are for example coded on 8, 10 or 12 bits, a grey level for example being available for each colour channel of the image, that is to say that a grey level map is associated with each colour channel (for example RGB).
  • The grey level value or values associated with one or more pixels of the first pixel matrix may be erroneous (for example following a sensor measurement problem).
  • The grey level value stored for this or these pixels in the first pixel matrix is then indeterminate or unknown.
  • An indeterminate or unknown value is for example identified via a predetermined code, for example a code equal to 11111111 or 00000000 when the grey level value is coded on 8 bits.
  • Each pixel of the first and second matrices is defined by the column number C / row number L pair corresponding to the coordinates of the pixel in the pixel matrix, the position of a pixel of the first matrix being defined by the pair (C1; L1) and the position of a pixel of the second matrix being defined by the pair (C2; L2).
  • The second pixel matrix 13 is for example obtained by bilinear interpolation of the first pixel matrix 10.
  • Each pixel of the first matrix 10 is copied into a first intermediary version 11 of the second matrix at a position which depends on the position of the pixel in the first matrix 10, the horizontal upsampling factor N and the vertical upsampling factor M.
  • The position of a pixel of the first matrix 10 copied into the first intermediary version 11 of the second matrix 13 is obtained by the following equations:
  • C2 = N * C1 and L2 = M * L1
  • Considering the 4 adjacent pixels 101, 102, 103 and 104 appearing in grey on the first matrix 10 and having respectively the coordinates (1; 1), (1; 2), (2; 1) and (2; 2), expressed in column number C1 and row number L1, the first row and the first column of a matrix being numbered 0, the coordinates of the 4 pixels 111, 112, 113 and 114 of the first version 11 of the second matrix corresponding respectively to the 4 pixels 101, 102, 103 and 104 of the first matrix 10 are respectively (3; 6), (3; 12), (6; 6) and (6; 12).
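The two copy equations can be checked with a few lines of code (a sketch; the helper name is an assumption):

```python
def copy_position(c1, l1, n, m):
    """Coordinates (C2; L2) in the intermediary matrix of the first-matrix
    pixel (C1; L1), per C2 = N * C1 and L2 = M * L1."""
    return n * c1, m * l1

# The four grey pixels of figure 1, with N = 3 and M = 6:
positions = [copy_position(c, l, 3, 6) for c, l in [(1, 1), (1, 2), (2, 1), (2, 2)]]
print(positions)  # [(3, 6), (3, 12), (6, 6), (6, 12)]
```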
  • Pixels of the columns for which the values to be associated are determined by interpolation are identified in grey with white hatches on the second intermediary version 12 of the second matrix.
  • A part of the second intermediary version 12 of the second matrix is enlarged for greater legibility.
  • The enlarged part is surrounded by a dashed oval and corresponds to a part of column number 3 which comprises pixels 111, 121, 122, 123, 124, 125 and 112.
  • Pixels 111 and 112 correspond to pixels 101 and 102 of the first matrix 10, that is to say the value associated with each of these pixels 111 and 112 is a copy of the value associated respectively with pixels 101 and 102.
  • The values associated with pixels 121 to 125 are determined by interpolation of the values associated with pixels 111 and 112, pixels 111 and 112 surrounding pixels 121 to 125. Then the values of the remaining pixels (identified in white on the second intermediary version 12 of the second pixel matrix) are determined row by row, by using the values of the pixels of a row surrounding the pixels for which the associated values are determined. This is shown by the second pixel matrix 13, where the pixels of the rows for which the associated values have been determined appear in grey with white hatches. As an example, a part of the second matrix 13 is enlarged for greater legibility.
  • The enlarged part is surrounded by a dashed oval and corresponds to a part of row 10 which comprises pixels 124, 131, 132 and 129, the values associated with pixels 124 and 129 having been determined previously by interpolation of values from the first matrix 10.
  • The values associated with pixels 131 and 132 are determined by interpolation of the values associated with pixels 124 and 129, pixels 124 and 129 surrounding pixels 131 and 132.
  • Interpolation coefficients are used to weight the values associated with the pixels surrounding the pixels for which the values to be associated are to be determined.
  • Interpolation coefficients are advantageously determined for each pixel whose value is to be determined, as a function of the vertical upsampling factor M for the pixels of the columns (as indicated on the second intermediary version 12 of the second matrix for the pixels appearing in grey with white hatches, for example pixels 121 to 125) and as a function of the horizontal upsampling factor N for the pixels of the rows (as indicated on the second matrix 13 for the pixels appearing in grey with white hatches, for example pixels 131 and 132).
  • The first and second vertical weighting factors are calculated from the following equations: ay = k / M and by = 1 - ay,
  • where k is comprised between 1 and M-1, k being equal to M-1 for the pixel nearest to the first pixel surrounding the pixel to be determined and 1 for the pixel furthest from it, k being decremented by 1 as one moves away from the first surrounding pixel and approaches the second surrounding pixel.
  • The value V to be associated with one of pixels 121 to 125 is the weighted average of the values associated with the first and second pixels 111 and 112 surrounding pixels 121 to 125, the weights being the vertical interpolation factors, namely Vpixel = ay * Vpixel,111 + by * Vpixel,112, where:
  • Vpixel is the value to be associated with the pixel whose associated value we seek to determine,
  • Vpixel,111 is the value associated with the first pixel surrounding the pixel whose associated value we seek to determine,
  • Vpixel,112 is the value associated with the second pixel surrounding the pixel whose associated value we seek to determine.
  • V121 = 5/6 * Vpixel,111 + 1/6 * Vpixel,112
  • V122 = 4/6 * Vpixel,111 + 2/6 * Vpixel,112
  • V123 = 3/6 * Vpixel,111 + 3/6 * Vpixel,112
  • V124 = 2/6 * Vpixel,111 + 4/6 * Vpixel,112
  • V125 = 1/6 * Vpixel,111 + 5/6 * Vpixel,112
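The column pass above can be sketched as follows (a minimal sketch: the helper name and the sample values 60 and 0 for pixels 111 and 112 are assumptions):

```python
def vertical_interpolate(v_first, v_second, m):
    """Values of the M - 1 pixels between the two surrounding pixels,
    weighted by ay = k / M (first pixel) and 1 - ay (second pixel),
    k running from M - 1 (nearest the first pixel) down to 1."""
    return [(k / m) * v_first + (1 - k / m) * v_second
            for k in range(m - 1, 0, -1)]

# Pixels 121 to 125 between pixels 111 and 112, with M = 6:
values = vertical_interpolate(60.0, 0.0, 6)  # approximately [50, 40, 30, 20, 10]
```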
  • Horizontal interpolation factors are used to determine the values to be associated with the pixels of a row from the values of two pixels surrounding these pixels on the row considered, for which the associated values are known (either copied from the first matrix or determined by the interpolation applied to a column as explained previously).
  • First and second horizontal interpolation factors are advantageously determined as a function of the horizontal upsampling factor N, namely: ax = k / N and bx = 1 - ax,
  • where k is comprised between 1 and N-1, k being equal to N-1 for the pixel nearest to the first pixel surrounding the pixel to be determined and 1 for the pixel furthest from it, k being decremented by 1 as one moves away from the first surrounding pixel and approaches the second surrounding pixel.
  • For pixels 131 and 132, whose associated values are determined from the values of pixels 124 and 129, and by considering that pixel 124 is the first pixel surrounding pixels 131 and 132 and pixel 129 is the second pixel surrounding pixels 131 and 132, the values of the first and second horizontal interpolation factors applied to pixels 131 and 132 are respectively (2/3; 1/3) and (1/3; 2/3).
  • The value V to be associated with one of pixels 131 and 132 is the weighted average of the values associated with the first and second pixels 124 and 129 surrounding pixels 131 and 132, the weights being the horizontal interpolation factors, namely Vpixel = ax * Vpixel,124 + bx * Vpixel,129, where:
  • Vpixel is the value to be associated with the pixel whose associated value we seek to determine,
  • Vpixel,124 is the value associated with the first pixel surrounding the pixel whose associated value we seek to determine,
  • Vpixel,129 is the value associated with the second pixel surrounding the pixel whose associated value we seek to determine.
  • V131 = 2/3 * Vpixel,124 + 1/3 * Vpixel,129
  • V132 = 1/3 * Vpixel,124 + 2/3 * Vpixel,129
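The row pass is symmetric to the column pass (a sketch; the helper name and the sample values 30 and 90 for pixels 124 and 129 are assumptions):

```python
def horizontal_interpolate(v_first, v_second, n):
    """Values of the N - 1 pixels of a row between the two surrounding pixels,
    weighted by ax = k / N (first pixel) and 1 - ax (second pixel),
    k running from N - 1 (nearest the first pixel) down to 1."""
    return [(k / n) * v_first + (1 - k / n) * v_second
            for k in range(n - 1, 0, -1)]

# Pixels 131 and 132 between pixels 124 and 129, with N = 3:
v_131, v_132 = horizontal_interpolate(30.0, 90.0, 3)  # approximately 50 and 70
```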
  • The bilinear interpolation method is advantageously used to determine the values of the pixels for which the associated value is not known, when the values associated with the surrounding pixels are known and/or have no significant variation from one pixel to another. For example, if the value associated with one or more of pixels 101 to 104 of the first matrix 10 is not known, the bilinear interpolation method will not be used to determine the pixels of the second matrix 13 surrounded by the 4 pixels 111 to 114 of the second matrix 13 corresponding to the 4 pixels 101 to 104 of the first matrix 10.
  • In this case, the method described with regard to figure 2 will advantageously be used in order to minimise the interpolation errors.
  • Figure 2 shows the interpolation of a first group of pixels 21, 22, 23 and 24 (called first block of pixels) to a second group of pixels 201 to 230 (called second block of pixels), according to a particular and advantageous embodiment of the invention.
  • The first block of pixels 21 to 24 corresponds for example to a block of pixels of the first pixel matrix 10 or to a first pixel matrix 20 comprising only 4 pixels.
  • The horizontal upsampling factor N is equal to 4 and the vertical upsampling factor M is equal to 5.
  • The values associated with pixels 21, 23 and 24 are known (either via measurements carried out by one or more suitable sensors or via any estimation method known to those skilled in the art).
  • The three pixels 21, 23 and 24 are adjacent and are distributed over two adjacent columns and two adjacent rows.
  • The value associated with the fourth pixel 22 is unknown or representative of a capture or estimation error.
  • The difference between the value associated with this pixel 22 and the value associated with the adjacent pixel 21 of the same row, or the difference between the value associated with this pixel 22 and the value associated with the adjacent pixel 24 of the same column, is greater than a threshold value.
  • A difference in values greater than a threshold value is representative of this pixel 22 on one hand and pixels 21, 23 and 24 on the other hand belonging to two different objects of the image associated with the first block.
  • A significant difference in disparity between two pixels signifies that the objects to which pixel 22 on one hand and pixels 21, 23 and 24 on the other hand belong are in different depth planes.
  • A significant difference in grey level (that is to say greater than a threshold value, for example greater than 16 or 32 when the grey levels are coded on 8 bits, that is to say on a scale of 0 to 255) reveals the presence of a contour at pixels 21 to 24, that is to say that pixel 22 on one hand and pixels 21, 23 and 24 on the other hand belong to different objects of the image.
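The threshold test above can be sketched as follows (the helper name, the default threshold of 16 and the sample values are assumptions for illustration):

```python
def crosses_contour(v22, v21, v24, threshold=16):
    """True when pixel 22 likely belongs to another object than pixels 21, 23
    and 24: its value differs by more than the threshold from its row
    neighbour 21 or from its column neighbour 24 (e.g. threshold 16 on an
    8-bit grey-level scale)."""
    return abs(v22 - v21) > threshold or abs(v22 - v24) > threshold

print(crosses_contour(200, 50, 55))  # True: pixel 22 sits on another object
print(crosses_contour(52, 50, 55))   # False: the values are locally uniform
```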
  • The values associated with pixels 21, 23 and 24 are copied and associated with the corresponding pixels of the second block 200, that is to say associated respectively with pixels 201, 226 and 230.
  • The unknown value (or the value having a difference greater than the threshold value with respect to one of the values associated with pixels 21 and 24) associated with pixel 22 is also copied and associated with pixel 205 of the second block 200 corresponding to pixel 22 of the first block 20.
  • The coordinates in the X, Y reference system (that is to say the column and row numbers) of pixels 201, 205, 226 and 230 of the second block 200, corresponding respectively to pixels 21, 22, 23 and 24, are determined from the coordinates of pixels 21, 22, 23 and 24 in the first block 20 and from the upsampling factors M and N.
  • The coordinates of pixels 21, 22, 23 and 24 are thus for example respectively (0, 0), (1, 0), (0, 1) and (1, 1) in the first block 20, and the coordinates of the corresponding pixels 201, 205, 226 and 230 are respectively (0, 0), (4, 0), (0, 5) and (4, 5) in the second block 200.
  • The equation of the plane passing through the three pixels whose associated values are known (of the first block 20 or, equivalently, of the second block 200, pixels 21 to 24 of the first block being copied into the second block) is determined.
  • The Cartesian equation of the plane has the form a * x + b * y + c * z + d = 0, where:
  • x and y correspond to the coordinates of the pixels of the plane (that is to say respectively the column number and the row number of a pixel of the plane);
  • z corresponds to the value associated with a pixel of the plane;
  • a, b, c and d correspond to the factors of the plane which it is possible to determine by knowing 3 pixels of the plane.
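The plane determination can be sketched as follows. The helper names and the three sample first values (10, 40 and 20) are assumptions; the normal of the plane is obtained as the cross product of two in-plane vectors, and each interpolated pixel (x, y) then receives its second value z directly from the plane.

```python
def plane_through(p1, p2, p3):
    """Factors (a, b, c, d) of the plane a*x + b*y + c*z + d = 0
    passing through three points p = (x, y, z)."""
    u = [p2[i] - p1[i] for i in range(3)]   # first in-plane vector
    v = [p3[i] - p1[i] for i in range(3)]   # second in-plane vector
    a = u[1] * v[2] - u[2] * v[1]           # normal = u x v
    b = u[2] * v[0] - u[0] * v[2]
    c = u[0] * v[1] - u[1] * v[0]
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

def value_at(a, b, c, d, x, y):
    """Second value z of the interpolated pixel (x, y), assuming c != 0."""
    return -(a * x + b * y + d) / c

# Pixels 201 (0; 0), 226 (0; 5) and 230 (4; 5) of the second block,
# with invented sample first values 10, 40 and 20:
a, b, c, d = plane_through((0, 0, 10.0), (0, 5, 40.0), (4, 5, 20.0))
print(value_at(a, b, c, d, 2, 2))  # 12.0 for this sample plane
```

The case c = 0 (the three first values not defining z as a function of x and y) would need separate handling in a full implementation.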
  • The value associated with pixel 22 and copied to pixel 205 of the second block 200 is assigned to other pixels of the second block for which we seek to determine the value to be associated.
  • The value of pixel 205 is propagated to the pixels belonging both to the column adjacent to the column of pixel 205 (inside the area delimited by pixels 201, 205, 226 and 230) and to the two rows nearest to the row comprising pixel 205. This value is thus associated with pixels 204, 209, 210, 214 and 215.
  • The rule or rules determining to which pixels of the second block 200 the value of pixel 205 must be assigned is (are) advantageously predetermined.
  • One of the criteria for deciding to which pixels to assign the unknown value is the parity of the (horizontal and vertical) upsampling factors.
  • When the upsampling factor M or N is odd, the value of pixel 205 is propagated over half of the interpolated rows/columns, the number of rows/columns concerned being equal to (M - 1)/2 or (N - 1)/2 respectively.
  • The row comprising pixels 201 and 205 is for example numbered 0 and the row comprising pixels 226 and 230 is numbered 5.
  • The two rows comprising pixels to which the value of pixel 205 is assigned are thus the rows numbered 1 and 2.
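The parity rule can be summarised in a few lines (a sketch; the helper name and the return convention are assumptions, and the even case simply flags the ambiguous middle row/column whose rule must be chosen a priori):

```python
def propagation_plan(factor):
    """For one direction (M or N): how many interpolated rows/columns nearest
    the indeterminate pixel receive its value ((factor - 1) // 2), and whether
    an ambiguous middle row/column remains (even factor) to be handled by a
    predetermined rule (plane equation, copy, or left indeterminate)."""
    return (factor - 1) // 2, factor % 2 == 0

print(propagation_plan(5))  # (2, False): M = 5, rows 1 and 2 receive the value
print(propagation_plan(4))  # (1, True): N = 4, the middle column needs the rule
```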
  • The horizontal upsampling factor N being even (equal to 4 according to the example of figure 2), it is not possible to divide the number of columns of the second block 200 in two to obtain a whole number.
  • For the pixels of the middle column (the column numbered 2 comprising pixels 203 and 228, working on the principle that the column numbered 0 is the left-most column, that is to say the column comprising pixels 201 and 226), it is necessary to determine a priori which rule to apply to determine their associated values: either use the equation of the plane, or copy the value associated with pixel 205.
  • Both rules are meaningful.
  • It is decided a priori which rule to apply, this rule being programmed and applied by default subsequently during the implementation of the interpolation method described herein.
  • According to the chosen rule, the disparity value which corresponds to the depth furthest from the camera (from the viewpoint) will be copied, and the number of pixels for which the disparity value is indeterminate will be limited.
  • Alternatively, the pixels of the middle row and/or column will remain indeterminate, that is to say that the values associated with these pixels will remain unknown or indeterminate.
  • According to a variant, the equation of the plane is of parametric type and is determined from the coordinates (x23, y23, z23) of a pixel denoted A (for example pixel 23) and two non-collinear vectors, for example a first vector u (u1, u2, u3) defined from the pair of pixels 23, 21 and a second vector v (v1, v2, v3) defined from the pair of pixels 23, 24.
  • The plane is then the set of points M(x, y, z) for which there exist two scalars λ and μ such that: x = x23 + λ * u1 + μ * v1, y = y23 + λ * u2 + μ * v2 and z = z23 + λ * u3 + μ * v3.
  • Figure 3 shows diagrammatically a hardware embodiment of a device 3 suitable for the interpolation of a first pixel matrix to a second pixel matrix.
  • The device 3 corresponds for example to a personal computer, a laptop, a set-top box or an image processing module embedded in a display device.
  • The device 3 comprises the following elements, connected to each other by an address and data bus 35 which also transports a clock signal:
  • a microprocessor 31 (or CPU);
  • a graphics card 32 comprising several graphics processing units (GPUs) 320 and a graphical random access memory (GRAM) 321;
  • a read-only memory (ROM) 36;
  • a random access memory (RAM) 37;
  • I/O devices 34 such as for example a keyboard, a mouse or a webcam;
  • a power supply 38.
  • The device 3 also comprises a display device 33 of display screen type directly connected to the graphics card 32 to display notably the rendering of images associated with the first pixel matrix or with the second pixel matrix.
  • A dedicated bus connecting the display device 33 to the graphics card 32 offers the advantage of much greater data transmission bitrates, thus reducing the latency time for the display of images composed by the graphics card.
  • According to a variant, an apparatus for displaying images is external to the device 3 and is connected to the device 3 by a cable transmitting the display signals.
  • The device 3, for example the graphics card 32, comprises a means for transmission or a connector (not shown in figure 3) suitable for transmitting a display signal to an external display means such as for example an LCD or plasma screen or a video projector.
  • The term "register" used in the description of memories 321, 36 and 37 designates, in each of the memories mentioned, a memory zone of low capacity (some binary data) as well as a memory zone of large capacity (enabling storage of a whole program or of all or part of the data representative of data calculated or to be displayed).
  • When switched on, the microprocessor 31 loads and executes the instructions of the program contained in the RAM 37.
  • The random access memory 37 notably comprises:
  • - parameters 371 representative of the pixels of the first matrix, for example the coordinates of the pixels or the first values associated with the pixels.
  • The algorithms implementing the steps of the method specific to the invention and described hereafter are stored in the GRAM 321 of the graphics card 32 associated with the device 3 implementing these steps.
  • The graphics processors 320 of the graphics card 32 load these parameters into the GRAM 321 and execute the instructions of these algorithms in the form of microprograms of "shader" type, using HLSL ("High Level Shader Language") or GLSL ("OpenGL Shading Language") for example.
  • the random access memory GRAM 321 notably comprises:
  • the values and parameters 3210 to 3213 are stored in the RAM 37 and processed by the microprocessor 31.
  • a part of the RAM 37 is assigned by the CPU 31 for storage of the values and parameters 3210 to 3213 if the memory storage space available in the GRAM 321 is insufficient.
  • This variant however causes a greater latency time in the calculations necessary for the interpolation from microprograms contained in the GPUs, as the data must be transmitted from the graphics card to the random access memory 37 via the bus 35, whose transmission capacities are generally lower than those available within the graphics card for transferring data between the GPUs and the GRAM, and vice-versa.
  • the power supply 38 is external to the device 3.
  • Figure 4 shows a method for interpolating a first pixel matrix to a second pixel matrix implemented in a device 3, according to a non-restrictive particularly advantageous embodiment of the invention.
  • the different parameters of the device 3 are updated.
  • the equation of the plane passing through three pixels of the first pixel matrix is determined, the first values associated with these three pixels being known, either by direct measurement or by estimation from other values.
  • the first values are for example received from sensors suitable for measuring them or determined from other values (for example, if the first values correspond to values representative of disparity, these first values can be determined from the grey levels associated with each pixel of two images representing a same scene according to two different viewpoints).
  • the equation of the plane is determined from the coordinates of the three pixels in the first matrix and the first values associated with these pixels.
  • the equation of the plane is determined from more than three pixels, for example 4 pixels, when the first values associated with these pixels are relatively uniform, that is to say when the variations between them are small, i.e. less than a threshold value, for example the threshold values described with regard to figure 2.
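As an illustration only (not part of the patent text), the plane through three pixels with known first values can be recovered by solving a small linear system; the function name and the parameterisation v = a·x + b·y + c are assumptions of this sketch:

```python
import numpy as np

def plane_through_pixels(p0, p1, p2):
    """Coefficients (a, b, c) of the plane v = a*x + b*y + c passing
    through three pixels given as (x, y, first value) triples."""
    # one row per pixel: a*x + b*y + c = v
    coeffs = np.array([[p0[0], p0[1], 1.0],
                       [p1[0], p1[1], 1.0],
                       [p2[0], p2[1], 1.0]])
    values = np.array([p0[2], p1[2], p2[2]])
    a, b, c = np.linalg.solve(coeffs, values)
    return a, b, c

# three adjacent pixels distributed over two adjacent rows and two adjacent columns
a, b, c = plane_through_pixels((0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0))
```

The three pixels must not be collinear in (x, y); for adjacent pixels spread over two rows and two columns, as described above, this is always satisfied.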
  • the second pixel matrix comprises a number of pixels greater than the number of pixels of the first matrix, the second matrix corresponding to an upsampling of the first matrix.
  • the upsampling of the first matrix is obtained from a horizontal upsampling factor and a vertical upsampling factor.
  • the horizontal upsampling factor is different from the vertical upsampling factor.
  • the horizontal upsampling factor is equal to the vertical upsampling factor.
  • the three pixels used are adjacent and distributed over two adjacent columns and two adjacent rows.
  • the first values are advantageously representative of disparity. According to a variant, the first values are representative of grey levels.
  • second values to be associated with the interpolated pixels of the second pixel matrix are determined from the equation of the plane determined in the previous step and the coordinates of the interpolated pixels in the second matrix.
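A second value then follows directly from the plane equation and the coordinates of the interpolated pixel. A minimal sketch, reusing the hypothetical parameterisation v = a·x + b·y + c from above:

```python
def second_values(a, b, c, coords):
    """Evaluate the plane v = a*x + b*y + c at the coordinates of the
    interpolated pixels of the second matrix."""
    return [a * x + b * y + c for (x, y) in coords]

# plane coefficients and interpolated-pixel coordinates chosen for illustration
vals = second_values(2.0, 4.0, 10.0, [(0.5, 0.0), (0.0, 0.5), (0.5, 0.5)])
```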
  • the interpolated pixels of the second matrix correspond advantageously to the pixels of the second matrix surrounded by the pixels of the second matrix corresponding to the pixels of the first matrix.
  • the values associated with the pixels of the second matrix corresponding to the pixels of the first matrix are equal to the first values associated with the pixels of the first matrix, these first values being copied in order to be associated with the corresponding pixels of the second matrix. In other words, the pixels of the first matrix are copied into the second matrix, only the coordinates being different between the first matrix and the second matrix for these pixels.
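The copy step above can be sketched as follows; the size of the second matrix ((h−1)·fv+1 rows by (w−1)·fh+1 columns) is one possible convention and an assumption of this sketch, not the patent's definition:

```python
import numpy as np

def copy_into_second_matrix(first, fh, fv):
    """Copy the pixels of the first matrix into a larger second matrix
    for horizontal/vertical upsampling factors fh, fv; positions of the
    pixels still to be interpolated are initialised to NaN."""
    h, w = first.shape
    second = np.full(((h - 1) * fv + 1, (w - 1) * fh + 1), np.nan)
    # same first values, new coordinates (x*fh, y*fv)
    second[::fv, ::fh] = first
    return second

first = np.array([[10.0, 12.0],
                  [14.0, 16.0]])
second = copy_into_second_matrix(first, 2, 2)  # 3x3 second matrix
```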
  • the interpolated pixels are positioned in the second matrix outside the polygon having as vertices the copies of the pixels of the first matrix in the second matrix.
  • the polygon corresponds for example to a triangle when the pixels used to determine the equation of the plane are three in number and to a quadrilateral when the pixels used to determine the equation of the plane are four in number.
  • the first matrix comprises one or more pixels for which the associated first values are unknown or indeterminate.
  • the first matrix comprises at least one pair of pixels having a difference between their associated first values greater than a predetermined threshold value.
  • the parity of the horizontal upsampling factor and the parity of the vertical upsampling factor are determined. According to the result of this determination, it is determined which interpolation rule must be applied to the interpolated pixels belonging to the middle column of the second matrix when the horizontal upsampling factor is even, and to the interpolated pixels belonging to the middle row of the second matrix when the vertical upsampling factor is even.
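The parity test can be read as follows: with an odd factor each upsampled cell has a unique centre sample, while with an even factor the centre falls between two samples, so a rule fixed beforehand must decide how that middle row or column is treated. A sketch of the distinction (the function and its return convention are illustrative assumptions; the patent's two interpolation rules themselves are not reproduced here):

```python
def middle_offsets(factor):
    """Offsets, within one upsampled cell of size `factor`, of the
    'middle' interpolated positions along one axis."""
    if factor % 2 == 0:
        # even factor: no single centre sample exists; an interpolation
        # rule chosen beforehand decides between the two candidates
        return (factor // 2 - 1, factor // 2)
    # odd factor: a unique middle position exists
    return (factor // 2,)
```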
  • the choice of the rule to be applied is advantageously made from among a plurality of interpolation rules, for example from among the following two interpolation rules:
  • the decision to use one or other of the rules for the interpolated pixels of the middle column and/or the middle row of the second pixel matrix when the horizontal and/or vertical upsampling factor is even corresponds to an arbitrary implementation choice and is determined beforehand.
  • the invention is not limited to an interpolation method but extends to the processing unit implementing the interpolation method.
  • the invention also relates to an image processing method implementing the upsampling of a source image in order to generate an upsampled image from the source image.
  • a device or apparatus implementing the interpolation method described takes, for example, the form of hardware components, programmable or not, or the form of one or more processors (advantageously of GPU type, but also of CPU or ARM type according to variants).
  • the methods described are implemented for example in an apparatus comprising at least one processor, which refers to processing devices in general, comprising for example a computer, a microprocessor, an integrated circuit or a programmable software device.
  • Processors also comprise communication devices, such as for example computers, mobile or cellular telephones, smartphones, portable/personal digital assistants (PDAs), digital tablets or any other device enabling the communication of information between users.
  • the embodiments of the various processes and various characteristics described previously can be implemented in various equipment or applications, for example notably in an item of equipment or applications associated with the coding of data, the decoding of data, the generation of views or images, the processing of texture, and any other processing of images or of information representative of texture and/or information representative of depth.
  • Examples of such an item of equipment are an encoder, a decoder, a post-processor processing the outputs of a decoder, a preprocessor supplying inputs to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a mobile telephone, a PDA, a digital tablet and any other communication device.
  • the item of equipment can be mobile or on board a mobile vehicle.
  • the methods described can be implemented in the form of instructions executed by one or more processors, and such instructions can be stored on a medium that can be read by a processor or computer, such as for example an integrated circuit, any storage device such as a hard disc, an optical disc (CD or DVD), a random access memory (RAM) or a non-volatile memory (ROM).
  • the instructions form for example an application program stored in a processor-readable medium.
  • the instructions take for example the form of hardware, firmware or software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Television Systems (AREA)
PCT/EP2014/051051 2013-01-24 2014-01-20 Interpolation method and corresponding device Ceased WO2014114601A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/762,462 US9652826B2 (en) 2013-01-24 2014-01-20 Interpolation method and corresponding device
JP2015554119A JP2016511962A (ja) 2013-01-24 2014-01-20 内挿方法および対応する装置
KR1020157019890A KR20150110541A (ko) 2013-01-24 2014-01-20 보간 방법 및 대응 디바이스
EP14700931.0A EP2948921B1 (en) 2013-01-24 2014-01-20 Interpolation method and corresponding device
CN201480006020.8A CN104969258A (zh) 2013-01-24 2014-01-20 插值方法和对应设备

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1350627A FR3001318A1 (fr) 2013-01-24 2013-01-24 Procede d’interpolation et dispositif correspondant
FR1350627 2013-01-24

Publications (1)

Publication Number Publication Date
WO2014114601A1 true WO2014114601A1 (en) 2014-07-31

Family

ID=48741279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/051051 Ceased WO2014114601A1 (en) 2013-01-24 2014-01-20 Interpolation method and corresponding device

Country Status (7)

Country Link
US (1) US9652826B2 (en)
EP (1) EP2948921B1 (en)
JP (1) JP2016511962A (ja)
KR (1) KR20150110541A (ko)
CN (1) CN104969258A (zh)
FR (1) FR3001318A1 (fr)
WO (1) WO2014114601A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102136850B1 (ko) * 2013-11-12 2020-07-22 삼성전자 주식회사 깊이 센서, 및 이의 동작 방법
US9892496B2 (en) * 2015-11-05 2018-02-13 Google Llc Edge-aware bilateral image processing
CN110363723B (zh) * 2019-07-16 2021-06-29 安健科技(广东)有限公司 改进图像边界效果的图像处理方法及装置
CN114519967B (zh) * 2022-02-21 2024-04-16 北京京东方显示技术有限公司 源驱动装置及其控制方法、显示系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990016035A2 (en) * 1989-06-16 1990-12-27 Eastman Kodak Company Digital image interpolator
EP0574245A2 (en) * 1992-06-11 1993-12-15 International Business Machines Corporation Method and apparatus for variable expansion and variable shrinkage of an image
EP0696017A2 (en) * 1994-08-05 1996-02-07 Hewlett-Packard Company Binary image scaling
EP0706154A1 (en) * 1993-10-04 1996-04-10 Xerox Corporation Improved image interpolation apparatus

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2795119B2 (ja) * 1993-02-03 1998-09-10 日本ビクター株式会社 多次元画像圧縮伸張方式
EP0645736B1 (en) * 1993-09-27 2003-02-05 Canon Kabushiki Kaisha Image processing apparatus
US6078331A (en) * 1996-09-30 2000-06-20 Silicon Graphics, Inc. Method and system for efficiently drawing subdivision surfaces for 3D graphics
CN101443810B (zh) 2006-05-09 2013-01-16 皇家飞利浦电子股份有限公司 向上尺度变换
US20080055338A1 (en) * 2006-08-30 2008-03-06 Ati Technologies Inc. Multi-stage edge-directed image scaling
US20110032269A1 (en) 2009-08-05 2011-02-10 Rastislav Lukac Automatically Resizing Demosaicked Full-Color Images Using Edge-Orientation Maps Formed In The Demosaicking Process
TWI423164B (zh) * 2010-05-07 2014-01-11 Silicon Motion Inc 用來產生一高品質放大影像之方法及相關裝置
CN102402781B (zh) * 2010-09-13 2014-05-14 慧荣科技股份有限公司 用来产生一高品质放大图像的方法

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTHONY A. TANBAKUCHI ET AL: "Adaptive pixel defect correction", PROCEEDINGS OF SPIE, US, 14 May 2003 (2003-05-14), pages 360 - 370, XP055041179 *
DIERICKX B ET AL: "Missing pixel correction algorithm for image sensors", PROCEEDINGS OF SPIE - INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, US, vol. 3410, 19 May 1998 (1998-05-19), pages 200 - 203, XP002319253 *
SMITH P R: "BILINEAR INTERPOLATION OF DIGITAL IMAGES", ULTRAMICROSCOPY, ELSEVIER, NL, vol. 6, no. 2, 1 January 1981 (1981-01-01), pages 201 - 204, XP000560028 *

Also Published As

Publication number Publication date
US20150356709A1 (en) 2015-12-10
EP2948921B1 (en) 2018-01-10
JP2016511962A (ja) 2016-04-21
FR3001318A1 (fr) 2014-07-25
KR20150110541A (ko) 2015-10-02
CN104969258A (zh) 2015-10-07
EP2948921A1 (en) 2015-12-02
US9652826B2 (en) 2017-05-16

Similar Documents

Publication Publication Date Title
US10964066B2 (en) Method and apparatus for encoding a point cloud representing three-dimensional objects
US9159135B2 (en) Systems, methods, and computer program products for low-latency warping of a depth map
US9747720B2 (en) Method and device for processing a geometry image of a 3D scene
JP2016012354A (ja) 適応的レンダリングを行う方法及びグラフィックスシステム
US20130162625A1 (en) Displayed Image Improvement
US10404970B2 (en) Disparity search range compression
CN112017228B (zh) 一种对物体三维重建的方法及相关设备
JP2015522988A (ja) 連続座標系を活用する動き補償および動き予測
US10074211B2 (en) Method and device for establishing the frontier between objects of a scene in a depth map
US9652826B2 (en) Interpolation method and corresponding device
US8891906B2 (en) Pixel-adaptive interpolation algorithm for image upscaling
US20140354632A1 (en) Method for multi-view mesh texturing and corresponding device
EP2908527A1 (en) Device, program, and method for reducing data size of multiple images containing similar information
US9998723B2 (en) Filling disparity holes based on resolution decoupling
US20160314615A1 (en) Graphic processing device and method for processing graphic images
WO2016057908A1 (en) Hybrid block based compression
CN117475064A (zh) 材质贴图的生成方法、系统及电子设备
KR20200145669A (ko) 모션 기반의 적응형 렌더링
CN118053092A (zh) 视频处理方法和装置、芯片、存储介质及电子设备
EP3065404A1 (en) Method and device for computing motion vectors associated with pixels of an image
WO2016163020A1 (ja) フレーム補間装置、フレーム補間方法及びフレーム補間プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14700931

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2014700931

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20157019890

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 14762462

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2015554119

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE