CN108307170B - Stereoscopic image repositioning method - Google Patents

Stereoscopic image repositioning method

Info

Publication number
CN108307170B
CN108307170B CN201711399351.4A
Authority
CN
China
Prior art keywords
quadrilateral
grid
coordinate position
mesh
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711399351.4A
Other languages
Chinese (zh)
Other versions
CN108307170A (en)
Inventor
邵枫
沈力波
李福翠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Haijiang Aerospace Technology Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201711399351.4A priority Critical patent/CN108307170B/en
Publication of CN108307170A publication Critical patent/CN108307170A/en
Application granted granted Critical
Publication of CN108307170B publication Critical patent/CN108307170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a stereoscopic image repositioning method. The method extracts the image quality energy, stereoscopic quality energy and important content energy corresponding to the left viewpoint image, and obtains the optimal similarity transformation matrices and depth-value set by minimizing the total energy of the left viewpoint image, so that the resulting repositioned stereoscopic image better retains important salient semantic information, maintains visual comfort, and allows the scaling of the important content to be controlled adaptively according to the user's selection. At the same time, the horizontal coordinate positions, vertical coordinate positions and depth values of the stereoscopic image are adjusted jointly, so that the important salient information of the repositioned left viewpoint image is retained while the repositioned left viewpoint image remains matched with the repositioned right viewpoint image obtained from the repositioned left disparity image, thereby guaranteeing the comfort and depth perception of the repositioned stereoscopic image.

Description

Three-dimensional image repositioning method
Technical Field
The present invention relates to a method for processing image signals, and more particularly, to a method for repositioning stereoscopic images.
Background
With the rapid development of the stereoscopic display technology, various terminal devices with different stereoscopic display functions are widely available, but because the stereoscopic display terminals are various and have different width/height ratio specifications, if an image with a certain width/height ratio is displayed on different stereoscopic display terminals, the image size must be adjusted first to achieve the stereoscopic display effect. Conventional image scaling methods scale by cropping or by a fixed scale, which may result in reduced content in the image or significant object deformation.
For a stereoscopic image, stretching or shrinking in the horizontal or vertical direction can seriously affect the stereoscopic effect: it changes the binocular parallax and hence the perceived stereoscopic depth, and in severe cases causes visual discomfort. Therefore, how to scale the left viewpoint image and the right viewpoint image of a stereoscopic image while reducing image deformation, how to keep the parallax/depth distributions of the scaled left and right viewpoint images consistent so as to reduce visual discomfort and enhance depth perception, and how to adaptively control the scaling of an object according to the user's selection so as to highlight salient content, are problems that need to be studied and solved in stereoscopic image repositioning.
Disclosure of Invention
The invention aims to provide a three-dimensional image repositioning method which accords with remarkable semantic features and can effectively adjust the size of a three-dimensional image.
The technical scheme adopted by the invention for solving the technical problems is as follows: a stereoscopic image repositioning method, characterized by comprising the steps of:
① The left viewpoint image, right viewpoint image and left disparity image of a stereoscopic image of width W and height H to be processed are denoted {L(x, y)}, {R(x, y)} and {dL(x, y)}; where 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are both evenly divisible by 8, L(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {dL(x, y)};
② Divide {L(x, y)} into non-overlapping quadrilateral meshes of size 8 × 8; then let all quadrilateral meshes in {L(x, y)} form a set, denoted VL, VL = {UL,k | 1 ≤ k ≤ M}; where UL,k denotes the k-th quadrilateral mesh in {L(x, y)}, described by the set of its 4 mesh vertices (upper-left, lower-left, upper-right and lower-right), k is a positive integer, 1 ≤ k ≤ M, M denotes the total number of quadrilateral meshes contained in {L(x, y)}, the upper-left mesh vertex of UL,k is taken as its 1st mesh vertex, the lower-left as its 2nd, the upper-right as its 3rd and the lower-right as its 4th, and each mesh vertex is described by its horizontal coordinate position and its vertical coordinate position;
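As an illustrative aid only (not part of the claimed method), the following sketch shows one way step ② can be realised: partitioning the left viewpoint image into non-overlapping 8 × 8 quadrilateral meshes and collecting, for each mesh UL,k, its four vertex coordinates in upper-left, lower-left, upper-right, lower-right order. All names in the sketch are the author of this example's own.

```python
# Sketch of step 2: divide {L(x, y)} into non-overlapping 8x8 quadrilateral meshes.
import numpy as np

def build_quad_meshes(width, height, cell=8):
    assert width % cell == 0 and height % cell == 0, "W and H must be divisible by 8"
    meshes = []
    for y0 in range(0, height, cell):
        for x0 in range(0, width, cell):
            x1, y1 = x0 + cell, y0 + cell
            # vertex order: upper-left, lower-left, upper-right, lower-right
            meshes.append(np.array([[x0, y0], [x0, y1], [x1, y0], [x1, y1]], dtype=float))
    return meshes  # M = (W/8) * (H/8) meshes

meshes = build_quad_meshes(640, 480)
print(len(meshes))  # 4800 meshes for a 640x480 image
```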
③ Calculate the respective depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of each quadrilateral mesh in {L(x, y)} from their disparity values; then let the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all quadrilateral meshes in {L(x, y)} form a set, denoted ZL, ZL = {zL,k | 1 ≤ k ≤ M}; where e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, Wd denotes the horizontal width of the display, R denotes the horizontal resolution of the display, and zL,k is the set formed by the depth values of the 4 mesh vertices of UL,k;
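For orientation only, the sketch below converts a vertex disparity into a perceived depth from the viewing parameters e, D, Wd and R. The patent states its own conversion formula as an equation image that is not recoverable from this text, so the geometric model used here (Z = e·D / (e − d·Wd/R)) is an assumption, not the claimed formula.

```python
# Sketch of step 3 (assumed depth model): disparity in pixels -> perceived depth.
def vertex_depth(d_pixels, e=65.0, D=1200.0, W_d=750.0, R=1920.0):
    p = d_pixels * W_d / R   # on-screen parallax in millimetres (assumption)
    return e * D / (e - p)   # perceived depth in millimetres (assumption)

print(vertex_depth(10.0))
```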
④ Extract the saliency map of {L(x, y)} using a graph-based visual saliency model, denoted {SML(x, y)}; then obtain the visual saliency map of {L(x, y)} from {SML(x, y)} and {dL(x, y)}, denoted {SL(x, y)}, and record the pixel value of the pixel point with coordinate position (x, y) in {SL(x, y)} as SL(x, y); where SML(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {SML(x, y)}, and SL(x, y) is obtained as the weighted combination of SML(x, y) and dL(x, y), each term carrying its own weight;
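The sketch below illustrates the weighted fusion of the GBVS saliency map and the disparity map described in step ④. The weight values and any normalisation are not recoverable from the extracted text, so the equal weights and min-max normalisation here are assumptions.

```python
# Sketch of step 4: fuse GBVS saliency SM_L with disparity d_L into S_L.
import numpy as np

def fuse_saliency(sm_l, d_l, w_sm=0.5, w_d=0.5):
    # min-max normalise each map before weighting (assumption)
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    return w_sm * norm(sm_l) + w_d * norm(d_l)
```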
⑤ Denote the set of all target quadrilateral meshes of {L(x, y)} as the target mesh set, and denote the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all target quadrilateral meshes of {L(x, y)} as the target depth set; then, according to the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, perform a similarity transformation on each quadrilateral mesh in {L(x, y)} such that the transformation error between the mesh obtained after the similarity transformation of the original quadrilateral mesh and its target quadrilateral mesh is minimized, obtain the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, and record the similarity transformation matrix of the target quadrilateral mesh corresponding to UL,k; where the upper-left, lower-left, upper-right and lower-right mesh vertices of the target quadrilateral mesh are taken as its 1st, 2nd, 3rd and 4th mesh vertices, each target mesh vertex (i = 1, 2, 3, 4) has a corresponding depth value and is described by its horizontal coordinate position and vertical coordinate position, (AL,k)T is the transpose of AL,k, and ((AL,k)TAL,k)-1 is the inverse of (AL,k)TAL,k;
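For illustration, the sketch below fits a 2-D similarity transform (scale/rotation parameters a, b and translation tx, ty) that maps the four vertices of UL,k onto the four vertices of its target mesh, mirroring the (AL,k)T AL,k inverse form named in step ⑤. The exact layout of AL,k in the patent is an equation image, so this parameterisation is an assumption.

```python
# Sketch of step 5: least-squares fit of a similarity transform from 4 vertex pairs.
import numpy as np

def fit_similarity(src, dst):
    # src, dst: 4x2 arrays of mesh-vertex coordinates (x, y)
    rows = []
    for x, y in src:
        rows.append([x, -y, 1.0, 0.0])   # x' = a*x - b*y + tx
        rows.append([y,  x, 0.0, 1.0])   # y' = b*x + a*y + ty
    A = np.array(rows)                   # 8x4 design matrix (stands in for A_{L,k})
    q = dst.reshape(-1)                  # stacked target coordinates
    params = np.linalg.inv(A.T @ A) @ A.T @ q   # (A^T A)^{-1} A^T q
    a, b, tx, ty = params
    return np.array([[a, -b, tx], [b, a, ty]])  # 2x3 similarity transform
```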
⑥ According to the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, combined with {SL(x, y)}, calculate the image quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EQ;
According to the depth value of each mesh vertex of each quadrilateral mesh in {L(x, y)} and the depth value of each mesh vertex of the corresponding target quadrilateral mesh, calculate the stereoscopic quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ES;
According to the size scaling ratio and depth scaling ratio of the important content selected by the user, calculate the important content energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EI;
⑦ Calculate the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted Etotal, Etotal = EQ + λS × ES + λI × EI; then solve min(Etotal) by least-squares optimization to obtain the set of optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all optimal target quadrilateral meshes; then calculate the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k, and calculate the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the depth transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k; where λS and λI are both weighting parameters, min() is the minimum-taking function, (BL,k)T is the transpose of BL,k, ((BL,k)TBL,k)-1 is the inverse of (BL,k)TBL,k, and the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of the optimal target quadrilateral mesh are denoted correspondingly;
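Purely as an illustration of the optimisation skeleton in step ⑦, the sketch below assembles Etotal = EQ + λS·ES + λI·EI from residual vectors and hands it to a least-squares solver. The individual energy terms are stand-ins; their exact quadratic forms appear only as equation images in the patent.

```python
# Sketch of step 7: least-squares minimisation of the total energy.
import numpy as np
from scipy.optimize import least_squares

def total_energy_residuals(vars_, e_q_res, e_s_res, e_i_res, lam_s=1.5, lam_i=1.25):
    # each *_res callable returns a residual vector; the sum of squared norms is E_total
    return np.concatenate([
        e_q_res(vars_),
        np.sqrt(lam_s) * e_s_res(vars_),
        np.sqrt(lam_i) * e_i_res(vars_),
    ])

# result = least_squares(total_energy_residuals, x0, args=(e_q_res, e_s_res, e_i_res))
# vars_ stacks the target mesh vertex coordinates and depths being optimised.
```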
⑧ According to the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculate the horizontal coordinate position and vertical coordinate position of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the similarity transformation matrix, recording the transformed horizontal and vertical coordinate positions of the pixel point in UL,k whose horizontal coordinate position is x'L,k and whose vertical coordinate position is y'L,k; then obtain the repositioned left viewpoint image from the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {L(x, y)}; where 1 ≤ x'L,k ≤ W, 1 ≤ y'L,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, H is also the height of the repositioned stereoscopic image, and the repositioned left viewpoint image is described by the pixel value of the pixel point with coordinate position (x', y');
And according to the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculate the depth value of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the depth transformation matrix, recording the transformed depth value of the pixel point in UL,k whose horizontal coordinate position is x'L,k, whose vertical coordinate position is y'L,k and whose depth value is z'L,k; then obtain the repositioned left viewpoint depth map from the transformed depth value of each pixel point in each quadrilateral mesh in {L(x, y)}; then obtain the repositioned left disparity image from the repositioned left viewpoint depth map, and record the pixel value of the pixel point with coordinate position (x', y') in the repositioned left disparity image; where B'L,k = [z'L,k 1];
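The sketch below illustrates step ⑧: applying the optimal per-mesh similarity transform to every pixel of a mesh to place it in the repositioned left viewpoint image, and the per-mesh depth transform to produce the repositioned depth values. Rounding and hole handling are simplifications not specified at this level of detail in the text.

```python
# Sketch of step 8: forward-warp the pixels of one 8x8 mesh with its optimal transforms.
import numpy as np

def warp_mesh_pixels(left, depth, mesh_xy, T_sim, T_depth, out_img, out_depth):
    x0, y0 = mesh_xy                          # top-left corner of the 8x8 mesh
    for y in range(y0, y0 + 8):
        for x in range(x0, x0 + 8):
            xp, yp = T_sim @ np.array([x, y, 1.0])       # 2x3 similarity transform
            xi, yi = int(round(xp)), int(round(yp))
            if 0 <= yi < out_img.shape[0] and 0 <= xi < out_img.shape[1]:
                out_img[yi, xi] = left[y, x]
                # T_depth is a length-2 depth transform acting on [z, 1]
                out_depth[yi, xi] = T_depth @ np.array([depth[y, x], 1.0])
```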
⑨ Obtain the repositioned right viewpoint image from the repositioned left viewpoint image and the repositioned left disparity image, and record the pixel value of the pixel point with coordinate position (x', y') in the repositioned right viewpoint image; then let the repositioned left viewpoint image and the repositioned right viewpoint image form the repositioned stereoscopic image; where 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, and H is also the height of the repositioned stereoscopic image.
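For illustration, the sketch below synthesises the repositioned right viewpoint image by shifting each repositioned left-view pixel by its repositioned disparity, a standard depth-image-based-rendering style mapping. The patent's exact mapping is given as an equation image, so this direction and sign convention are assumptions.

```python
# Sketch of step 9 (assumed mapping): right view from relocated left view + disparity.
import numpy as np

def render_right_view(left_rel, disp_rel):
    h, w = disp_rel.shape
    right = np.zeros_like(left_rel)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_rel[y, x]))   # shift by the left disparity
            if 0 <= xr < w:
                right[y, xr] = left_rel[y, x]
    return right
```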
The calculation process of EQ in step ⑥ is as follows:
⑥_1a. Calculate the shape preservation energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESD; where SL(k) denotes the mean of the visual saliency values of all pixel points in UL,k, i.e. the mean of the pixel values of all pixel points in the region of {SL(x, y)} corresponding to UL,k, and the symbol "|| ||" is the Euclidean-distance symbol;
Also calculate the boundary bending energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ELB; where eL,k denotes the matrix formed by the edges of all mesh vertices of UL,k, (eL,k)T is the transpose of eL,k, and ((eL,k)TeL,k)-1 is the inverse of (eL,k)TeL,k;
⑥_2a. According to ESD and ELB, calculate the image quality energy EQ of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, EQ = ESD + λLB ELB; where λLB is a weighting parameter.
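As a rough illustration of step ⑥_2a, the sketch below combines a saliency-weighted shape distortion term with a boundary bending term, EQ = ESD + λLB·ELB. The per-mesh distortion quantities are simplified stand-ins for the equation images in the patent.

```python
# Sketch of step 6_2a: image quality energy E_Q = E_SD + lambda_LB * E_LB.
def image_quality_energy(shape_dists, bend_dists, mesh_saliency, lam_lb=1.25):
    e_sd = sum(s * d for s, d in zip(mesh_saliency, shape_dists))  # saliency-weighted shape term
    e_lb = sum(bend_dists)                                         # boundary bending term
    return e_sd + lam_lb * e_lb
```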
The calculation process of ES in step ⑥ is as follows:
⑥_1b. Calculate the shape scaling energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESC; where the symbol "|| ||" is the Euclidean-distance symbol, the terms involved are the matrix of edges of all mesh vertices of the target quadrilateral mesh, the depth value of the i-th mesh vertex of UL,k, the depth value of the corresponding target mesh vertex, and eL,k, the matrix of edges of all mesh vertices of UL,k;
Also calculate the depth control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EDC; where exp() denotes the exponential function with the natural base e, the symbol "| |" is the absolute-value symbol, zmax denotes the maximum depth value of {L(x, y)}, zmin denotes the minimum depth value of {L(x, y)}, CVZmin denotes the minimum comfortable viewing zone range, e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, η1 denotes the minimum comfortable viewing angle, CVZmax denotes the maximum comfortable viewing zone range, and η2 denotes the maximum comfortable viewing angle;
⑥_2b. According to ESC and EDC, calculate the stereoscopic quality energy ES of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, ES = ESC + λDC EDC; where λDC is a weighting parameter.
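As a rough illustration of step ⑥_2b, the sketch below combines a depth-aware shape scaling term with a comfort term that penalises depths falling outside the comfortable viewing zone [CVZmin, CVZmax], ES = ESC + λDC·EDC. The exponential penalty form is suggested by the exp() in the text, but its exact argument is an assumption.

```python
# Sketch of step 6_2b: stereoscopic quality energy E_S = E_SC + lambda_DC * E_DC.
import math

def stereo_quality_energy(scale_dists, depths, cvz_min, cvz_max, lam_dc=0.25):
    e_sc = sum(scale_dists)                     # shape scaling term (stand-in)
    # exponential penalty, zero inside the comfortable viewing zone (assumption)
    e_dc = sum(math.exp(abs(z - min(max(z, cvz_min), cvz_max))) - 1.0 for z in depths)
    return e_sc + lam_dc * e_dc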
The calculation process of EI in step ⑥ is as follows: EI is computed over the rectangular region in which the important content selected by the user is located; where xi,j denotes the horizontal coordinate position of the mesh vertex of {L(x, y)} that is j-th in the horizontal direction and i-th in the vertical direction, xi,j+1 denotes the horizontal coordinate position of the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction, zi,j denotes the depth value of the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction, the corresponding primed quantities denote the horizontal coordinate positions and depth values of the corresponding vertices in the target quadrilateral mesh, s'x denotes the user-specified horizontal scaling factor, s'z denotes the user-specified depth scaling factor, and λDS is a weighting parameter.
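The sketch below illustrates the intent of EI: driving mesh edges and vertex depths inside the user-selected rectangle towards the user-specified horizontal scale s'x and depth scale s'z. Edge lengths stand in for the xi,j+1 − xi,j differences; the exact quadratic form is an equation image in the patent.

```python
# Sketch of the important content energy E_I over the user-selected rectangle.
def important_content_energy(edges, target_edges, depths, target_depths,
                             s_x=1.0, s_z=1.0, lam_ds=0.025):
    e_edge = sum((te - s_x * e) ** 2 for e, te in zip(edges, target_edges))
    e_depth = sum((tz - s_z * z) ** 2 for z, tz in zip(depths, target_depths))
    return e_edge + lam_ds * e_depth
```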
Compared with the prior art, the invention has the advantages that:
1) The method extracts the image quality energy, stereoscopic quality energy and important content energy corresponding to the left viewpoint image, and obtains the optimal similarity transformation matrices and depth-value set by optimization, so that the resulting repositioned stereoscopic image better retains important salient semantic information and maintains visual comfort, while the scaling ratio of the important content can be controlled adaptively according to the user's selection.
2) The method adjusts the horizontal coordinate positions, vertical coordinate positions and depth values of the stereoscopic image simultaneously, thereby retaining the important salient information of the repositioned left viewpoint image while ensuring that the repositioned left viewpoint image is matched with the repositioned right viewpoint image obtained from the repositioned left disparity image, which in turn guarantees the comfort and depth perception of the repositioned stereoscopic image.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is a "red/green" view of the original stereo Image of "Image 1";
FIG. 2b is a "red/green" view of "Image 1" repositioned to 60% of the width of the original stereo Image;
FIG. 3a is a "red/green" view of the original stereo Image of "Image 2";
FIG. 3b is a "red/green" view of "Image 2" repositioned to 60% of the width of the original stereo Image;
FIG. 4a is a "red/green" view of the original stereo Image of "Image 3";
FIG. 4b is a "red/green" view of "Image 3" repositioned to 60% of the width of the original stereoscopic Image;
FIG. 5a is a "red/green" view of the original stereo Image of "Image 4";
FIG. 5b is a "red/green" view of "Image 4" repositioned to 60% of the width of the original stereo Image.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The general implementation block diagram of the stereo image repositioning method provided by the invention is shown in fig. 1, and the method comprises the following steps:
① The left viewpoint image, right viewpoint image and left disparity image of a stereoscopic image of width W and height H to be processed are denoted {L(x, y)}, {R(x, y)} and {dL(x, y)}; where 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are both evenly divisible by 8, L(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {dL(x, y)}.
② Divide {L(x, y)} into non-overlapping quadrilateral meshes of size 8 × 8; then let all quadrilateral meshes in {L(x, y)} form a set, denoted VL, VL = {UL,k | 1 ≤ k ≤ M}; where UL,k denotes the k-th quadrilateral mesh in {L(x, y)}, described by the set of its 4 mesh vertices (upper-left, lower-left, upper-right and lower-right), k is a positive integer, 1 ≤ k ≤ M, M denotes the total number of quadrilateral meshes contained in {L(x, y)}, the upper-left mesh vertex of UL,k is taken as its 1st mesh vertex, the lower-left as its 2nd, the upper-right as its 3rd and the lower-right as its 4th, and each mesh vertex is described by its horizontal coordinate position and its vertical coordinate position.
③ Calculate the respective depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of each quadrilateral mesh in {L(x, y)} from their disparity values, i.e. from the pixel values of {dL(x, y)} at the corresponding coordinate positions; then let the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all quadrilateral meshes in {L(x, y)} form a set, denoted ZL, ZL = {zL,k | 1 ≤ k ≤ M}; where e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, Wd denotes the horizontal width of the display, and R denotes the horizontal resolution of the display; in this embodiment, e = 65 mm, D = 1200 mm, Wd = 750 mm and R = 1920 are taken; zL,k is the set formed by the depth values of the 4 mesh vertices of UL,k.
④ Extract the saliency map of {L(x, y)} using the existing Graph-Based Visual Saliency (GBVS) model, denoted {SML(x, y)}; then obtain the visual saliency map of {L(x, y)} from {SML(x, y)} and {dL(x, y)}, denoted {SL(x, y)}, and record the pixel value of the pixel point with coordinate position (x, y) in {SL(x, y)} as SL(x, y); where SML(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {SML(x, y)}, and SL(x, y) is obtained as the weighted combination of SML(x, y) and dL(x, y), with the weights of SML(x, y) and dL(x, y) set in this embodiment.
⑤ Denote the set of all target quadrilateral meshes of {L(x, y)} as the target mesh set, and denote the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all target quadrilateral meshes of {L(x, y)} as the target depth set; then, according to the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, perform a similarity transformation on each quadrilateral mesh in {L(x, y)} such that the transformation error between the mesh obtained after the similarity transformation of the original quadrilateral mesh and its target quadrilateral mesh is minimized, obtain the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, and record the similarity transformation matrix of the target quadrilateral mesh corresponding to UL,k; where the upper-left, lower-left, upper-right and lower-right mesh vertices of the target quadrilateral mesh are taken as its 1st, 2nd, 3rd and 4th mesh vertices, each target mesh vertex (i = 1, 2, 3, 4) has a corresponding depth value and is described by its horizontal coordinate position and vertical coordinate position, (AL,k)T is the transpose of AL,k, and ((AL,k)TAL,k)-1 is the inverse of (AL,k)TAL,k.
⑥ When the size or aspect ratio of a stereoscopic image is changed, the important objects of interest to the user must be protected from stretching deformation, so the image quality needs to be maintained as far as possible during mesh deformation. Therefore, according to the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, combined with {SL(x, y)}, the method calculates the image quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EQ.
In this embodiment, the calculation process of EQ in step ⑥ is as follows:
⑥_1a. Calculate the shape preservation energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESD; where SL(k) denotes the mean of the visual saliency values of all pixel points in UL,k, i.e. the mean of the pixel values of all pixel points in the region of {SL(x, y)} corresponding to UL,k, and the symbol "|| ||" is the Euclidean-distance symbol.
Also calculate the boundary bending energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ELB; where eL,k denotes the matrix formed by the edges of all mesh vertices of UL,k, (eL,k)T is the transpose of eL,k, and ((eL,k)TeL,k)-1 is the inverse of (eL,k)TeL,k.
⑥_2a. According to ESD and ELB, calculate the image quality energy EQ of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, EQ = ESD + λLB ELB; where λLB is a weighting parameter; in this embodiment, λLB = 1.25 is taken.
In order to ensure the visual comfort and depth perception of the repositioned stereoscopic image, the invention calculates the stereoscopic quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} from the depth value of each mesh vertex of each quadrilateral mesh in {L(x, y)} and the depth value of each mesh vertex of the corresponding target quadrilateral mesh, denoted ES.
In this embodiment, the calculation process of ES in step ⑥ is as follows:
⑥_1b. Calculate the shape scaling energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESC; where the symbol "|| ||" is the Euclidean-distance symbol, the terms involved are the matrix of edges of all mesh vertices of the target quadrilateral mesh, the depth value of the i-th mesh vertex of UL,k, the depth value of the corresponding target mesh vertex, and eL,k, the matrix of edges of all mesh vertices of UL,k.
Also calculate the depth control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EDC; where exp() denotes the exponential function with the natural base e (e = 2.71828183…), the symbol "| |" is the absolute-value symbol, zmax denotes the maximum depth value of {L(x, y)}, zmin denotes the minimum depth value of {L(x, y)}, CVZmin denotes the minimum comfortable viewing zone range, e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, η1 denotes the minimum comfortable viewing angle (in this embodiment η1 = -1°), CVZmax denotes the maximum comfortable viewing zone range, and η2 denotes the maximum comfortable viewing angle (in this embodiment η2 = 1°).
⑥_2b. According to ESC and EDC, calculate the stereoscopic quality energy ES of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, ES = ESC + λDC EDC; where λDC is a weighting parameter; in this embodiment, λDC = 0.25 is taken.
To ensure the comfort and depth perception of the repositioned stereoscopic image, the method calculates, according to the size scaling ratio and depth scaling ratio of the important content selected by the user, the important content energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EI.
In this embodiment, the calculation process of EI in step ⑥ is as follows: EI is computed over the rectangular region in which the important content selected by the user is located; where xi,j denotes the horizontal coordinate position of the mesh vertex of {L(x, y)} that is j-th in the horizontal direction and i-th in the vertical direction, xi,j+1 denotes the horizontal coordinate position of the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction, zi,j denotes the depth value of the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction, the corresponding primed quantities denote the horizontal coordinate positions and depth values of the corresponding vertices in the target quadrilateral mesh, s'x denotes the user-specified horizontal scaling factor and s'z the user-specified depth scaling factor; in this embodiment, s'x = 1 and s'z = 1 are taken, i.e. the original size and depth of the important content are maintained, and the weighting parameter λDS = 0.025 is taken.
⑦ Calculate the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted Etotal, Etotal = EQ + λS × ES + λI × EI; then solve min(Etotal) by least-squares optimization to obtain the set of optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all optimal target quadrilateral meshes; then calculate the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k, and calculate the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the depth transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k; where λS and λI are both weighting parameters (in this embodiment, λS = 1.5 and λI = 1.25 are taken), min() is the minimum-taking function, (BL,k)T is the transpose of BL,k, ((BL,k)TBL,k)-1 is the inverse of (BL,k)TBL,k, and the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of the optimal target quadrilateral mesh are denoted correspondingly.
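For convenience, the parameter values stated in this embodiment are collected below as a plain configuration sketch (viewing geometry e, D, Wd, R and the weighting parameters λLB, λDC, λDS, λS, λI, plus the user scaling factors and comfort angles); the structure itself is an illustrative convention, not part of the claimed method.

```python
# Embodiment parameter values gathered from this detailed description.
EMBODIMENT_PARAMS = {
    "e_mm": 65.0, "D_mm": 1200.0, "W_d_mm": 750.0, "R_px": 1920,
    "lambda_LB": 1.25, "lambda_DC": 0.25, "lambda_DS": 0.025,
    "lambda_S": 1.5, "lambda_I": 1.25,
    "s_x": 1.0, "s_z": 1.0,          # keep original size and depth of important content
    "eta1_deg": -1.0, "eta2_deg": 1.0,
}
```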
⑧ According to the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculate the horizontal coordinate position and vertical coordinate position of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the similarity transformation matrix, recording the transformed horizontal and vertical coordinate positions of the pixel point in UL,k whose horizontal coordinate position is x'L,k and whose vertical coordinate position is y'L,k; then obtain the repositioned left viewpoint image from the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {L(x, y)}; where 1 ≤ x'L,k ≤ W, 1 ≤ y'L,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, H is also the height of the repositioned stereoscopic image, and the repositioned left viewpoint image is described by the pixel value of the pixel point with coordinate position (x', y').
And according to the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculate the depth value of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the depth transformation matrix, recording the transformed depth value of the pixel point in UL,k whose horizontal coordinate position is x'L,k, whose vertical coordinate position is y'L,k and whose depth value is z'L,k; then obtain the repositioned left viewpoint depth map from the transformed depth value of each pixel point in each quadrilateral mesh in {L(x, y)}; then obtain the repositioned left disparity image from the repositioned left viewpoint depth map, and record the pixel value of the pixel point with coordinate position (x', y') in the repositioned left disparity image; where B'L,k = [z'L,k 1].
⑨ Obtain the repositioned right viewpoint image from the repositioned left viewpoint image and the repositioned left disparity image, and record the pixel value of the pixel point with coordinate position (x', y') in the repositioned right viewpoint image; then let the repositioned left viewpoint image and the repositioned right viewpoint image form the repositioned stereoscopic image; where 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, and H is also the height of the repositioned stereoscopic image.
To further illustrate the feasibility and effectiveness of the method of the present invention, the method was tested.
The following experiments were performed using the method of the present invention to reposition four stereo images, Image1, Image2, Image3, and Image 4. FIG. 2a shows a "red/green" view of the original stereoscopic Image of "Image 1", and FIG. 2b shows a "red/green" view of "Image 1" repositioned to 60% of the width of the original stereoscopic Image; FIG. 3a shows a "red/green" view of the original stereoscopic Image of "Image 2", and FIG. 3b shows a "red/green" view of "Image 2" repositioned to 60% of the width of the original stereoscopic Image; FIG. 4a shows a "red/green" view of the original stereoscopic Image of "Image 3", and FIG. 4b shows a "red/green" view of "Image 3" repositioned to 60% of the width of the original stereoscopic Image; fig. 5a shows a "red/green" view of the original stereoscopic Image of "Image 4", and fig. 5b shows a "red/green" view of "Image 4" repositioned to 60% of the width of the original stereoscopic Image. As can be seen from fig. 2a to 5b, the repositioned stereoscopic image obtained by the method of the present invention can better retain important significant semantic information, and can also ensure the consistency of the left viewpoint image and the right viewpoint image.

Claims (1)

1. A stereoscopic image repositioning method, characterized by comprising the steps of:
① The left viewpoint image, right viewpoint image and left disparity image of a stereoscopic image of width W and height H to be processed are denoted {L(x, y)}, {R(x, y)} and {dL(x, y)}; where 1 ≤ x ≤ W, 1 ≤ y ≤ H, W and H are both evenly divisible by 8, L(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {dL(x, y)};
② Divide {L(x, y)} into non-overlapping quadrilateral meshes of size 8 × 8; then let all quadrilateral meshes in {L(x, y)} form a set, denoted VL, VL = {UL,k | 1 ≤ k ≤ M}; where UL,k denotes the k-th quadrilateral mesh in {L(x, y)}, described by the set of its 4 mesh vertices (upper-left, lower-left, upper-right and lower-right), k is a positive integer, 1 ≤ k ≤ M, M denotes the total number of quadrilateral meshes contained in {L(x, y)}, the upper-left mesh vertex of UL,k is taken as its 1st mesh vertex, the lower-left as its 2nd, the upper-right as its 3rd and the lower-right as its 4th, and each mesh vertex is described by its horizontal coordinate position and its vertical coordinate position;
③ Calculate the respective depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of each quadrilateral mesh in {L(x, y)} from their disparity values; then let the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all quadrilateral meshes in {L(x, y)} form a set, denoted ZL, ZL = {zL,k | 1 ≤ k ≤ M}; where e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, Wd denotes the horizontal width of the display, R denotes the horizontal resolution of the display, and zL,k is the set formed by the depth values of the 4 mesh vertices of UL,k, each obtained from the disparity value of the corresponding vertex;
④ Extract the saliency map of {L(x, y)} using a graph-based visual saliency model, denoted {SML(x, y)}; then obtain the visual saliency map of {L(x, y)} from {SML(x, y)} and {dL(x, y)}, denoted {SL(x, y)}, and record the pixel value of the pixel point with coordinate position (x, y) in {SL(x, y)} as SL(x, y); where SML(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in {SML(x, y)}, and SL(x, y) is obtained as the weighted combination of SML(x, y) and dL(x, y), each term carrying its own weight;
⑤ Denote the set of all target quadrilateral meshes of {L(x, y)} as the target mesh set, and denote the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all target quadrilateral meshes of {L(x, y)} as the target depth set; then, according to the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, perform a similarity transformation on each quadrilateral mesh in {L(x, y)} such that the transformation error between the mesh obtained after the similarity transformation of the original quadrilateral mesh and its target quadrilateral mesh is minimized, obtain the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, and record the similarity transformation matrix of the target quadrilateral mesh corresponding to UL,k; where the upper-left, lower-left, upper-right and lower-right mesh vertices of the target quadrilateral mesh are taken as its 1st, 2nd, 3rd and 4th mesh vertices, each target mesh vertex (i = 1, 2, 3, 4) has a corresponding depth value and is described by its horizontal coordinate position and vertical coordinate position, (AL,k)T is the transpose of AL,k, and ((AL,k)TAL,k)-1 is the inverse of (AL,k)TAL,k;
⑥ According to the similarity transformation matrix of the target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, combined with {SL(x, y)}, calculating the image quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EQ;
According to the depth value of each mesh vertex of each quadrilateral mesh in {L(x, y)} and the depth value of each mesh vertex of the corresponding target quadrilateral mesh, calculating the stereoscopic quality energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ES;
According to the size scaling ratio and depth scaling ratio of the important content selected by the user, calculating the important content energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EI;
⑦ Calculating the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted Etotal, Etotal = EQ + λS × ES + λI × EI; then solving min(Etotal) by least-squares optimization to obtain the set of optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and the set of depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of all optimal target quadrilateral meshes; then calculating the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k, and calculating the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, recording the depth transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k; where λS and λI are both weighting parameters, min() is the minimum-taking function, (BL,k)T is the transpose of BL,k, ((BL,k)TBL,k)-1 is the inverse of (BL,k)TBL,k, and the depth values of the upper-left, lower-left, upper-right and lower-right mesh vertices of the optimal target quadrilateral mesh are denoted correspondingly;
⑧ According to the similarity transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculating the horizontal coordinate position and vertical coordinate position of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the similarity transformation matrix, recording the transformed horizontal and vertical coordinate positions of the pixel point in UL,k whose horizontal coordinate position is x'L,k and whose vertical coordinate position is y'L,k; then obtaining the repositioned left viewpoint image from the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {L(x, y)}; where 1 ≤ x'L,k ≤ W, 1 ≤ y'L,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, H is also the height of the repositioned stereoscopic image, and the repositioned left viewpoint image is described by the pixel value of the pixel point with coordinate position (x', y');
And according to the depth transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculating the depth value of each pixel point in each quadrilateral mesh in {L(x, y)} after transformation by the depth transformation matrix, recording the transformed depth value of the pixel point in UL,k whose horizontal coordinate position is x'L,k, whose vertical coordinate position is y'L,k and whose depth value is z'L,k; then obtaining the repositioned left viewpoint depth map from the transformed depth value of each pixel point in each quadrilateral mesh in {L(x, y)}; then obtaining the repositioned left disparity image from the repositioned left viewpoint depth map, and recording the pixel value of the pixel point with coordinate position (x', y') in the repositioned left disparity image; where B'L,k = [z'L,k 1];
⑨ Obtaining the repositioned right viewpoint image from the repositioned left viewpoint image and the repositioned left disparity image, and recording the pixel value of the pixel point with coordinate position (x', y') in the repositioned right viewpoint image; then letting the repositioned left viewpoint image and the repositioned right viewpoint image form the repositioned stereoscopic image; where 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the repositioned stereoscopic image, and H is also the height of the repositioned stereoscopic image;
The calculation process of EQ in step ⑥ is as follows:
⑥_1a. Calculating the shape preservation energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESD; where SL(k) denotes the mean of the visual saliency values of all pixel points in UL,k, i.e. the mean of the pixel values of all pixel points in the region of {SL(x, y)} corresponding to UL,k, and the symbol "|| ||" is the Euclidean-distance symbol;
And calculating the boundary bending energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ELB; where eL,k denotes the matrix formed by the edges of all mesh vertices of UL,k, (eL,k)T is the transpose of eL,k, and ((eL,k)TeL,k)-1 is the inverse of (eL,k)TeL,k;
⑥_2a. According to ESD and ELB, calculating the image quality energy EQ of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, EQ = ESD + λLB ELB; where λLB is a weighting parameter;
The calculation process of ES in step ⑥ is as follows:
⑥_1b. Calculating the shape scaling energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted ESC; where the symbol "|| ||" is the Euclidean-distance symbol, the terms involved are the matrix of edges of all mesh vertices of the target quadrilateral mesh, the depth value of the i-th mesh vertex of UL,k, the depth value of the corresponding target mesh vertex, and eL,k, the matrix of edges of all mesh vertices of UL,k;
And calculating the depth control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, denoted EDC; where exp() denotes the exponential function with the natural base e, the symbol "| |" is the absolute-value symbol, zmax denotes the maximum depth value of {L(x, y)}, zmin denotes the minimum depth value of {L(x, y)}, CVZmin denotes the minimum comfortable viewing zone range, e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed, D denotes the viewing distance between the left and right viewpoints of the stereoscopic image to be processed and the display, η1 denotes the minimum comfortable viewing angle, CVZmax denotes the maximum comfortable viewing zone range, and η2 denotes the maximum comfortable viewing angle;
⑥_2b. According to ESC and EDC, calculating the stereoscopic quality energy ES of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)}, ES = ESC + λDC EDC; where λDC is a weighting parameter;
The calculation process of EI in step ⑥ is as follows: EI is computed over the rectangular region in which the important content selected by the user is located; where xi,j denotes the horizontal coordinate position of the mesh vertex of {L(x, y)} that is j-th in the horizontal direction and i-th in the vertical direction, xi,j+1 denotes the horizontal coordinate position of the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction, zi,j denotes the depth value of the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction, the corresponding primed quantities denote the horizontal coordinate positions and depth values of the corresponding vertices in the target quadrilateral mesh, s'x denotes the user-specified horizontal scaling factor, s'z denotes the user-specified depth scaling factor, and λDS is a weighting parameter.
CN201711399351.4A 2017-12-22 2017-12-22 A kind of stereo-picture method for relocating Active CN108307170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711399351.4A CN108307170B (en) 2017-12-22 2017-12-22 A kind of stereo-picture method for relocating

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711399351.4A CN108307170B (en) 2017-12-22 2017-12-22 A kind of stereo-picture method for relocating

Publications (2)

Publication Number Publication Date
CN108307170A CN108307170A (en) 2018-07-20
CN108307170B true CN108307170B (en) 2019-09-10

Family

ID=62870859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711399351.4A Active CN108307170B (en) 2017-12-22 2017-12-22 A kind of stereo-picture method for relocating

Country Status (1)

Country Link
CN (1) CN108307170B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126243B (en) * 2019-12-19 2023-04-07 北京科技大学 Image data detection method and device and computer readable storage medium
CN112561993B (en) * 2020-12-07 2023-04-28 宁波大学 Stereo image repositioning method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006221603A (en) * 2004-08-09 2006-08-24 Toshiba Corp Three-dimensional-information reconstructing apparatus, method and program
US8494254B2 (en) * 2010-08-31 2013-07-23 Adobe Systems Incorporated Methods and apparatus for image rectification for stereo display

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574404A (en) * 2015-01-14 2015-04-29 宁波大学 Three-dimensional image relocation method
CN106504186A (en) * 2016-09-30 2017-03-15 天津大学 A kind of stereo-picture reorientation method
CN106570900A (en) * 2016-10-11 2017-04-19 宁波大学 Three-dimensional image relocation method
CN107105214A (en) * 2017-03-16 2017-08-29 宁波大学 A kind of 3 d video images method for relocating

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A three-dimensional video retargeting method based on human visual attention; Lin Wenchong et al.; Journal of Optoelectronics·Laser (《光电子·激光》); 2016-03-15; Vol. 27, No. 3; full text

Also Published As

Publication number Publication date
CN108307170A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN107105214B (en) A kind of 3 d video images method for relocating
CN102741879B (en) Method for generating depth maps from monocular images and systems using the same
US7903111B2 (en) Depth image-based modeling method and apparatus
CN106570900B (en) A kind of stereo-picture method for relocating
CN102098528B (en) Method and device for converting planar image into stereoscopic image
CN102905145B (en) Stereoscopic image system, image generation method, image adjustment device and method thereof
CN104574404B (en) A kind of stereo-picture method for relocating
KR101584115B1 (en) Device for generating visual attention map and method thereof
CN101610425A (en) A kind of method and apparatus of evaluating stereo image quality
JP5862635B2 (en) Image processing apparatus, three-dimensional data generation method, and program
CN108307170B (en) A kind of stereo-picture method for relocating
CN101662695B (en) Method and device for acquiring virtual viewport
CN110719453B (en) Three-dimensional video clipping method
CN103826114A (en) Stereo display method and free stereo display apparatus
KR20170025214A (en) Method for Multi-view Depth Map Generation
CN108810512B (en) A kind of object-based stereo-picture depth method of adjustment
CN112449170B (en) Stereo video repositioning method
CN108124148A (en) A kind of method and device of the multiple view images of single view video conversion
CN105791798B (en) A kind of 4K based on GPU surpasses the real-time method for transformation of multiple views 3D videos and device
CN109257591A (en) Based on rarefaction representation without reference stereoscopic video quality method for objectively evaluating
CN110149509B (en) Three-dimensional video repositioning method
CN109413404B (en) A kind of stereo-picture Zooming method
CN108833876B (en) A kind of stereoscopic image content recombination method
CN109741465B (en) Image processing method and device and display device
CN112702590B (en) Three-dimensional image zooming method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230825

Address after: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20230825

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: No. 818 Fenghua Road, Jiangbei District, Ningbo, Zhejiang, 315211

Patentee before: Ningbo University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240129

Address after: Room 604, Building 57, No. 1 Jizuoyuan East Street, Qinhuai District, Nanjing City, Jiangsu Province, 210000

Patentee after: Mao Yi

Country or region after: China

Patentee after: Xia Ling

Patentee after: Yang Yi

Patentee after: Zhang Tingting

Address before: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240222

Address after: 2nd Floor, Hualuogeng Innovation Center Office Building, No. 90 Jintan Avenue, Jintan District, Changzhou City, Jiangsu Province, 215000

Patentee after: Jiangsu Haijiang Aerospace Technology Co.,Ltd.

Country or region after: China

Address before: Room 604, Building 57, No. 1 Jizuoyuan East Street, Qinhuai District, Nanjing City, Jiangsu Province, 210000

Patentee before: Mao Yi

Country or region before: China

Patentee before: Xia Ling

Patentee before: Yang Yi

Patentee before: Zhang Tingting