CN112702590B - Three-dimensional image zooming method - Google Patents
- Publication number
- CN112702590B (application CN202011417735.6A)
- Authority
- CN
- China
- Prior art keywords
- coordinate position
- quadrilateral
- mesh
- vertex
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a three-dimensional image zooming method. The method extracts the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling on the boundary of a user-selected object, the background holding energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling in the background region, and the size control energy and left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling within the user-selected object. It minimizes the total energy by optimization, thereby obtaining the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in the left and right viewpoint images, and produces the zoomed three-dimensional image according to the affine transformation matrix of each optimal target quadrilateral mesh. The advantage is that the zoomed stereoscopic image retains an accurate object shape and an accurate target focusing depth, offers the immersion of close-distance viewing with a stronger sense of depth, and thus achieves higher three-dimensional experience quality.
Description
Technical Field
The present invention relates to a method for processing an image signal, and more particularly, to a method for zooming a stereoscopic image.
Background
With the rapid development of 3D technology, stereoscopic images and videos have attracted increasing attention and favor. In particular, with the spread of mobile phones, tablets and personal computers, display on mobile terminals has become more and more popular with users. However, when a stereoscopic image or video is displayed on a mobile terminal screen, the stereoscopic sensation may be reduced or even disappear; film producers therefore attempt to increase the stereoscopic sensation of a specific object by adjusting its size and depth so as to focus the viewer's attention on it. Accordingly, for stereoscopic images and videos displayed on a mobile terminal screen, the attention drawn to an object and its sense of depth can be enhanced by adjusting the focusing depth of the camera.
Methods for adjusting the focusing depth of a stereoscopic image fall roughly into two categories: depth adjustment using a depth map and depth adjustment without one. The former requires an accurate depth map and generates the depth-adjusted stereoscopic image with a virtual viewpoint rendering technique; the latter achieves depth adjustment directly by moving pixels in the stereoscopic image, but often produces holes or deforms objects after the adjustment. Therefore, how to reduce image deformation after depth adjustment, and how to control the adjustment of an object according to a user-specified camera focal length so as to highlight salient content, are problems that need to be studied and solved when adjusting the focusing depth of a stereoscopic image.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stereoscopic image zooming method in which the zoomed stereoscopic image has the immersion of close-distance viewing and a stronger sense of depth, and thus achieves higher three-dimensional experience quality.
The technical scheme adopted by the invention for solving the technical problems is as follows: a stereoscopic image zooming method characterized by comprising the steps of:
Step one: the left viewpoint image, right viewpoint image and left disparity image of the stereoscopic image to be processed, of width W and height H, are correspondingly denoted as {L(x,y)}, {R(x,y)} and {d_L(x,y)}; where W and H are divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {L(x,y)}, R(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {R(x,y)}, and d_L(x,y) denotes the pixel value of the pixel at coordinate position (x,y) in {d_L(x,y)};
Step two: use the SIFT-Flow method to establish the matching relation between {L(x,y)} and {R(x,y)} and obtain the SIFT-Flow vector of every pixel in {L(x,y)}; denote the SIFT-Flow vector of the pixel at coordinate position (x,y) in {L(x,y)} as $v_{L}(x,y)=v_{L}^{h}(x,y)\,\vec{h}+v_{L}^{v}(x,y)\,\vec{v}$, where $\vec{h}$ indicates the horizontal direction, $\vec{v}$ indicates the vertical direction, $v_{L}^{h}(x,y)$ denotes the horizontal offset of $v_{L}(x,y)$, and $v_{L}^{v}(x,y)$ denotes the vertical offset of $v_{L}(x,y)$;
Step three: denote the coordinate position of the image principal point of {L(x,y)} as $(x_{O}^{L},y_{O}^{L})$ and that of {R(x,y)} as $(x_{O}^{R},y_{O}^{R})$. According to the SIFT-Flow vector of the pixel at coordinate position $(x_{O}^{L},y_{O}^{L})$ in {L(x,y)}, i.e. of the image principal point of {L(x,y)}, determine the pixel in {R(x,y)} matched with the image principal point of {L(x,y)}; the coordinate position of this matched pixel is $\big(x_{O}^{L}+v_{L}^{h}(x_{O}^{L},y_{O}^{L}),\;y_{O}^{L}+v_{L}^{v}(x_{O}^{L},y_{O}^{L})\big)$. Then calculate the vertical deviation of {L(x,y)} and {R(x,y)}, denoted b: $b=v_{L}^{v}(x_{O}^{L},y_{O}^{L})$, where $v_{L}^{h}(x_{O}^{L},y_{O}^{L})$ and $v_{L}^{v}(x_{O}^{L},y_{O}^{L})$ denote the horizontal and vertical offsets of the SIFT-Flow vector of the image principal point of {L(x,y)};
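As a minimal sketch of step three, assuming a dense correspondence field is already available (the patent uses SIFT-Flow; any dense flow estimator could stand in), the matched principal point and the vertical deviation b follow directly from the flow at the principal point. The array layout and function name are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def vertical_deviation(flow, principal_xy):
    """Locate the right-view match of the left image principal point and
    return the vertical deviation b between the two views.

    flow: H x W x 2 array; flow[y, x] = (horizontal, vertical) offset of
          the left-view pixel (x, y) toward its right-view match.
    principal_xy: (x, y) coordinate of the left image principal point
                  (1-indexed, as in the patent text).
    """
    x, y = principal_xy
    dh, dv = flow[y - 1, x - 1]      # 0-indexed array access
    matched = (x + dh, y + dv)       # matched pixel in the right view
    b = dv                           # vertical deviation of the two views
    return matched, b

# toy flow field: every pixel shifts 3 px right and 2 px down
flow = np.zeros((8, 8, 2))
flow[..., 0] = 3.0
flow[..., 1] = 2.0
matched, b = vertical_deviation(flow, (4, 4))
```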
Step four: let the focal length of {L(x,y)} and {R(x,y)} be $f_{0}$ and their object distance be $u$. Then, according to the focal length f specified by the user, calculate the magnification of {L(x,y)} and {R(x,y)}, denoted a: $a=\theta/\theta_{0}$, where $\theta$ is the image distance determined by the user-specified focal length f and the object distance $u$, $\theta=\frac{f\,u}{u-f}$ (from the Gaussian lens equation $\tfrac{1}{f}=\tfrac{1}{u}+\tfrac{1}{\theta}$), and $\theta_{0}$ is the image distance determined by the focal length $f_{0}$ and the object distance $u$, $\theta_{0}=\frac{f_{0}\,u}{u-f_{0}}$;
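The magnification of step four can be sketched with the Gaussian thin-lens relation 1/f = 1/u + 1/θ, which gives the image distance θ = f·u/(u − f). The numeric values below are illustrative, not taken from the patent:

```python
def image_distance(focal_mm, object_mm):
    """Image distance theta from the thin-lens equation 1/f = 1/u + 1/theta."""
    return focal_mm * object_mm / (object_mm - focal_mm)

def magnification(f, f0, object_mm):
    """Magnification a = theta / theta0 for a user-specified focal length f
    relative to the capture focal length f0, at object distance u."""
    return image_distance(f, object_mm) / image_distance(f0, object_mm)

# zooming in (f > f0) at a 5 m object distance enlarges the image (a > 1)
a = magnification(f=28.74, f0=27.25, object_mm=5000.0)
```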
Step five: divide {L(x,y)} into M non-overlapping quadrilateral meshes of size 22×22, and denote the k-th quadrilateral mesh in {L(x,y)} as $U_{L,k}$; then, according to all quadrilateral meshes in {L(x,y)} and {d_L(x,y)}, obtain all non-overlapping quadrilateral meshes of size 22×22 in {R(x,y)}, denoting the k-th quadrilateral mesh as $U_{R,k}$. Here $M=\lfloor W/22\rfloor\times\lfloor H/22\rfloor$, the symbol $\lfloor\,\rfloor$ is the floor operator, k is a positive integer, and $1\le k\le M$. $U_{L,k}$ is described by the set of its four mesh vertices: the top-left mesh vertex as the 1st, the bottom-left as the 2nd, the top-right as the 3rd and the bottom-right as the 4th, each described by its horizontal coordinate position and vertical coordinate position. $U_{R,k}$ is described likewise, and each mesh vertex of $U_{R,k}$ is obtained from the corresponding mesh vertex of $U_{L,k}$ by shifting its horizontal coordinate position by the left disparity at that vertex, i.e. a vertex at $(x,y)$ in $U_{L,k}$ yields the vertex $(x-d_{L}(x,y),\,y)$ of $U_{R,k}$, where $d_L(x,y)$ denotes the pixel value of the pixel at coordinate position (x,y) in {d_L(x,y)};
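A sketch of the mesh construction in step five, assuming the left disparity map gives, for each left-view pixel, the horizontal shift to its right-view counterpart (the subtraction convention x_R = x_L − d_L is an assumption about the disparity sign):

```python
import numpy as np

CELL = 22  # mesh size used by the patent

def left_mesh_vertices(W, H):
    """Vertex grid of the non-overlapping CELL x CELL quadrilateral meshes.

    Returns a (rows+1) x (cols+1) x 2 array of (x, y) vertex positions,
    where rows = H // CELL and cols = W // CELL, so there are
    M = rows * cols meshes, each described by its 4 corner vertices.
    """
    cols, rows = W // CELL, H // CELL
    xs = np.arange(cols + 1) * CELL
    ys = np.arange(rows + 1) * CELL
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx, gy], axis=-1)

def right_mesh_vertices(left_verts, disparity):
    """Shift each left-view vertex horizontally by the left disparity
    sampled at that vertex (convention x_R = x_L - d_L assumed)."""
    right = left_verts.astype(float).copy()
    H, W = disparity.shape
    xs = np.clip(left_verts[..., 0], 0, W - 1)   # clamp border vertices
    ys = np.clip(left_verts[..., 1], 0, H - 1)
    right[..., 0] -= disparity[ys, xs]
    return right

W, H = 44, 44                  # two meshes per side -> M = 4
lv = left_mesh_vertices(W, H)  # 3 x 3 grid of vertices
rv = right_mesh_vertices(lv, np.full((H, W), 5.0))
```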
Step six: from the magnification a of {L(x,y)} and {R(x,y)} and the vertical deviation b of {L(x,y)} and {R(x,y)}, calculate the desired mesh of each quadrilateral mesh in {L(x,y)}, denoting the desired mesh of $U_{L,k}$ as $\hat U_{L,k}$; similarly, calculate the desired mesh of each quadrilateral mesh in {R(x,y)} from a and b, denoting the desired mesh of $U_{R,k}$ as $\hat U_{R,k}$. $\hat U_{L,k}$ and $\hat U_{R,k}$ are each described by the set of their four mesh vertices: the top-left mesh vertex as the 1st, the bottom-left as the 2nd, the top-right as the 3rd and the bottom-right as the 4th, each described by its horizontal coordinate position and vertical coordinate position; the vertices of the desired meshes correspond one-to-one to the vertices of $U_{L,k}$ and $U_{R,k}$;
Step seven: each quadrilateral mesh in {L(x,y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to $U_{L,k}$ as $\bar U_{L,k}$. Similarly, each quadrilateral mesh in {R(x,y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to $U_{R,k}$ as $\bar U_{R,k}$. $\bar U_{L,k}$ and $\bar U_{R,k}$ are each described by the set of their four target mesh vertices: the top-left mesh vertex as the 1st, the bottom-left as the 2nd, the top-right as the 3rd and the bottom-right as the 4th, each described by its horizontal coordinate position and vertical coordinate position, corresponding one-to-one to the vertices of $U_{L,k}$ and $U_{R,k}$;
Step eight: the user manually selects an object in the stereoscopic image to be processed through an editing operation. Then, according to the desired meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, calculate the coordinate offset energy of the corresponding target quadrilateral meshes, denoted $E_{object}$: $E_{object}=\sum_{k\in\Phi_{L}}\sum_{t=1}^{4}\big\|\bar v_{L,k}^{\,t}-\hat v_{L,k}^{\,t}\big\|^{2}+\sum_{k\in\Phi_{R}}\sum_{t=1}^{4}\big\|\bar v_{R,k}^{\,t}-\hat v_{R,k}^{\,t}\big\|^{2}$, where the symbol "‖ ‖" is the Euclidean distance operator, t is a positive integer with t = 1, 2, 3, 4, $\bar v_{L,k}^{\,t}$ and $\hat v_{L,k}^{\,t}$ denote the t-th mesh vertex of the target quadrilateral mesh and of the desired mesh of $U_{L,k}$ respectively, $\Phi_{L}$ is the set of indices of the quadrilateral meshes in {L(x,y)} falling within the user-selected object, and $\bar v_{R,k}^{\,t}$, $\hat v_{R,k}^{\,t}$ and $\Phi_{R}$ are defined analogously for {R(x,y)};
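The coordinate offset energy of step eight (and the analogous edge and background terms of steps nine and ten) is a sum of squared Euclidean distances between target and desired vertices. A minimal sketch with hypothetical vertex arrays:

```python
import numpy as np

def offset_energy(target_verts, desired_verts):
    """Sum, over meshes and their 4 vertices, of the squared Euclidean
    distance between each target vertex and its desired vertex.

    Both arrays have shape (num_meshes, 4, 2): per mesh, the 4 vertices
    (top-left, bottom-left, top-right, bottom-right) as (x, y)."""
    diff = target_verts - desired_verts
    return float(np.sum(diff ** 2))

desired = np.zeros((2, 4, 2))
target = desired + 1.0              # every coordinate off by 1
e = offset_energy(target, desired)  # 2 meshes * 4 vertices * 2 coords
```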
Step nine: according to the desired meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall on the boundary of the user-selected object, calculate the coordinate offset energy of the corresponding target quadrilateral meshes, denoted $E_{edge}$: $E_{edge}=\sum_{k\in\Theta_{L}}\sum_{t=1}^{4}\big\|\bar v_{L,k}^{\,t}-\hat v_{L,k}^{\,t}\big\|^{2}+\sum_{k\in\Theta_{R}}\sum_{t=1}^{4}\big\|\bar v_{R,k}^{\,t}-\hat v_{R,k}^{\,t}\big\|^{2}$, where $\bar v_{L,k}^{\,t}$ and $\hat v_{L,k}^{\,t}$ denote the t-th mesh vertex of the target quadrilateral mesh and of the desired mesh of $U_{L,k}$ respectively, and $\Theta_{L}$ and $\Theta_{R}$ are the sets of indices of the quadrilateral meshes in {L(x,y)} and {R(x,y)} respectively that fall on the user-selected object boundary;
Step ten: according to the desired meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall in the background region, calculate the background holding energy of the corresponding target quadrilateral meshes, denoted $E_{back}$: $E_{back}=\sum_{k\in\Psi_{L}}\sum_{t=1}^{4}\big\|\bar v_{L,k}^{\,t}-\hat v_{L,k}^{\,t}\big\|^{2}+\sum_{k\in\Psi_{R}}\sum_{t=1}^{4}\big\|\bar v_{R,k}^{\,t}-\hat v_{R,k}^{\,t}\big\|^{2}$, where the background region is the region of the stereoscopic image to be processed other than the region occupied by the user-selected object, $\bar v_{L,k}^{\,t}$ and $\hat v_{L,k}^{\,t}$ denote the t-th mesh vertex of the target quadrilateral mesh and of the desired mesh of $U_{L,k}$ respectively, and $\Psi_{L}$ and $\Psi_{R}$ are the sets of indices of the quadrilateral meshes in {L(x,y)} and {R(x,y)} respectively that fall in the background region;
Step eleven: calculate the size control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, denoted $E_{import}$: $E_{import}=\sum_{(i,j)}\big\|\big(\bar v_{L}^{\,i,j+1}-\bar v_{L}^{\,i,j}\big)-s\big(v_{L}^{\,i,j+1}-v_{L}^{\,i,j}\big)\big\|^{2}+\sum_{(i,j)}\big\|\big(\bar v_{R}^{\,i,j+1}-\bar v_{R}^{\,i,j}\big)-s\big(v_{R}^{\,i,j+1}-v_{R}^{\,i,j}\big)\big\|^{2}$, where $v_{L}^{\,i,j}$ denotes the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction in {L(x,y)}, $v_{L}^{\,i,j+1}$ denotes the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction in {L(x,y)}, $\bar v_{L}^{\,i,j}$ and $\bar v_{L}^{\,i,j+1}$ denote the corresponding target mesh vertices, $v_{R}^{\,i,j}$, $v_{R}^{\,i,j+1}$, $\bar v_{R}^{\,i,j}$ and $\bar v_{R}^{\,i,j+1}$ are defined analogously for {R(x,y)}, s denotes the user-specified scaling factor, and the sums run over vertex pairs belonging to quadrilateral meshes falling within the user-selected object;
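The size-control energy of step eleven compares each horizontal edge between neighboring target vertices against s times the corresponding original edge. This is a sketch under that reading; the exact summation domain in the patent's formula image is not recoverable, so the grid layout here is an assumption:

```python
import numpy as np

def size_control_energy(orig_verts, target_verts, s):
    """Penalize target-mesh horizontal edges whose vector differs from
    s times the original edge vector.

    orig_verts, target_verts: (rows, cols, 2) vertex grids; entry [i, j]
    is the vertex i-th in the vertical and j-th in the horizontal direction."""
    orig_edges = orig_verts[:, 1:] - orig_verts[:, :-1]
    target_edges = target_verts[:, 1:] - target_verts[:, :-1]
    return float(np.sum((target_edges - s * orig_edges) ** 2))

xs, ys = np.meshgrid(np.arange(3) * 22.0, np.arange(3) * 22.0)
orig = np.stack([xs, ys], axis=-1)
scaled = orig * 1.5   # exact uniform scaling by s = 1.5
e = size_control_energy(orig, scaled, s=1.5)
```

When the target grid is exactly the original grid scaled by s, every edge matches and the energy vanishes; any deviation from that scaling is penalized quadratically.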
Step twelve: calculate the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, denoted $E_{depth}$. This term constrains, for each vertex t (t = 1, 2, 3, 4) of each such mesh pair, the horizontal coordinate positions of the corresponding left- and right-view target vertices so that their disparity is consistent with the disparity prescribed for the adjusted focusing depth, and their vertical coordinate positions so that they agree; here the pixel value of {d_L(x,y)} is sampled at the position of the corresponding mesh vertex, and e denotes the horizontal baseline distance between the left and right viewpoints of the stereoscopic image to be processed;
Step thirteen: from $E_{object}$, $E_{edge}$, $E_{back}$, $E_{import}$ and $E_{depth}$, calculate the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)}, denoted $E_{total}$: $E_{total}=\lambda_{1}E_{object}+\lambda_{2}E_{edge}+\lambda_{3}E_{back}+\lambda_{4}E_{import}+\lambda_{5}E_{depth}$, where $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\lambda_{4}$ and $\lambda_{5}$ are all weighting parameters. Then minimize $E_{total}$ by least-squares optimization over the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and the set corresponding to all quadrilateral meshes in {R(x,y)}, obtaining the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in each view; the optimal target quadrilateral mesh corresponding to $U_{L,k}$ and that corresponding to $U_{R,k}$ are each described by the set of their four mesh vertices (top-left as the 1st, bottom-left as the 2nd, top-right as the 3rd, bottom-right as the 4th), each vertex given by its horizontal and vertical coordinate position. Next, calculate the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh: for $U_{L,k}$, the affine transformation matrix of its optimal target quadrilateral mesh is obtained by least squares as $\big((A_{L,k})^{T}A_{L,k}\big)^{-1}(A_{L,k})^{T}B_{L,k}$, where $A_{L,k}$ stacks the coordinate positions of the four mesh vertices of $U_{L,k}$ (in homogeneous form), $B_{L,k}$ stacks the coordinate positions of the four mesh vertices of its optimal target quadrilateral mesh, $(A_{L,k})^{T}$ is the transpose of $A_{L,k}$, and $\big((A_{L,k})^{T}A_{L,k}\big)^{-1}$ is the inverse of $(A_{L,k})^{T}A_{L,k}$; the affine transformation matrix of the optimal target quadrilateral mesh corresponding to $U_{R,k}$ is obtained in the same way from $A_{R,k}$ and $B_{R,k}$;
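The per-mesh affine transformation of step thirteen can be sketched as an ordinary least-squares fit from the four original vertices to the four optimal target vertices, matching the normal-equation form ((A^T A)^{-1} A^T …) that survives in the text; the exact stacking of A is an assumption, since the patent's matrix layout is illegible:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map sending the 4 src vertices to dst.

    src, dst: (4, 2) arrays of (x, y) vertex positions.
    Returns a 2 x 3 matrix T with [x', y']^T = T @ [x, y, 1]^T.
    """
    A = np.hstack([src, np.ones((4, 1))])       # 4 x 3 design matrix
    # normal equations (A^T A)^{-1} A^T dst, one column per output coordinate
    T = np.linalg.solve(A.T @ A, A.T @ dst)     # 3 x 2
    return T.T                                  # 2 x 3

src = np.array([[0.0, 0.0], [0.0, 22.0], [22.0, 0.0], [22.0, 22.0]])
dst = src * 2.0 + np.array([5.0, -3.0])         # scale 2, translate (5, -3)
T = fit_affine(src, dst)
mapped = T @ np.array([11.0, 11.0, 1.0])        # mesh center
```

Because the four corners of a quadrilateral mesh over-determine the six affine parameters, the fit is exact when the target mesh really is an affine image of the source, and a least-squares compromise otherwise.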
Step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x,y)}, calculate the horizontal and vertical coordinate position of every pixel in every quadrilateral mesh of {L(x,y)} after transformation by that affine transformation matrix; for the pixel in $U_{L,k}$ whose horizontal coordinate position is $x'_{L,k}$ and whose vertical coordinate position is $y'_{L,k}$, its transformed horizontal and vertical coordinate positions are recorded correspondingly. Then, according to the transformed coordinate positions of all pixels in all quadrilateral meshes of {L(x,y)}, obtain the zoomed left viewpoint image; here $1\le x'_{L,k}\le W$, $1\le y'_{L,k}\le H$, $1\le x'\le W'$, $1\le y'\le H$, W' denotes the width of the zoomed stereoscopic image, H is also the height of the zoomed stereoscopic image, and the zoomed left viewpoint image gives the pixel value of the pixel whose coordinate position is (x', y').
Similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x,y)}, calculate the horizontal and vertical coordinate position of every pixel in every quadrilateral mesh of {R(x,y)} after transformation by that affine transformation matrix; for the pixel in $U_{R,k}$ whose horizontal coordinate position is $x'_{R,k}$ and whose vertical coordinate position is $y'_{R,k}$, its transformed coordinate positions are recorded correspondingly, and the zoomed right viewpoint image is obtained from the transformed coordinate positions of all pixels in all quadrilateral meshes of {R(x,y)}; here $1\le x'_{R,k}\le W$, $1\le y'_{R,k}\le H$, $1\le x'\le W'$, $1\le y'\le H$, and the zoomed right viewpoint image gives the pixel value of the pixel whose coordinate position is (x', y').
The zoomed stereoscopic image is composed of the zoomed left viewpoint image and the zoomed right viewpoint image.
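The resampling of step fourteen can be sketched as a forward mapping: each pixel of each mesh is pushed through its mesh's affine matrix and splatted into the output. Nearest-neighbor splatting and the function signature below are simplifying assumptions; the patent does not specify the interpolation:

```python
import numpy as np

def warp_mesh(image, out, x0, y0, cell, T):
    """Forward-map the pixels of the cell x cell mesh whose top-left
    corner is (x0, y0) through the 2 x 3 affine matrix T into out."""
    H_out, W_out = out.shape[:2]
    for y in range(y0, y0 + cell):
        for x in range(x0, x0 + cell):
            xp, yp = T @ np.array([x, y, 1.0])   # transformed position
            xi, yi = int(round(xp)), int(round(yp))
            if 0 <= xi < W_out and 0 <= yi < H_out:
                out[yi, xi] = image[y, x]         # nearest-neighbor splat
    return out

img = np.arange(16.0).reshape(4, 4)
out = np.zeros((4, 8))                            # output widened by the zoom
T = np.array([[2.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # stretch x by 2
warp_mesh(img, out, 0, 0, 4, T)
```

A production implementation would iterate over all meshes with their respective matrices and fill the gaps a forward map leaves (e.g. by inverse mapping with interpolation).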
Compared with the prior art, the invention has the advantages that:
the method comprises the steps of firstly obtaining an expected grid of each quadrilateral grid in a left viewpoint image and a right viewpoint image of a three-dimensional image, then extracting coordinate offset energy of target quadrilateral grids corresponding to all quadrilateral grids falling in an object selected by a user, coordinate offset energy of target quadrilateral grids corresponding to all quadrilateral grids falling in an object boundary selected by the user, background holding energy of target quadrilateral grids corresponding to all quadrilateral grids falling in a region, size control energy of target quadrilateral grids corresponding to all quadrilateral grids falling in the object selected by the user, left-right consistency energy of target quadrilateral grids corresponding to all quadrilateral grids falling in the object selected by the user, and optimizing to minimize the total energy to further obtain an optimal target corresponding to each quadrilateral grid in the left viewpoint image and the right viewpoint image of the three-dimensional image The method comprises the steps of obtaining a zoomed left viewpoint image and a zoomed right viewpoint image according to an affine transformation matrix of each optimal target quadrilateral grid, so that the zoomed three-dimensional image can keep an accurate object shape and an accurate target focusing depth, the zoomed three-dimensional image has an immersion feeling of close-distance viewing and a higher depth feeling, and higher three-dimensional experience quality can be obtained.
Drawings
FIG. 1 is a block diagram of a general implementation of the method of the present invention;
FIG. 2a is a left viewpoint image of the original stereoscopic image "Image 1";
FIG. 2b is a right viewpoint image of the original stereoscopic image "Image 1";
FIG. 2c is a left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.25 mm;
FIG. 2d is a right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.25 mm;
FIG. 2e is a left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.99 mm;
FIG. 2f is a right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.99 mm;
FIG. 2g is a left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 28.74 mm;
FIG. 2h is a right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 28.74 mm;
FIG. 3a is a left viewpoint image of the original stereoscopic image "Image 2";
FIG. 3b is a right viewpoint image of the original stereoscopic image "Image 2";
FIG. 3c is a left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.25 mm;
FIG. 3d is a right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.25 mm;
FIG. 3e is a left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.99 mm;
FIG. 3f is a right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.99 mm;
FIG. 3g is a left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 28.74 mm;
FIG. 3h is a right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 28.74 mm;
FIG. 4a is a left viewpoint image of the original stereoscopic image "Image 3";
FIG. 4b is a right viewpoint image of the original stereoscopic image "Image 3";
FIG. 4c is a left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.25 mm;
FIG. 4d is a right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.25 mm;
FIG. 4e is a left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.99 mm;
FIG. 4f is a right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.99 mm;
FIG. 4g is a left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 28.74 mm;
FIG. 4h is a right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 28.74 mm;
FIG. 5a is a left viewpoint image of the original stereoscopic image "Image 4";
FIG. 5b is a right viewpoint image of the original stereoscopic image "Image 4";
FIG. 5c is a left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.25 mm;
FIG. 5d is a right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.25 mm;
FIG. 5e is a left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.99 mm;
FIG. 5f is a right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.99 mm;
FIG. 5g is a left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 28.74 mm;
FIG. 5h is a right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 28.74 mm;
FIG. 6a is a left viewpoint image of the original stereoscopic image "Image 5";
FIG. 6b is a right viewpoint image of the original stereoscopic image "Image 5";
FIG. 6c is a left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.25 mm;
FIG. 6d is a right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.25 mm;
FIG. 6e is a left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.99 mm;
FIG. 6f is a right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.99 mm;
FIG. 6g is a left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 28.74 mm;
FIG. 6h is a right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 28.74 mm.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The general implementation block diagram of the stereo image zooming method provided by the invention is shown in fig. 1, and the method comprises the following steps:
Step one: the left viewpoint image, the right viewpoint image and the left parallax image of a stereoscopic image of width W and height H to be processed are correspondingly denoted as {L(x, y)}, {R(x, y)} and {dL(x, y)}; wherein W and H are both evenly divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {dL(x, y)}.
Step two: adopting the existing SIFT-Flow method to establish the matching relationship between { L (x, y) } and { R (x, y) }, obtaining SIFT-Flow vectors of each pixel point in the { L (x, y) }, and marking the SIFT-Flow vectors of the pixel points with the coordinate positions (x, y) in the { L (x, y) }asvL(x,y),Wherein,for the purpose of indicating the direction of the horizon,for the purpose of indicating the vertical direction of the,denotes vL(x, y) a horizontal offset amount,denotes vL(x, y) vertical offset.
Step three: let the coordinate position of the image principal point of { L (x, y) } be noted asLet the coordinate position of the image principal point of { R (x, y) } be noted asThen according to the coordinate position in { L (x, y) } isThe pixel point of (1) is the SIFT-Flow vector of the image principal point of { L (x, y) }, and the coordinate position in the { L (x, y) } is determined to beThe pixel point of (A) is the pixel point matched with the image principal point of (L (x, y) } in (R (x, y) }, and the coordinate position of the matched pixel point is recorded asAnd calculates the vertical deviation of { L (x, y) } and { R (x, y) }, denoted as b,wherein,denotes a coordinate position of { L (x, y) } in L (x, y) } isIs the SIFT-Flow vector of the image principal point of { L (x, y) }The amount of horizontal offset of (a),denotes a coordinate position of { L (x, y) } asThe pixel point of (1) is the SIFT-Flow vector of the image principal point of { L (x, y) }Is offset vertically.
Step four: let the focal lengths of { L (x, y) } and { R (x, y) } be f0Let the object distances of { L (x, y) } and { R (x, y) } be noted asThen, according to the focal length f specified by the user, the magnification factor of { L (x, y) } and { R (x, y) }, denoted as a,where θ is the focus specified by the userDistance f and object distances of { L (x, y) } and { R (x, y) }The determined image distance is determined based on the image distance,θ0is a focal length f of { L (x, y) } and { R (x, y) }0And object distances of { L (x, y) } and { R (x, y) }The determined image distance is determined based on the image distance,in this example take f025.00 mm of the total weight of the alloy,mm,. theta.0The user adjusts the image distance by changing the focal lengths of L (x, y) and R (x, y), thereby achieving the purpose of image zooming.
Step five: dividing { L (x, y) } into M quadrilateral grids which do not overlap with each other and have the size of 22 multiplied by 22, and marking the kth quadrilateral grid in { L (x, y) } as UL,k(ii) a Then according to all quadrilateral meshes in { L (x, y) } and { dL(x, y) }, acquiring all non-overlapping quadrilateral grids with the size of 22 multiplied by 22 in the { R (x, y) }, and marking the kth quadrilateral grid in the { R (x, y) }asUR,k(ii) a Wherein,(symbol)is a sign of a down rounding operation, k is a positive integer, k is more than or equal to 1 and less than or equal to M, UL,kDescribed by its set of 4 mesh vertices above left, below left, above right and below right, corresponds to and represents UL,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo describe the above-mentioned components in a certain way, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that,UR,kdescribed by its set of 4 mesh vertices above left, below left, above right and below right, corresponds to and represents UR,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo describe the above-mentioned components in a certain way,represents { d }LThe (x, y) } 
coordinate position isThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way, represents { dLThe (x, y) } coordinate position isThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way, represents { dL(x, y) } coordinate position ofThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo be described, the method has the advantages that, represents { d }L(x, y) } coordinate position ofThe pixel value of the pixel point.
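The left-view partition of step five can be sketched as follows, assuming M = ⌊W/22⌋ × ⌊H/22⌋ meshes and the patent's 1-based coordinates; the vertex ordering (top-left, bottom-left, top-right, bottom-right) follows the text.

```python
def make_quad_meshes(W, H, cell=22):
    """Partition a W x H image into non-overlapping cell x cell quadrilateral
    meshes; each mesh is its 4 vertices in the order 1st=top-left,
    2nd=bottom-left, 3rd=top-right, 4th=bottom-right (step five)."""
    assert W % 2 == 0 and H % 2 == 0          # W and H are divisible by 2
    cols, rows = W // cell, H // cell         # floor division gives M = cols*rows
    meshes = []
    for i in range(rows):
        for j in range(cols):
            x0, y0 = j * cell + 1, i * cell + 1        # 1-based, as in the patent
            x1, y1 = x0 + cell - 1, y0 + cell - 1
            meshes.append(((x0, y0), (x0, y1), (x1, y0), (x1, y1)))
    return meshes
```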
Step six: according to the magnification a of {L(x, y)} and {R(x, y)} and the vertical deviation b of {L(x, y)} and {R(x, y)}, a desired mesh is calculated for each quadrilateral mesh in {L(x, y)}; similarly, according to the magnification a and the vertical deviation b, a desired mesh is calculated for each quadrilateral mesh in {R(x, y)}. The desired mesh of UL,k is described by the set of its 4 mesh vertices (top-left as the 1st mesh vertex, bottom-left as the 2nd, top-right as the 3rd, bottom-right as the 4th), each corresponding to a vertex of UL,k and described by its horizontal coordinate position and vertical coordinate position; the desired mesh of UR,k is described likewise.
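The desired-mesh formulas are images in the source; one hedged reading of step six, assuming each vertex is scaled about the image principal point by the magnification a and, for the right view, additionally shifted vertically by the deviation b, is:

```python
def desired_vertex(v, a, b, principal):
    """Desired-mesh vertex (a hedged reading of step six): scale the vertex
    about the image principal point by magnification a; pass the vertical
    deviation b for the right view and b = 0 for the left view. The patent's
    exact formula is not reproduced in the source text, so this is an
    assumption."""
    px, py = principal
    x, y = v
    return (px + a * (x - px), py + a * (y - py) + b)
```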
Step seven: each quadrilateral mesh in {L(x, y)} corresponds to a target quadrilateral mesh, and likewise each quadrilateral mesh in {R(x, y)} corresponds to a target quadrilateral mesh. The target mesh of UL,k is described by the set of its 4 mesh vertices (top-left as the 1st mesh vertex, bottom-left as the 2nd, top-right as the 3rd, bottom-right as the 4th), each corresponding to a vertex of UL,k and described by its horizontal coordinate position and vertical coordinate position; the target mesh of UR,k is described likewise. The target mesh vertices are the unknowns solved for in the subsequent steps.
step eight: a user manually selects an object in a to-be-processed stereo image through editing operation; then, according to the desired grids of all quadrilateral grids in { L (x, y) } and { R (x, y) } falling in the object selected by the user, the coordinate offset energy of the target quadrilateral grid corresponding to all quadrilateral grids in { L (x, y) } and { R (x, y) } falling in the object selected by the user is calculated, and is recorded as Eobject,Wherein the symbol "| | |" is a euclidean distance-solving symbol, t is a positive integer, t is 1,2,3,4,representThe t-th mesh vertex of (2),a set of mesh vertices representing target quadrilateral meshes corresponding to all quadrilateral meshes falling within the user-selected object in L (x, y),to representThe t-th mesh vertex of (2),to representThe t-th mesh vertex of (2),a set of mesh vertices representing target quadrilateral meshes of R (x, y) corresponding to all quadrilateral meshes falling within the user-selected object,to representThe t-th mesh vertex of (1).
Step nine: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall on the boundary of the object selected by the user, the coordinate-offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall on the boundary of the object selected by the user is calculated and denoted Eedge; it is defined, in the same manner as Eobject, over the sets of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes falling on the user-selected object boundary in {L(x, y)} and in {R(x, y)}.
Step ten: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall in the background region, the background-holding energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall in the background region is calculated and denoted Eback; here, the background region is the region outside the object selected by the user in the stereoscopic image to be processed, and the energy is defined over the sets of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes falling in the background region in {L(x, y)} and in {R(x, y)}.
Step eleven: the size-control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the object selected by the user is calculated and denoted Eimport; it is defined over pairs of horizontally adjacent mesh vertices, the j-th and (j+1)-th in the horizontal direction and i-th in the vertical direction, in {L(x, y)} and in {R(x, y)}, together with their corresponding target mesh vertices; s denotes the scaling factor specified by the user, and in this embodiment s = 1 is taken, that is, the original size of the important content is maintained.
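A hedged reading of Eimport for one row of mesh vertices, assuming it penalizes the deviation of each horizontally adjacent target-vertex edge from s times the corresponding original edge; the exact formula is only an image in the source.

```python
def size_control_energy(orig_row, target_row, s=1.0):
    """Size-control energy for one row of mesh vertices (a hedged reading of
    step eleven): penalize the difference between each horizontally adjacent
    target-vertex edge and s times the corresponding original edge, so that
    with s = 1 important content keeps its original size."""
    E = 0.0
    for (o0, o1), (t0, t1) in zip(zip(orig_row, orig_row[1:]),
                                  zip(target_row, target_row[1:])):
        ex = (t1[0] - t0[0]) - s * (o1[0] - o0[0])   # horizontal edge mismatch
        ey = (t1[1] - t0[1]) - s * (o1[1] - o0[1])   # vertical edge mismatch
        E += ex * ex + ey * ey
    return E
```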
Step twelve: the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the object selected by the user is calculated and denoted Edepth; it relates the horizontal and vertical coordinate positions of corresponding target mesh vertices of the left and right views, the pixel values of {dL(x, y)} at the mesh-vertex coordinate positions, and the horizontal and vertical coordinate positions of the t-th mesh vertex of UR,k; e denotes the horizontal baseline distance between the left viewpoint and the right viewpoint of the stereoscopic image to be processed, and in this embodiment e = 176.252 mm is taken.
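A hedged reading of Edepth, assuming it keeps the horizontal offset between corresponding left/right target vertices close to the original disparity and their vertical coordinates equal; the formula image in the source may differ, so this is an assumption.

```python
def depth_consistency_energy(target_L, target_R, disparity):
    """Left-right consistency energy (a hedged reading of step twelve):
    for each pair of corresponding left/right target vertices, penalize
    deviation of their horizontal offset from the original disparity and
    any vertical misalignment."""
    E = 0.0
    for (xl, yl), (xr, yr), d in zip(target_L, target_R, disparity):
        E += ((xr - xl) - d) ** 2 + (yr - yl) ** 2
    return E
```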
Step thirteen: according to Eobject, Eedge, Eback, Eimport and Edepth, the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} is calculated and denoted Etotal, Etotal = λ1Eobject + λ2Eedge + λ3Eback + λ4Eimport + λ5Edepth; Etotal is then minimized by least-squares optimization, yielding the set of optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and the set of optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)}. Afterwards, the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)} is calculated, giving the affine transformation matrix of the optimal target quadrilateral mesh corresponding to UL,k, and likewise the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x, y)} is calculated, giving the affine transformation matrix of the optimal target quadrilateral mesh corresponding to UR,k. Here λ1, λ2, λ3, λ4 and λ5 are all weighting parameters; in this example λ1 = 3, λ2 = 4, λ3 = 2, λ4 = 4 and λ5 = 1 are taken, and min() is the minimum-taking function. Each optimal target quadrilateral mesh is described by the set of its 4 mesh vertices (top-left as the 1st mesh vertex, bottom-left as the 2nd, top-right as the 3rd, bottom-right as the 4th), each vertex described by its horizontal coordinate position and vertical coordinate position. The affine transformation matrices take the least-squares form, where (AL,k)T is the transpose of AL,k and ((AL,k)TAL,k)-1 is the inverse of (AL,k)TAL,k, and similarly (AR,k)T is the transpose of AR,k and ((AR,k)TAR,k)-1 is the inverse of (AR,k)TAR,k.
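The affine transformation matrix of step thirteen takes the normal-equation form ((A)TA)-1(A)T; a sketch fitting such a transform from a quad's 4 vertices to its optimal target vertices (solved via `lstsq`, which is numerically equivalent and more stable) is:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit the affine transform mapping a quadrilateral mesh's 4 vertices
    onto its optimal target vertices, mirroring the normal-equation form
    (A^T A)^-1 A^T of step thirteen."""
    A = np.array([[x, y, 1.0] for x, y in src])   # 4x3 design matrix
    B = np.array(dst, dtype=float)                # 4x2 target coordinates
    M, *_ = np.linalg.lstsq(A, B, rcond=None)     # 3x2: maps [x, y, 1] -> [x', y']
    return M
```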
Step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, the horizontal and vertical coordinate positions of each pixel in each quadrilateral mesh in {L(x, y)} after transformation by the affine transformation matrix are calculated; for the pixel in UL,k at horizontal coordinate position x'L,k and vertical coordinate position y'L,k, the transformed horizontal and vertical coordinate positions are recorded correspondingly. Then, according to the transformed horizontal and vertical coordinate positions of each pixel in each quadrilateral mesh in {L(x, y)}, the zoomed left viewpoint image is obtained; wherein 1 ≤ x'L,k ≤ W, 1 ≤ y'L,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the zoomed stereoscopic image, H is the height of the zoomed stereoscopic image, and the zoomed left viewpoint image holds, at coordinate position (x', y'), the pixel value of the correspondingly transformed pixel.
Similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x, y)}, the horizontal and vertical coordinate positions of each pixel in each quadrilateral mesh in {R(x, y)} after transformation by the affine transformation matrix are calculated; for the pixel in UR,k at horizontal coordinate position x'R,k and vertical coordinate position y'R,k, the transformed horizontal and vertical coordinate positions are recorded correspondingly, and the zoomed right viewpoint image is obtained accordingly; wherein 1 ≤ x'R,k ≤ W, 1 ≤ y'R,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, and the zoomed right viewpoint image holds, at coordinate position (x', y'), the pixel value of the correspondingly transformed pixel.
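The per-pixel mapping of step fourteen can be sketched as follows, assuming the fitted matrix is a 3×2 affine matrix applied to homogeneous pixel coordinates (the rendering of pixel values into the output grid is omitted):

```python
import numpy as np

def warp_mesh_pixels(M, pixels):
    """Map each pixel coordinate (x, y) of a quadrilateral mesh through the
    mesh's 3x2 affine matrix M to its zoomed position (x', y')."""
    P = np.hstack([np.asarray(pixels, dtype=float),
                   np.ones((len(pixels), 1))])   # homogeneous coords [x, y, 1]
    return P @ M                                 # one (x', y') row per pixel
```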
The zoomed stereoscopic image is composed of the zoomed left viewpoint image and the zoomed right viewpoint image.
To further illustrate the feasibility and effectiveness of the method of the present invention, the method of the present invention was tested.
Zoom experiments were performed on five stereoscopic images, "Image 1", "Image 2", "Image 3", "Image 4" and "Image 5", using the method of the present invention. Figs. 2a and 2b show the left and right viewpoint images of the original stereoscopic image of "Image 1", and figs. 2c to 2h show the left and right viewpoint images of the zoomed stereoscopic images of "Image 1" with focal lengths f of 27.25 mm, 27.99 mm and 28.74 mm, respectively; figs. 3a to 3h, 4a to 4h, 5a to 5h and 6a to 6h show the corresponding original and zoomed left and right viewpoint images for "Image 2", "Image 3", "Image 4" and "Image 5", respectively. As can be seen from fig. 2a to fig. 6h, the zoomed stereoscopic images obtained by the method of the present invention preserve object shapes well, and the size of an important object can be increased according to the user's selection.
Claims (1)
1. A stereoscopic image zooming method characterized by comprising the steps of:
step one: the left viewpoint image, the right viewpoint image and the left parallax image of the stereoscopic image of width W and height H to be processed are correspondingly denoted as {L(x, y)}, {R(x, y)} and {dL(x, y)}; wherein W and H are both evenly divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel at coordinate position (x, y) in {dL(x, y)};
step two: an SIFT-Flow method is adopted to establish the matching relationship between {L(x, y)} and {R(x, y)} and obtain the SIFT-Flow vector of each pixel in {L(x, y)}; the SIFT-Flow vector of the pixel at coordinate position (x, y) in {L(x, y)} is denoted vL(x, y), a two-component vector whose horizontal component denotes the horizontal offset of vL(x, y) and whose vertical component denotes the vertical offset of vL(x, y);
step three: the coordinate position of the image principal point of {L(x, y)} is recorded, and the coordinate position of the image principal point of {R(x, y)} is recorded; then, according to the SIFT-Flow vector of the pixel of {L(x, y)} located at the image principal point of {L(x, y)}, the pixel in {R(x, y)} matched with the image principal point of {L(x, y)} is determined, the coordinate position of the matched pixel is recorded, and the vertical deviation of {L(x, y)} and {R(x, y)}, denoted b, is calculated; the matched position is given by the horizontal and vertical offsets of the SIFT-Flow vector at the principal point, and b is obtained from the vertical offset of that vector;
step four: the focal length of {L(x, y)} and {R(x, y)} is denoted f0, and the object distance of {L(x, y)} and {R(x, y)} is recorded; then, according to the focal length f specified by the user, the magnification of {L(x, y)} and {R(x, y)}, denoted a, is calculated from the image distances, where θ is the image distance determined by the user-specified focal length f and the object distance of {L(x, y)} and {R(x, y)}, and θ0 is the image distance determined by the focal length f0 and the object distance of {L(x, y)} and {R(x, y)};
step five: dividing { L (x, y) } into M quadrilateral grids with the size of 22 × 22 and not overlapping with each other, and marking the kth quadrilateral grid in { L (x, y) } as UL,k(ii) a Then according to the sum of all quadrilateral meshes in { L (x, y) } and { d }L(x, y) }, acquiring all non-overlapping quadrilateral grids with the size of 22 multiplied by 22 in the { R (x, y) }, and marking the kth quadrilateral grid in the { R (x, y) }asUR,k(ii) a Wherein,(symbol)is a sign of a down rounding operation, k is a positive integer, k is more than or equal to 1 and less than or equal to M, UL,kDescribed by its set of 4 mesh vertices above left, below left, above right and below right, corresponds to and represents UL,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo describe the above-mentioned components in a certain way, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way,UR,kdescribed by its set of 4 mesh vertices above left, below left, above right and below right, corresponds to and represents UR,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that,represents { dLThe (x, y) } coordinate 
position isThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that, represents { d }LThe (x, y) } coordinate position isThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position ofAnd vertical coordinate positionCome and drawIn the above-mentioned manner, represents { dL(x, y) } coordinate position ofThe pixel value of the pixel point of (a),to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo describe the above-mentioned components in a certain way, represents { dLThe (x, y) } coordinate position isThe pixel value of the pixel point of (1);
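The mesh partition of step five can be sketched as below. Function names are illustrative; the vertex ordering follows the step's 1st = upper-left, 2nd = lower-left, 3rd = upper-right, 4th = lower-right convention.

```python
def mesh_origins(W, H, size=22):
    """Top-left corners of the M = floor(W/size) * floor(H/size)
    non-overlapping size x size quadrilateral meshes of step five."""
    return [(x, y)
            for y in range(0, (H // size) * size, size)
            for x in range(0, (W // size) * size, size)]

def mesh_vertices(x0, y0, size=22):
    """4 vertices of one mesh, in step five's order:
    1st upper-left, 2nd lower-left, 3rd upper-right, 4th lower-right."""
    return [(x0, y0), (x0, y0 + size), (x0 + size, y0), (x0 + size, y0 + size)]
```

For example, a 100 × 66 image yields ⌊100/22⌋ × ⌊66/22⌋ = 4 × 3 = 12 meshes; the right-most and bottom-most pixels that do not fill a whole 22 × 22 cell are left out of the partition.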
step six: the magnification a according to { L (x, y) } and { R (x, y) } and { L (x,y) and the vertical deviation b of { R (x, y) }, calculating a desired grid for each quadrilateral grid in { L (x, y) }, and calculating U from the desired gridL,kIs marked asSimilarly, a desired grid of each quadrangular grid in { R (x, y) } is calculated from the magnification a of { L (x, y) } and { R (x, y) } and the vertical deviation b of { L (x, y) } and { R (x, y) }, U is calculatedR,kIs marked asWherein,described by its set of 4 mesh vertices above left, below left, above right and below right, corresponding representationThe left upper grid vertex as the 1 st grid vertex, the left lower grid vertex as the 2 nd grid vertex, the right upper grid vertex as the 3 rd grid vertex, and the right lower grid vertex as the 4 th grid vertex of (c) also represent the correspondingThe respective vertex of the desired mesh is,to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that, described by its set of 4 mesh vertices above left, below left, above right and below right, corresponding representationThe left upper grid vertex as the 1 st grid vertex, the left lower grid vertex as the 2 nd grid vertex, the right upper grid vertex as the 3 rd grid vertex, and the right lower grid vertex as the 4 th grid vertex of (c) also represent the correspondingThe respective vertex of the desired mesh is,to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be 
described, the method has the advantages that, to be provided withHorizontal coordinate position ofAnd vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo be described, the method has the advantages that, to be provided withHorizontal coordinate position of (2)And vertical coordinate positionTo describe the above-mentioned components in a certain way,
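A hedged sketch of the desired-vertex computation in step six. The centre-of-scaling choice and the exact way the vertical deviation b enters are assumptions, since the patent gives the formulas only as unreproduced equation images; what is stated in the text is only that the desired mesh is computed from the magnification a and the vertical deviation b.

```python
def desired_vertex(x, y, a, b, W, H, right_view=False):
    """Desired vertex position (step six, illustrative form): scale (x, y)
    by the magnification a about the image centre; for the right view,
    additionally compensate the vertical deviation b."""
    cx, cy = W / 2.0, H / 2.0
    xd = cx + a * (x - cx)
    yd = cy + a * (y - cy) + (b if right_view else 0.0)
    return xd, yd
```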
step seven: each quadrilateral mesh in { L (x, y) } corresponds to a target quadrilateral mesh, and the target quadrilateral mesh corresponding to UL,k is denoted ÛL,k; similarly, each quadrilateral mesh in { R (x, y) } corresponds to a target quadrilateral mesh, and the target quadrilateral mesh corresponding to UR,k is denoted ÛR,k; wherein ÛL,k and ÛR,k are each described by the set of their 4 mesh vertices, the upper-left vertex being the 1st mesh vertex, the lower-left vertex the 2nd mesh vertex, the upper-right vertex the 3rd mesh vertex and the lower-right vertex the 4th mesh vertex, each target mesh vertex corresponding to the vertex of UL,k (respectively UR,k) with the same index and being described by its horizontal coordinate position and its vertical coordinate position; the coordinate positions of the target mesh vertices are the unknowns solved for in step thirteen;
step eight: a user manually selects an object in the to-be-processed stereoscopic image through an editing operation; then, according to the desired meshes of all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the user-selected object, calculate the coordinate offset energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the user-selected object, denoted Eobject, Eobject = Σ over ΩL Σt=1..4 ‖v̂t − v̄t‖² + Σ over ΩR Σt=1..4 ‖v̂t − v̄t‖², wherein the symbol "‖ ‖" denotes the Euclidean distance, t is a positive integer with t = 1, 2, 3, 4, v̂t denotes the t-th mesh vertex of a target quadrilateral mesh, v̄t denotes the t-th mesh vertex of its corresponding desired mesh, ΩL denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } that fall within the user-selected object, and ΩR denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { R (x, y) } that fall within the user-selected object;
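The squared-vertex-offset form shared by the energies of steps eight through ten can be sketched directly. The function name is illustrative; each mesh is a list of 4 (x, y) vertices.

```python
def offset_energy(target_meshes, desired_meshes):
    """Sum over meshes and over their 4 vertices of the squared Euclidean
    distance between target and desired vertex positions (the form of
    E_object in step eight, and of E_edge / E_back over other mesh sets)."""
    total = 0.0
    for tgt, des in zip(target_meshes, desired_meshes):
        for (xt, yt), (xd, yd) in zip(tgt, des):
            total += (xt - xd) ** 2 + (yt - yd) ** 2
    return total
```

The energy is zero exactly when every target vertex coincides with its desired vertex.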
step nine: according to the desired meshes of all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall on the boundary of the user-selected object, calculate the coordinate offset energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall on the boundary of the user-selected object, denoted Eedge; Eedge has the same squared-vertex-offset form as Eobject, taken over ΨL and ΨR, wherein ΨL denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } that fall on the user-selected object boundary, and ΨR denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { R (x, y) } that fall on the user-selected object boundary;
step ten: according to the desired meshes of all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the background region, calculate the background holding energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the background region, denoted Eback; Eback has the same squared-vertex-offset form, taken over ΦL and ΦR; wherein the background region is the region of the to-be-processed stereoscopic image other than the region in which the user-selected object is located, ΦL denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } that fall within the background region, and ΦR denotes the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { R (x, y) } that fall within the background region;
step eleven: calculate the size control energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the user-selected object, denoted Eimport, Eimport = Σ ‖(v̂L,i,j+1 − v̂L,i,j) − s·(vL,i,j+1 − vL,i,j)‖² + Σ ‖(v̂R,i,j+1 − v̂R,i,j) − s·(vR,i,j+1 − vR,i,j)‖², wherein vL,i,j denotes the mesh vertex of { L (x, y) } that is j-th in the horizontal direction and i-th in the vertical direction, vL,i,j+1 denotes the mesh vertex of { L (x, y) } that is (j+1)-th in the horizontal direction and i-th in the vertical direction, v̂L,i,j and v̂L,i,j+1 denote the corresponding target mesh vertices, vR,i,j, vR,i,j+1, v̂R,i,j and v̂R,i,j+1 are defined likewise for { R (x, y) }, and s denotes a user-specified scaling factor;
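The size control term of step eleven compares each horizontally adjacent target vertex pair against s times the original edge vector; a minimal sketch (function name illustrative, each argument a list of ((x, y), (x, y)) vertex pairs):

```python
def size_control_energy(orig_pairs, target_pairs, s):
    """E_import form (step eleven): for each horizontally adjacent vertex
    pair, penalise the squared difference between the target-edge vector
    and s times the original-edge vector."""
    total = 0.0
    for (p, q), (pt, qt) in zip(orig_pairs, target_pairs):
        ex, ey = qt[0] - pt[0], qt[1] - pt[1]      # target edge vector
        ox, oy = s * (q[0] - p[0]), s * (q[1] - p[1])  # scaled original edge
        total += (ex - ox) ** 2 + (ey - oy) ** 2
    return total
```

The energy vanishes when every target edge is exactly s times its original edge, i.e. the selected object is uniformly rescaled by s.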
step twelve: calculate the left-right consistency energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) } that fall within the user-selected object, denoted Edepth; Edepth constrains, for each pair of corresponding target mesh vertices, the vertical coordinate position of the left-view target vertex and the vertical coordinate position of the right-view target vertex to remain equal, and the horizontal coordinate difference between them to remain consistent with the disparity given by the pixel value of { dL(x, y) } at the coordinate position of the t-th mesh vertex of UR,k and with the baseline; wherein x̂L,t denotes the horizontal coordinate position of the t-th target mesh vertex of ÛL,k, x̂R,t denotes the horizontal coordinate position of the t-th target mesh vertex of ÛR,k, ŷL,t and ŷR,t denote the corresponding vertical coordinate positions, and e denotes the horizontal baseline distance between the left viewpoint and the right viewpoint of the to-be-processed stereoscopic image;
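A heavily hedged sketch of a left-right consistency term in the spirit of step twelve. The exact weighting and the role of the baseline e in the patent are given only as unreproduced equation images; here the sketch simply keeps corresponding vertices on the same scanline and keeps their horizontal disparity proportional (by the magnification a) to the original disparity.

```python
def lr_consistency_energy(left_vs, right_vs, orig_disp, a):
    """Illustrative E_depth form: per corresponding vertex pair, penalise
    deviation of the target disparity from a * original disparity, and any
    vertical misalignment between the two views."""
    total = 0.0
    for (xl, yl), (xr, yr), d in zip(left_vs, right_vs, orig_disp):
        total += ((xl - xr) - a * d) ** 2 + (yl - yr) ** 2
    return total
```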
step thirteen: according to Eobject, Eedge, Eback, Eimport and Edepth, calculate the total energy of the target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and { R (x, y) }, denoted Etotal, Etotal = λ1·Eobject + λ2·Eedge + λ3·Eback + λ4·Eimport + λ5·Edepth; then solve min(Etotal) by least-squares optimization over the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and the set of target quadrilateral meshes corresponding to all the quadrilateral meshes in { R (x, y) }, obtaining the set of optimal target quadrilateral meshes corresponding to all the quadrilateral meshes in { L (x, y) } and the set of optimal target quadrilateral meshes corresponding to all the quadrilateral meshes in { R (x, y) }; the optimal target quadrilateral mesh corresponding to UL,k is denoted Û*L,k and the optimal target quadrilateral mesh corresponding to UR,k is denoted Û*R,k; next, calculate the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in { L (x, y) }: the affine transformation matrix of Û*L,k is obtained in the least-squares form ((AL,k)T AL,k)−1 (AL,k)T bL,k, wherein AL,k is built from the horizontal and vertical coordinate positions of the 4 mesh vertices of UL,k and bL,k from the horizontal and vertical coordinate positions of the 4 mesh vertices of Û*L,k; similarly, the affine transformation matrix of the optimal target quadrilateral mesh Û*R,k corresponding to UR,k is obtained as ((AR,k)T AR,k)−1 (AR,k)T bR,k; wherein λ1, λ2, λ3, λ4 and λ5 are all weighting parameters, min() is the minimum-taking function, (AL,k)T is the transpose of AL,k, ((AL,k)T AL,k)−1 is the inverse of (AL,k)T AL,k, (AR,k)T is the transpose of AR,k, ((AR,k)T AR,k)−1 is the inverse of (AR,k)T AR,k, and Û*L,k and Û*R,k are each described by the set of their 4 mesh vertices in the same upper-left, lower-left, upper-right, lower-right order, each vertex being described by its horizontal coordinate position and its vertical coordinate position;
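The per-mesh affine fit of step thirteen uses the normal-equation form ((AᵀA)⁻¹Aᵀ) that survives in the text; a minimal sketch (the parameter layout and function name are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform T with dst ~= T(src), via the normal
    equations ((A^T A)^-1 A^T b), matching the form in step thirteen.
    src, dst: lists of 4 (x, y) mesh vertices.
    Returns (a11, a12, tx, a21, a22, ty)."""
    A = np.array([[x, y, 1.0] for x, y in src])
    bx = np.array([x for x, _ in dst])
    by = np.array([y for _, y in dst])
    pseudo = np.linalg.inv(A.T @ A) @ A.T   # (A^T A)^-1 A^T
    a11, a12, tx = pseudo @ bx
    a21, a22, ty = pseudo @ by
    return a11, a12, tx, a21, a22, ty
```

Fitting a 22 × 22 mesh to its uniformly doubled target recovers a pure scale of 2 with zero translation.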
step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in { L (x, y) }, calculate the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in { L (x, y) } after transformation by the affine transformation matrix; for the pixel point in UL,k whose horizontal coordinate position is x'L,k and whose vertical coordinate position is y'L,k, denote the horizontal coordinate position and the vertical coordinate position after transformation by the affine transformation matrix correspondingly as x' and y'; then, according to the horizontal coordinate positions and the vertical coordinate positions of all the pixel points in each quadrilateral mesh in { L (x, y) } after transformation by the affine transformation matrix, obtain the zoomed left viewpoint image, denoted { L'(x', y') }; wherein 1 ≤ x'L,k ≤ W, 1 ≤ y'L,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the zoomed stereoscopic image, H is also the height of the zoomed stereoscopic image, and L'(x', y') denotes the pixel value of the pixel point in { L'(x', y') } whose coordinate position is (x', y');
similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in { R (x, y) }, calculate the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in { R (x, y) } after transformation by the affine transformation matrix; for the pixel point in UR,k whose horizontal coordinate position is x'R,k and whose vertical coordinate position is y'R,k, denote the horizontal coordinate position and the vertical coordinate position after transformation by the affine transformation matrix correspondingly as x' and y'; then, according to the horizontal coordinate positions and the vertical coordinate positions of all the pixel points in each quadrilateral mesh in { R (x, y) } after transformation by the affine transformation matrix, obtain the zoomed right viewpoint image, denoted { R'(x', y') }; wherein 1 ≤ x'R,k ≤ W, 1 ≤ y'R,k ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, and R'(x', y') denotes the pixel value of the pixel point in { R'(x', y') } whose coordinate position is (x', y');
the zoomed stereoscopic image is composed of the zoomed left viewpoint image and the zoomed right viewpoint image.
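The forward pixel mapping of step fourteen can be sketched with the six affine parameters per mesh; the function name and parameter layout are illustrative (in practice the transformed coordinates would be rounded or interpolated onto the output pixel grid).

```python
def warp_mesh_pixels(pixels, T):
    """Apply one mesh's affine parameters (a11, a12, tx, a21, a22, ty) to
    every pixel coordinate of that quadrilateral mesh (forward mapping,
    as in step fourteen)."""
    a11, a12, tx, a21, a22, ty = T
    return [(a11 * x + a12 * y + tx, a21 * x + a22 * y + ty)
            for x, y in pixels]
```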
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011417735.6A CN112702590B (en) | 2020-12-07 | 2020-12-07 | Three-dimensional image zooming method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112702590A CN112702590A (en) | 2021-04-23 |
CN112702590B true CN112702590B (en) | 2022-07-22 |
Family
ID=75506334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011417735.6A Active CN112702590B (en) | 2020-12-07 | 2020-12-07 | Three-dimensional image zooming method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112702590B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002054913A (en) * | 2000-08-11 | 2002-02-20 | Minolta Co Ltd | Three-dimensional data generating system and projector |
CN103270760A (en) * | 2010-12-23 | 2013-08-28 | 美泰有限公司 | Method and system for disparity adjustment during stereoscopic zoom |
CN103955960A (en) * | 2014-03-21 | 2014-07-30 | 南京大学 | Image viewpoint transformation method based on single input image |
CN104301704A (en) * | 2013-07-17 | 2015-01-21 | 宏达国际电子股份有限公司 | Content-aware display adaptation methods |
CN107945151A (en) * | 2017-10-26 | 2018-04-20 | 宁波大学 | A kind of reorientation image quality evaluating method based on similarity transformation |
CN108810512A (en) * | 2018-04-24 | 2018-11-13 | 宁波大学 | A kind of object-based stereo-picture depth method of adjustment |
CN109413404A (en) * | 2018-09-06 | 2019-03-01 | 宁波大学 | A kind of stereo-picture Zooming method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10679370B2 (en) * | 2015-02-13 | 2020-06-09 | Carnegie Mellon University | Energy optimized imaging system with 360 degree field-of-view |
Non-Patent Citations (2)
Title |
---|
An Energy-Constrained Video Retargeting Approach for Color-Plus-Depth 3D Video; Feng Shao; Journal of Display Technology; 2015-12-19; full text * |
Stereoscopic image content recombination based on mesh deformation; Chai Xiongli; Image Processing and Coding; 2019-04; full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102741879B (en) | Method for generating depth maps from monocular images and systems using the same | |
JP5068391B2 (en) | Image processing device | |
WO2013005365A1 (en) | Image processing apparatus, image processing method, program, and integrated circuit | |
US20130063571A1 (en) | Image processing apparatus and image processing method | |
US10778955B2 (en) | Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery | |
RU2690757C1 (en) | System for synthesis of intermediate types of light field and method of its operation | |
CN102098528B (en) | Method and device for converting planar image into stereoscopic image | |
JP6610535B2 (en) | Image processing apparatus and image processing method | |
CN105721768A (en) | Method and apparatus for generating adapted slice image from focal stack | |
CN104093013A (en) | Method for automatically regulating image parallax in stereoscopic vision three-dimensional visualization system | |
Park et al. | Efficient viewer-centric depth adjustment based on virtual fronto-parallel planar projection in stereo 3D images | |
CN110958442B (en) | Method and apparatus for processing holographic image data | |
US10506177B2 (en) | Image processing device, image processing method, image processing program, image capture device, and image display device | |
CN109600667A (en) | A method of the video based on grid and frame grouping redirects | |
CN108810512B (en) | A kind of object-based stereo-picture depth method of adjustment | |
CN112702590B (en) | Three-dimensional image zooming method | |
CN108307170B (en) | A kind of stereo-picture method for relocating | |
JP2017143354A (en) | Image processing apparatus and image processing method | |
CN109413404B (en) | A kind of stereo-picture Zooming method | |
CN112449170B (en) | Stereo video repositioning method | |
CN106910253B (en) | Stereo image cloning method based on different camera distances | |
CN113240573B (en) | High-resolution image style transformation method and system for local and global parallel learning | |
CN114092316A (en) | Image processing method, apparatus and storage medium | |
CN108833876B (en) | A kind of stereoscopic image content recombination method | |
Yan et al. | Stereoscopic image generation from light field with disparity scaling and super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||