CN112702590B - Three-dimensional image zooming method - Google Patents


Info

Publication number
CN112702590B
Authority
CN
China
Prior art keywords
coordinate position
quadrilateral
mesh
vertex
grid
Prior art date
Legal status
Active
Application number
CN202011417735.6A
Other languages
Chinese (zh)
Other versions
CN112702590A (en)
Inventor
邵枫
周莘
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Priority date
Application filed by Ningbo University
Priority: CN202011417735.6A
Publication of CN112702590A
Application granted
Publication of CN112702590B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 — Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 — Processing image signals
    • H04N13/128 — Adjusting depth or disparity

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional image zooming method. The method extracts the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling on the boundary of an object selected by the user, the background holding energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling in the background region, and the size control energy and left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling inside the selected object. It then minimizes the total energy by optimization to obtain the optimal target quadrilateral mesh for each quadrilateral mesh in the left- and right-viewpoint images, and produces the zoomed stereoscopic image from the affine transformation matrix of each optimal target quadrilateral mesh. The zoomed stereoscopic image preserves accurate object shape and the intended target focusing depth, conveys the immersion of close-range viewing, offers a stronger sense of depth, and achieves higher three-dimensional viewing quality.

Description

Stereo image zooming method
Technical Field
The present invention relates to a method for processing an image signal, and more particularly, to a method for zooming a stereoscopic image.
Background
With the rapid development of 3D technology, stereoscopic images and videos have attracted increasing attention. In particular, with the spread of mobile phones, tablets, and personal computers, display on mobile terminals has become ever more popular. However, when stereoscopic images and videos are shown on a mobile terminal screen, the stereoscopic effect may weaken or even disappear; film producers therefore try to strengthen the stereoscopic impression of a specific object by adjusting its size and depth so that the viewer focuses on it. For stereoscopic content displayed on mobile screens, the attention drawn to an object and the sense of depth can thus be enhanced by adjusting the camera's depth of focus.
There are broadly two approaches to adjusting the focusing depth of a stereoscopic image: depth adjustment with a depth map and depth adjustment without one. The former requires an accurate depth map and synthesizes the depth-adjusted stereoscopic image with virtual-viewpoint rendering. The latter adjusts depth directly by moving pixels in the stereoscopic image, but often produces holes or deforms objects after the adjustment. How to reduce image deformation after depth adjustment, and how to control the adjustment of an object according to a user-specified camera focal length so as to highlight salient content, are therefore problems that must be studied and solved when adjusting the focusing depth of a stereoscopic image.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stereoscopic image zooming method whose zoomed stereoscopic image conveys the immersion of close-range viewing, offers a stronger sense of depth, and achieves higher three-dimensional viewing quality.
The technical scheme adopted by the invention to solve this problem is a stereoscopic image zooming method comprising the following steps:
Step one: denote the left-viewpoint image, right-viewpoint image, and left-viewpoint disparity image of the stereoscopic image to be processed, of width W and height H, as {L(x, y)}, {R(x, y)}, and {dL(x, y)} respectively, where W and H are divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x, y) denotes the pixel value of the pixel at coordinate (x, y) in {L(x, y)}, R(x, y) denotes the pixel value of the pixel at coordinate (x, y) in {R(x, y)}, and dL(x, y) denotes the pixel value of the pixel at coordinate (x, y) in {dL(x, y)}.
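As a concrete picture of the notation in step one, the three images can be held as arrays; the sketch below uses NumPy with illustrative dimensions and random pixel data (the sizes and values are hypothetical, not from the patent):

```python
import numpy as np

# Hypothetical stereo pair: left view L, right view R, and left-viewpoint
# disparity map dL, all of width W and height H (indexed [y, x] in NumPy).
W, H = 640, 480
rng = np.random.default_rng(0)
L = rng.integers(0, 256, size=(H, W), dtype=np.uint8)   # {L(x, y)}
R = rng.integers(0, 256, size=(H, W), dtype=np.uint8)   # {R(x, y)}
dL = rng.uniform(0.0, 30.0, size=(H, W))                # {dL(x, y)}

# The method assumes W and H are divisible by 2.
assert W % 2 == 0 and H % 2 == 0
```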
step two: adopting an SIFT-Flow method to establish a matching relation between { L (x, y) } and { R (x, y) }, obtaining an SIFT-Flow vector of each pixel point in the { L (x, y) }, and marking the SIFT-Flow vector of the pixel point with the coordinate position (x, y) in the { L (x, y) }asvL(x,y),
Figure BDA0002820738420000021
Wherein,
Figure BDA0002820738420000022
for the purpose of indicating the horizontal direction,
Figure BDA0002820738420000023
for the purpose of indicating the vertical direction of the,
Figure BDA0002820738420000024
denotes vLThe horizontal offset of (x, y),
Figure BDA0002820738420000025
denotes vL(x, y) vertical offset;
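Step two relies on the published SIFT-Flow method for dense matching. As a toy stand-in that only illustrates the shape of the output (a per-pixel horizontal and vertical offset field), the sketch below recovers a single global horizontal shift by brute-force SSD search; it is not SIFT-Flow itself:

```python
import numpy as np

# Toy stand-in for SIFT-Flow: recover a purely horizontal shift between two
# views by testing integer displacements and keeping the one with the
# smallest sum of squared differences, then emitting per-pixel offset
# fields (vx, vy) in the same format a dense flow would produce.
def dense_flow_global_shift(L, R, max_shift=8):
    H, W = L.shape
    best_dx, best_err = 0, np.inf
    for dx in range(-max_shift, max_shift + 1):
        shifted = np.roll(R, -dx, axis=1)   # undo a candidate shift of R
        err = np.sum((L.astype(float) - shifted.astype(float)) ** 2)
        if err < best_err:
            best_dx, best_err = dx, err
    # vL(x, y) = (horizontal offset, vertical offset) for every pixel
    vx = np.full((H, W), best_dx, dtype=float)
    vy = np.zeros((H, W), dtype=float)
    return vx, vy

L = np.tile(np.arange(32) % 7, (16, 1)).astype(float)
R = np.roll(L, 3, axis=1)        # right view = left view shifted by 3 px
vx, vy = dense_flow_global_shift(L, R)
```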
step three: let the coordinate position of the image principal point of { L (x, y) } be noted as
Figure BDA0002820738420000026
Let the coordinate position of the image principal point of { R (x, y) } be noted as
Figure BDA0002820738420000027
Then according to the coordinate position in { L (x, y) } as
Figure BDA0002820738420000028
The pixel point of (1) is the SIFT-Flow vector of the image principal point of (L (x, y) }, and the coordinate position in (L (x, y) } is determined to be
Figure BDA0002820738420000029
The pixel point of (1) is the pixel point matched with the image principal point of { L (x, y) } in { R (x, y) }, and the coordinate position of the matched pixel point is recorded as
Figure BDA00028207384200000210
And calculates the vertical deviation of { L (x, y) } and { R (x, y) }, denoted as b,
Figure BDA00028207384200000211
wherein,
Figure BDA00028207384200000212
denotes a coordinate position of { L (x, y) } in L (x, y) } is
Figure BDA00028207384200000213
Is the SIFT-Flow vector of the image principal point of { L (x, y) }
Figure BDA00028207384200000214
The amount of horizontal offset of (a),
Figure BDA00028207384200000215
denotes a coordinate position of { L (x, y) } as
Figure BDA00028207384200000216
Is the SIFT-Flow vector of the image principal point of { L (x, y) }
Figure BDA00028207384200000217
A vertical offset of (d);
step four: let the focal lengths of { L (x, y) } and { R (x, y) } be f0Object distances of { L (x, y) } and { R (x, y) } are denoted as
Figure BDA00028207384200000218
Then, according to the focal length f specified by the user, the magnification of { L (x, y) } and { R (x, y) } is calculated, denoted as a,
Figure BDA00028207384200000219
where θ is the focal length f specified by the user and the object distance of { L (x, y) } and { R (x, y) }
Figure BDA0002820738420000031
The determined image distance is determined based on the image distance,
Figure BDA0002820738420000032
θ0is a focal length f of { L (x, y) } and { R (x, y) }0And object distances of { L (x, y) } and { R (x, y) }
Figure BDA0002820738420000033
The determined image distance is determined based on the image distance,
Figure BDA0002820738420000034
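Assuming the standard thin-lens relation 1/f = 1/Z + 1/θ, the image distances and the magnification a of step four can be sketched as follows (the numeric values are illustrative):

```python
# Sketch of step four under the thin-lens relation 1/f = 1/Z + 1/theta,
# which gives the image distance theta = f*Z / (Z - f). The magnification a
# is the ratio of the image distance at the user-specified focal length f
# to that at the capture focal length f0.
def magnification(f, f0, Z):
    theta = f * Z / (Z - f)      # image distance for user focal length f
    theta0 = f0 * Z / (Z - f0)   # image distance for capture focal length f0
    return theta / theta0

# Illustrative values in millimetres: zooming from 50 mm to 70 mm at 5 m.
a = magnification(f=70.0, f0=50.0, Z=5000.0)   # a > 1 means zooming in
```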
step five: dividing { L (x, y) } into M quadrilateral grids which do not overlap with each other and have the size of 22 multiplied by 22, and marking the kth quadrilateral grid in { L (x, y) } as UL,k(ii) a Then according to all quadrilateral meshes in { L (x, y) } and { dL(x, y) }, obtaining { R (x, y) }) All the quadrilateral grids 22 × 22 in size, which do not overlap with each other, in (R (x, y) }, the kth quadrilateral grid is marked as UR,k(ii) a Wherein,
Figure BDA0002820738420000035
(symbol)
Figure BDA0002820738420000036
is a rounded-down operation sign, k is a positive integer, k is more than or equal to 1 and less than or equal to M, UL,kDescribed by its set of 4 mesh vertices above left, below left, above right and below right,
Figure BDA0002820738420000037
Figure BDA0002820738420000038
corresponds to and represents UL,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,
Figure BDA0002820738420000039
to be provided with
Figure BDA00028207384200000310
Horizontal coordinate position of
Figure BDA00028207384200000311
And vertical coordinate position
Figure BDA00028207384200000312
To be described, the method has the advantages that,
Figure BDA00028207384200000313
Figure BDA00028207384200000314
to be provided with
Figure BDA00028207384200000315
Level of (2)Coordinate position
Figure BDA00028207384200000316
And vertical coordinate position
Figure BDA00028207384200000317
To be described, the method has the advantages that,
Figure BDA00028207384200000318
Figure BDA00028207384200000319
to be provided with
Figure BDA00028207384200000320
Horizontal coordinate position of
Figure BDA00028207384200000321
And vertical coordinate position
Figure BDA00028207384200000322
To be described, the method has the advantages that,
Figure BDA00028207384200000323
Figure BDA00028207384200000324
to be provided with
Figure BDA00028207384200000325
Horizontal coordinate position of
Figure BDA00028207384200000326
And vertical coordinate position
Figure BDA00028207384200000327
To be described, the method has the advantages that,
Figure BDA00028207384200000328
UR,kdescribed by its set of 4 mesh vertices above left, below left, above right and below right,
Figure BDA00028207384200000329
Figure BDA00028207384200000330
corresponds to and represents UR,kA left upper grid vertex as a 1 st grid vertex, a left lower grid vertex as a 2 nd grid vertex, a right upper grid vertex as a 3 rd grid vertex, a right lower grid vertex as a 4 th grid vertex,
Figure BDA00028207384200000331
to be provided with
Figure BDA00028207384200000332
Horizontal coordinate position of (2)
Figure BDA00028207384200000333
And vertical coordinate position
Figure BDA00028207384200000334
To be described, the method has the advantages that,
Figure BDA00028207384200000335
represents { dL(x, y) } coordinate position of
Figure BDA00028207384200000336
The pixel value of the pixel point of (a),
Figure BDA00028207384200000337
to be provided with
Figure BDA00028207384200000338
Horizontal coordinate position of (2)
Figure BDA0002820738420000041
And vertical coordinate position
Figure BDA0002820738420000042
To be described, the method has the advantages that,
Figure BDA0002820738420000043
Figure BDA0002820738420000044
represents { dL(x, y) } coordinate position of
Figure BDA0002820738420000045
The pixel value of the pixel point of (a),
Figure BDA0002820738420000046
to be provided with
Figure BDA0002820738420000047
Horizontal coordinate position of (2)
Figure BDA0002820738420000048
And vertical coordinate position
Figure BDA0002820738420000049
To be described, the method has the advantages that,
Figure BDA00028207384200000410
Figure BDA00028207384200000411
represents { dLThe (x, y) } coordinate position is
Figure BDA00028207384200000412
The pixel value of the pixel point of (a),
Figure BDA00028207384200000413
to be provided with
Figure BDA00028207384200000414
Horizontal coordinate position of (2)
Figure BDA00028207384200000415
And vertical coordinate position
Figure BDA00028207384200000416
To describe the above-mentioned components in a certain way,
Figure BDA00028207384200000417
Figure BDA00028207384200000418
represents { d }L(x, y) } coordinate position of
Figure BDA00028207384200000419
The pixel value of the pixel point of (1);
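The grid division of step five can be sketched directly; the helper below builds the M = ⌊W/22⌋ × ⌊H/22⌋ meshes, each described by its four vertices in the order used in the text (upper-left, lower-left, upper-right, lower-right):

```python
# Sketch of step five: divide an image of width W and height H into
# non-overlapping 22x22 quadrilateral meshes. M = floor(W/22) * floor(H/22).
# Each mesh U_{L,k} is described by its four vertices, listed in the order
# upper-left, lower-left, upper-right, lower-right (1-based coordinates,
# matching the 1 <= x <= W, 1 <= y <= H convention of the text).
def make_meshes(W, H, cell=22):
    nx, ny = W // cell, H // cell          # floor division = rounding down
    meshes = []
    for i in range(ny):
        for j in range(nx):
            x0, y0 = j * cell + 1, i * cell + 1
            x1, y1 = x0 + cell - 1, y0 + cell - 1
            meshes.append(((x0, y0), (x0, y1), (x1, y0), (x1, y1)))
    return meshes

meshes = make_meshes(W=660, H=440)
M = len(meshes)    # floor(660/22) * floor(440/22) = 30 * 20 meshes
```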
step six: calculating a desired grid for each quadrilateral grid in { L (x, y) } from the magnification a of { L (x, y) } and { R (x, y) } and the vertical deviation b of { L (x, y) } and { R (x, y) }, calculating U for each quadrilateral grid in { L (x, y) }, and calculating U for each quadrilateral grid in { L (x, y) }L,kIs marked as
Figure BDA00028207384200000420
Similarly, a desired grid of each quadrangular grid in { R (x, y) } is calculated from the magnification a of { L (x, y) } and { R (x, y) } and the vertical deviation b of { L (x, y) } and { R (x, y) }, U is calculatedR,kIs marked as
Figure BDA00028207384200000421
Wherein,
Figure BDA00028207384200000422
described by its set of 4 mesh vertices above left, below left, above right and below right,
Figure BDA00028207384200000423
Figure BDA00028207384200000424
corresponding representation
Figure BDA00028207384200000425
The left upper grid vertex as the 1 st grid vertex, the left lower grid vertex as the 2 nd grid vertex, the right upper grid vertex as the 3 rd grid vertex, and the right lower grid vertex as the 4 th grid vertex of (c) also represent the corresponding
Figure BDA00028207384200000426
The respective vertices of the desired mesh are,
Figure BDA00028207384200000427
to be provided with
Figure BDA00028207384200000428
Horizontal coordinate position of (2)
Figure BDA00028207384200000429
And vertical coordinate position
Figure BDA00028207384200000430
To describe the above-mentioned components in a certain way,
Figure BDA00028207384200000431
Figure BDA00028207384200000432
to be provided with
Figure BDA00028207384200000433
Horizontal coordinate position of (2)
Figure BDA00028207384200000434
And vertical coordinate position
Figure BDA00028207384200000435
To be described, the method has the advantages that,
Figure BDA00028207384200000436
Figure BDA00028207384200000437
Figure BDA00028207384200000438
to be provided with
Figure BDA00028207384200000439
Horizontal coordinate position of
Figure BDA00028207384200000440
And vertical coordinatePosition of
Figure BDA00028207384200000441
To describe the above-mentioned components in a certain way,
Figure BDA00028207384200000442
Figure BDA00028207384200000443
to be provided with
Figure BDA00028207384200000444
Horizontal coordinate position of
Figure BDA00028207384200000445
And vertical coordinate position
Figure BDA0002820738420000051
To describe the above-mentioned components in a certain way,
Figure BDA0002820738420000052
Figure BDA0002820738420000053
described by its set of 4 mesh vertices above left, below left, above right and below right,
Figure BDA0002820738420000054
Figure BDA00028207384200000538
corresponding representation
Figure BDA0002820738420000055
The left upper grid vertex as the 1 st grid vertex, the left lower grid vertex as the 2 nd grid vertex, the right upper grid vertex as the 3 rd grid vertex, and the right lower grid vertex as the 4 th grid vertex of (c) also represent the corresponding
Figure BDA0002820738420000056
The respective vertices of the desired mesh are,
Figure BDA0002820738420000057
to be provided with
Figure BDA0002820738420000058
Horizontal coordinate position of (2)
Figure BDA0002820738420000059
And vertical coordinate position
Figure BDA00028207384200000510
To describe the above-mentioned components in a certain way,
Figure BDA00028207384200000511
Figure BDA00028207384200000512
to be provided with
Figure BDA00028207384200000513
Horizontal coordinate position of (2)
Figure BDA00028207384200000514
And vertical coordinate position
Figure BDA00028207384200000515
To describe the above-mentioned components in a certain way,
Figure BDA00028207384200000516
Figure BDA00028207384200000517
to be provided with
Figure BDA00028207384200000518
Horizontal coordinate position of
Figure BDA00028207384200000519
And vertical coordinate position
Figure BDA00028207384200000520
To be described, the method has the advantages that,
Figure BDA00028207384200000521
Figure BDA00028207384200000522
Figure BDA00028207384200000523
to be provided with
Figure BDA00028207384200000524
Horizontal coordinate position of (2)
Figure BDA00028207384200000525
And vertical coordinate position
Figure BDA00028207384200000526
To be described, the method has the advantages that,
Figure BDA00028207384200000527
Step seven: each quadrilateral mesh in {L(x, y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to UL,k as ŨL,k. Similarly, each quadrilateral mesh in {R(x, y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to UR,k as ŨR,k. ŨL,k is described by the set of its four mesh vertices, upper-left, lower-left, upper-right, and lower-right: ŨL,k = {P̃L,k,1, P̃L,k,2, P̃L,k,3, P̃L,k,4}, where P̃L,k,t (t = 1, 2, 3, 4) denotes the t-th mesh vertex of ŨL,k, i.e. the target mesh vertex corresponding to the t-th mesh vertex of UL,k, and each P̃L,k,t is described by its horizontal coordinate x̃L,k,t and vertical coordinate ỹL,k,t. ŨR,k is likewise described by its four mesh vertices {P̃R,k,1, P̃R,k,2, P̃R,k,3, P̃R,k,4}, with coordinates (x̃R,k,t, ỹR,k,t).
Step eight: the user manually selects an object in the stereoscopic image to be processed through an editing operation; then, according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} falling inside the selected object, compute the coordinate offset energy of the corresponding target quadrilateral meshes, denoted Eobject:
Eobject = Σ over ŨL,k ∈ ΩLobj Σ t=1..4 ‖P̃L,k,t − P̂L,k,t‖² + Σ over ŨR,k ∈ ΩRobj Σ t=1..4 ‖P̃R,k,t − P̂R,k,t‖²,
where the symbol "‖ ‖" computes the Euclidean distance, t is a positive integer with t = 1, 2, 3, 4, P̃L,k,t denotes the t-th mesh vertex of ŨL,k, ΩLobj denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} falling inside the selected object, P̂L,k,t denotes the t-th mesh vertex of ÛL,k, P̃R,k,t denotes the t-th mesh vertex of ŨR,k, ΩRobj denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} falling inside the selected object, and P̂R,k,t denotes the t-th mesh vertex of ÛR,k.
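The exact expression for Eobject is rendered as an image in the source; assuming the common form, the sum of squared Euclidean distances between each target mesh vertex and the corresponding desired mesh vertex over the meshes inside the selected object, one view's term can be sketched as:

```python
import numpy as np

# Hedged sketch of the object coordinate-offset energy: assumed to be the
# sum of squared Euclidean distances between each target mesh vertex and
# the corresponding desired (expected) mesh vertex, over all meshes that
# fall inside the user-selected object.
def offset_energy(target_vertices, desired_vertices):
    # both arrays have shape (num_meshes, 4, 2): 4 vertices, (x, y) each
    diff = target_vertices - desired_vertices
    return float(np.sum(diff ** 2))

desired = np.zeros((3, 4, 2))
target = np.ones((3, 4, 2))          # every vertex displaced by (1, 1)
E_object_L = offset_energy(target, desired)
```

The boundary energy of step nine and the background holding energy of step ten have the same shape, summed over the boundary and background mesh sets respectively.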
Step nine: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} falling on the boundary of the selected object, compute the coordinate offset energy of the corresponding target quadrilateral meshes, denoted Eedge:
Eedge = Σ over ŨL,k ∈ ΩLedge Σ t=1..4 ‖P̃L,k,t − P̂L,k,t‖² + Σ over ŨR,k ∈ ΩRedge Σ t=1..4 ‖P̃R,k,t − P̂R,k,t‖²,
where ΩLedge denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} falling on the selected object boundary, and ΩRedge denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} falling on the selected object boundary.
Step ten: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} falling in the background region, compute the background holding energy of the corresponding target quadrilateral meshes, denoted Eback:
Eback = Σ over ŨL,k ∈ ΩLback Σ t=1..4 ‖P̃L,k,t − P̂L,k,t‖² + Σ over ŨR,k ∈ ΩRback Σ t=1..4 ‖P̃R,k,t − P̂R,k,t‖²,
where the background region is the region of the stereoscopic image to be processed other than the region occupied by the selected object, ΩLback denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} falling in the background region, and ΩRback denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} falling in the background region.
Step eleven: compute the size control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} falling inside the selected object, denoted Eimport:
Eimport = Σ ‖(ṽL,i,j+1 − ṽL,i,j) − s·(vL,i,j+1 − vL,i,j)‖² + Σ ‖(ṽR,i,j+1 − ṽR,i,j) − s·(vR,i,j+1 − vR,i,j)‖²,
the sums running over all pairs of horizontally adjacent mesh vertices falling inside the selected object, where vL,i,j denotes the mesh vertex of {L(x, y)} that is j-th in the horizontal direction and i-th in the vertical direction, vL,i,j+1 denotes the mesh vertex of {L(x, y)} that is (j+1)-th in the horizontal direction and i-th in the vertical direction, ṽL,i,j denotes the target mesh vertex corresponding to vL,i,j, ṽL,i,j+1 denotes the target mesh vertex corresponding to vL,i,j+1, vR,i,j denotes the mesh vertex of {R(x, y)} that is j-th in the horizontal direction and i-th in the vertical direction, vR,i,j+1 denotes the mesh vertex of {R(x, y)} that is (j+1)-th in the horizontal direction and i-th in the vertical direction, ṽR,i,j and ṽR,i,j+1 denote their corresponding target mesh vertices, and s denotes the user-specified scaling factor.
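The Eimport expression is likewise an image in the source; assuming it penalises the difference between each target edge between horizontally adjacent mesh vertices and s times the original edge, a sketch for one view is:

```python
import numpy as np

# Hedged sketch of the size-control energy: assumed to penalise the
# difference between each target edge between horizontally adjacent mesh
# vertices and s times the corresponding original edge, so that meshes on
# the selected object scale by the user-specified factor s.
def size_control_energy(orig, target, s):
    # orig, target: arrays of mesh-vertex positions, shape (ny, nx, 2)
    e_orig = orig[:, 1:, :] - orig[:, :-1, :]      # original edge vectors
    e_tgt = target[:, 1:, :] - target[:, :-1, :]   # target edge vectors
    return float(np.sum((e_tgt - s * e_orig) ** 2))

ys, xs = np.mgrid[0:3, 0:4].astype(float)
orig = np.stack([xs, ys], axis=-1)       # unit-spaced 3x4 vertex lattice
target = 2.0 * orig                      # uniformly doubled lattice
E_perfect = size_control_energy(orig, target, s=2.0)   # exact scaling
```

With `target` exactly twice `orig`, the energy is zero; leaving the lattice unchanged while requesting s = 2 yields a positive penalty.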
Step twelve: compute the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} falling inside the selected object, denoted Edepth. Edepth constrains, for each such pair of meshes and each vertex index t (t = 1, 2, 3, 4), the horizontal coordinate x̃L,k,t of P̃L,k,t and the horizontal coordinate x̃R,k,t of P̃R,k,t, together with the vertical coordinate ỹL,k,t of P̃L,k,t and the vertical coordinate ỹR,k,t of P̃R,k,t, so that the disparity and the vertical alignment of the adjusted views remain consistent (the two defining formulas appear as images in the source); it involves dL(xR,k,t, yR,k,t), the pixel value of {dL(x, y)} at the coordinate given by the horizontal coordinate xR,k,t and the vertical coordinate yR,k,t of the t-th mesh vertex of UR,k, and e, the horizontal baseline distance between the left viewpoint and the right viewpoint of the stereoscopic image to be processed.
Step thirteen: from Eobject, Eedge, Eback, Eimport, and Edepth, compute the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)}, denoted Etotal: Etotal = λ1·Eobject + λ2·Eedge + λ3·Eback + λ4·Eimport + λ5·Edepth. Then solve min(Etotal) over the sets of target quadrilateral meshes by least-squares optimization, obtaining the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)}; denote the optimal target quadrilateral mesh corresponding to UL,k as U*L,k and the optimal target quadrilateral mesh corresponding to UR,k as U*R,k. Then compute the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}: the affine transformation matrix of U*L,k is denoted θL,k and obtained by the least-squares solution θL,k = ((AL,k)ᵀ AL,k)⁻¹ (AL,k)ᵀ BL,k, where AL,k is assembled from the coordinates of the mesh vertices of UL,k and BL,k from the coordinates of the mesh vertices of U*L,k (the exact layout of AL,k and BL,k appears as images in the source). Likewise compute the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x, y)}: the affine transformation matrix of U*R,k is denoted θR,k = ((AR,k)ᵀ AR,k)⁻¹ (AR,k)ᵀ BR,k. Here λ1, λ2, λ3, λ4, and λ5 are all weighting parameters, and min() is the function taking the minimum value. U*L,k is described by the set of its four mesh vertices, upper-left, lower-left, upper-right, and lower-right, {P*L,k,1, P*L,k,2, P*L,k,3, P*L,k,4}, its t-th mesh vertex P*L,k,t having horizontal coordinate x*L,k,t and vertical coordinate y*L,k,t; U*R,k is likewise described by {P*R,k,1, P*R,k,2, P*R,k,3, P*R,k,4}, with coordinates (x*R,k,t, y*R,k,t). (AL,k)ᵀ is the transpose of AL,k, ((AL,k)ᵀ AL,k)⁻¹ is the inverse of (AL,k)ᵀ AL,k, (AR,k)ᵀ is the transpose of AR,k, and ((AR,k)ᵀ AR,k)⁻¹ is the inverse of (AR,k)ᵀ AR,k.
Step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x, y)}, calculate the horizontal coordinate position and the vertical coordinate position of each pixel in each quadrilateral mesh in {L(x, y)} after transformation by the affine transformation matrix, and for the pixel in U_{L,k} whose horizontal coordinate position is x'_{L,k} and whose vertical coordinate position is y'_{L,k}, correspondingly record the horizontal coordinate position and the vertical coordinate position of the pixel after transformation by the affine transformation matrix as x̃'_{L,k} and ỹ'_{L,k}; then, according to the transformed horizontal and vertical coordinate positions of every pixel in every quadrilateral mesh in {L(x, y)}, obtain the zoomed left viewpoint image, denoted {L'(x', y')}; where 1 ≤ x'_{L,k} ≤ W, 1 ≤ y'_{L,k} ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the zoomed stereoscopic image, H is likewise the height of the zoomed stereoscopic image, and L'(x', y') denotes the pixel value of the pixel whose coordinate position in {L'(x', y')} is (x', y');
similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x, y)}, calculate the horizontal coordinate position and the vertical coordinate position of each pixel in each quadrilateral mesh in {R(x, y)} after transformation by the affine transformation matrix, and for the pixel in U_{R,k} whose horizontal coordinate position is x'_{R,k} and whose vertical coordinate position is y'_{R,k}, correspondingly record the transformed horizontal and vertical coordinate positions as x̃'_{R,k} and ỹ'_{R,k}; then, according to the transformed horizontal and vertical coordinate positions of every pixel in every quadrilateral mesh in {R(x, y)}, obtain the zoomed right viewpoint image, denoted {R'(x', y')}; where 1 ≤ x'_{R,k} ≤ W, 1 ≤ y'_{R,k} ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, and R'(x', y') denotes the pixel value of the pixel whose coordinate position in {R'(x', y')} is (x', y');
a zoomed stereoscopic image is composed of the zoomed left viewpoint image and the zoomed right viewpoint image.
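The per-mesh warp of step fourteen can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 2×3 affine matrix below is an arbitrary scale-plus-translation chosen for the example, whereas the method solves each mesh's matrix from its optimal target mesh vertices.

```python
# Sketch of step fourteen's per-mesh warp: once the affine matrix of a mesh's
# optimal target mesh is known, every pixel inside that mesh is moved by it,
# and the moved pixels assemble the zoomed view. The matrix A below is an
# illustrative scale-plus-shift, not one solved by the method.

def apply_affine(matrix, x, y):
    """Apply a 2x3 affine matrix to a pixel coordinate (x, y)."""
    (a11, a12, a13), (a21, a22, a23) = matrix
    return (a11 * x + a12 * y + a13,   # transformed horizontal coordinate
            a21 * x + a22 * y + a23)   # transformed vertical coordinate

A = ((1.5, 0.0, 5.0),   # scale x by 1.5 and shift by 5 (illustrative)
     (0.0, 1.5, 3.0))   # scale y by 1.5 and shift by 3
xp, yp = apply_affine(A, 10.0, 20.0)
```

In practice every pixel of a quadrilateral mesh would be mapped this way with that mesh's own matrix, and the mapped pixel values written into the output image.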
Compared with the prior art, the invention has the advantages that:
The method first obtains the desired mesh of each quadrilateral mesh in the left viewpoint image and the right viewpoint image of the stereoscopic image. It then extracts the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling within the object selected by the user, the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling on the boundary of the object selected by the user, the background holding energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling within the background region, the size control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling within the object selected by the user, and the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes falling within the object selected by the user. By optimizing to minimize the total energy, the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in the left viewpoint image and the right viewpoint image of the stereoscopic image is obtained, and the zoomed left viewpoint image and the zoomed right viewpoint image are obtained according to the affine transformation matrix of each optimal target quadrilateral mesh. The zoomed stereoscopic image can therefore maintain an accurate object shape and an accurate target focusing depth, has the immersion of close-range viewing and a stronger sense of depth, and can achieve a higher quality of stereoscopic experience.
Drawings
FIG. 1 is a block diagram of a general implementation of the method of the present invention;
FIG. 2a is the left viewpoint image of the original stereoscopic image "Image 1";
FIG. 2b is the right viewpoint image of the original stereoscopic image "Image 1";
FIG. 2c is the left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.25 mm;
FIG. 2d is the right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.25 mm;
FIG. 2e is the left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.99 mm;
FIG. 2f is the right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 27.99 mm;
FIG. 2g is the left viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 28.74 mm;
FIG. 2h is the right viewpoint image of the zoomed stereoscopic image of "Image 1" with focal length f = 28.74 mm;
FIG. 3a is the left viewpoint image of the original stereoscopic image "Image 2";
FIG. 3b is the right viewpoint image of the original stereoscopic image "Image 2";
FIG. 3c is the left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.25 mm;
FIG. 3d is the right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.25 mm;
FIG. 3e is the left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.99 mm;
FIG. 3f is the right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 27.99 mm;
FIG. 3g is the left viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 28.74 mm;
FIG. 3h is the right viewpoint image of the zoomed stereoscopic image of "Image 2" with focal length f = 28.74 mm;
FIG. 4a is the left viewpoint image of the original stereoscopic image "Image 3";
FIG. 4b is the right viewpoint image of the original stereoscopic image "Image 3";
FIG. 4c is the left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.25 mm;
FIG. 4d is the right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.25 mm;
FIG. 4e is the left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.99 mm;
FIG. 4f is the right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 27.99 mm;
FIG. 4g is the left viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 28.74 mm;
FIG. 4h is the right viewpoint image of the zoomed stereoscopic image of "Image 3" with focal length f = 28.74 mm;
FIG. 5a is the left viewpoint image of the original stereoscopic image "Image 4";
FIG. 5b is the right viewpoint image of the original stereoscopic image "Image 4";
FIG. 5c is the left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.25 mm;
FIG. 5d is the right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.25 mm;
FIG. 5e is the left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.99 mm;
FIG. 5f is the right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 27.99 mm;
FIG. 5g is the left viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 28.74 mm;
FIG. 5h is the right viewpoint image of the zoomed stereoscopic image of "Image 4" with focal length f = 28.74 mm;
FIG. 6a is the left viewpoint image of the original stereoscopic image "Image 5";
FIG. 6b is the right viewpoint image of the original stereoscopic image "Image 5";
FIG. 6c is the left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.25 mm;
FIG. 6d is the right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.25 mm;
FIG. 6e is the left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.99 mm;
FIG. 6f is the right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 27.99 mm;
FIG. 6g is the left viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 28.74 mm;
FIG. 6h is the right viewpoint image of the zoomed stereoscopic image of "Image 5" with focal length f = 28.74 mm.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The general implementation block diagram of the stereo image zooming method provided by the invention is shown in fig. 1, and the method comprises the following steps:
Step one: denote the left viewpoint image, the right viewpoint image and the left disparity image of the stereoscopic image to be processed, of width W and height H, correspondingly as {L(x, y)}, {R(x, y)} and {d_L(x, y)}; where both W and H are evenly divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x, y) denotes the pixel value of the pixel whose coordinate position in {L(x, y)} is (x, y), R(x, y) denotes the pixel value of the pixel whose coordinate position in {R(x, y)} is (x, y), and d_L(x, y) denotes the pixel value of the pixel whose coordinate position in {d_L(x, y)} is (x, y).
Step two: adopt the existing SIFT-Flow method to establish the matching relationship between {L(x, y)} and {R(x, y)} and obtain the SIFT-Flow vector of each pixel in {L(x, y)}; denote the SIFT-Flow vector of the pixel whose coordinate position in {L(x, y)} is (x, y) as v_L(x, y), v_L(x, y) = v_L^x(x, y)·i + v_L^y(x, y)·j; where i denotes the horizontal direction, j denotes the vertical direction, v_L^x(x, y) denotes the horizontal offset of v_L(x, y), and v_L^y(x, y) denotes the vertical offset of v_L(x, y).
Step three: denote the coordinate position of the image principal point of {L(x, y)} as (x_L^o, y_L^o), and denote the coordinate position of the image principal point of {R(x, y)} as (x_R^o, y_R^o); then, according to the SIFT-Flow vector of the pixel whose coordinate position in {L(x, y)} is (x_L^o, y_L^o), i.e. of the image principal point of {L(x, y)}, determine the pixel in {R(x, y)} matched with the image principal point of {L(x, y)}, namely the pixel whose coordinate position in {R(x, y)} is (x_L^o + v_L^x(x_L^o, y_L^o), y_L^o + v_L^y(x_L^o, y_L^o)), record the coordinate position of the matched pixel as (x̄_R^o, ȳ_R^o), and calculate the vertical deviation of {L(x, y)} and {R(x, y)}, denoted b, b = v_L^y(x_L^o, y_L^o); where v_L^x(x_L^o, y_L^o) denotes the horizontal offset of the SIFT-Flow vector v_L(x_L^o, y_L^o) of the pixel whose coordinate position in {L(x, y)} is (x_L^o, y_L^o), i.e. of the image principal point of {L(x, y)}, and v_L^y(x_L^o, y_L^o) denotes the vertical offset of v_L(x_L^o, y_L^o).
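Step three can be sketched in a few lines. This is a toy illustration: the flow values below are made-up numbers standing in for a real SIFT-Flow field, and the vertical deviation is taken as the vertical flow component at the principal point, as described above.

```python
# Hedged sketch of step three: locate the pixel in the right view matched
# with the left view's principal point via its SIFT-Flow vector, and take
# the vertical component of that vector as the vertical deviation b.
# The flow field here is a toy dict sampled at one pixel, not real data.

def match_principal_point(xo_L, yo_L, flow_x, flow_y):
    """Return the matched coordinate in {R(x,y)} and the vertical deviation b."""
    dx = flow_x[(xo_L, yo_L)]   # horizontal offset of the SIFT-Flow vector
    dy = flow_y[(xo_L, yo_L)]   # vertical offset of the SIFT-Flow vector
    xo_R, yo_R = xo_L + dx, yo_L + dy
    b = dy                      # vertical deviation between the two views
    return (xo_R, yo_R), b

flow_x = {(320, 240): -12}      # illustrative horizontal flow at the principal point
flow_y = {(320, 240): 3}        # illustrative vertical flow at the principal point
matched, b = match_principal_point(320, 240, flow_x, flow_y)
```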
Step four: denote the focal length of {L(x, y)} and {R(x, y)} as f_0, and denote the object distance of {L(x, y)} and {R(x, y)} as u; then, according to the focal length f specified by the user, calculate the magnification of {L(x, y)} and {R(x, y)}, denoted a, a = θ/θ_0; where θ is the image distance determined by the user-specified focal length f and the object distance u of {L(x, y)} and {R(x, y)}, θ = f·u/(u − f), and θ_0 is the image distance determined by the focal length f_0 of {L(x, y)} and {R(x, y)} and the object distance u, θ_0 = f_0·u/(u − f_0). In this example, f_0 = 25.00 mm is taken. The user adjusts the image distance by changing the focal length of {L(x, y)} and {R(x, y)}, thereby achieving the purpose of image zooming.
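The magnification of step four follows from the thin-lens equation 1/f = 1/u + 1/θ. A minimal sketch, in which the object distance u is an assumed illustrative value (the embodiment's own figure is not reproduced here):

```python
# Minimal sketch of step four: the image distance follows from the thin-lens
# equation 1/f = 1/u + 1/theta, and the magnification a is the ratio of the
# image distances at the user focal length f and the original focal length f0.

def image_distance(f, u):
    # 1/f = 1/u + 1/theta  =>  theta = f * u / (u - f)
    return f * u / (u - f)

def magnification(f, f0, u):
    return image_distance(f, u) / image_distance(f0, u)

f0 = 25.00      # original focal length in mm (from the embodiment)
u = 5000.0      # assumed object distance in mm (illustrative only)
a = magnification(27.25, f0, u)   # user-specified focal length 27.25 mm
```

Increasing the user focal length f above f_0 yields a > 1, i.e. zooming in.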
Step five: divide {L(x, y)} into M non-overlapping quadrilateral meshes of size 22×22, and denote the k-th quadrilateral mesh in {L(x, y)} as U_{L,k}; then, according to all quadrilateral meshes in {L(x, y)} and {d_L(x, y)}, obtain all non-overlapping quadrilateral meshes of size 22×22 in {R(x, y)}, and denote the k-th quadrilateral mesh in {R(x, y)} as U_{R,k}; where M = ⌊W/22⌋ × ⌊H/22⌋, the symbol ⌊ ⌋ is the round-down operation symbol, k is a positive integer, and 1 ≤ k ≤ M. U_{L,k} is described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, U_{L,k} = {v_{L,k}^1, v_{L,k}^2, v_{L,k}^3, v_{L,k}^4}, where v_{L,k}^1, v_{L,k}^2, v_{L,k}^3 and v_{L,k}^4 correspond to and denote the top-left mesh vertex of U_{L,k} as its 1st mesh vertex, the bottom-left mesh vertex as its 2nd mesh vertex, the top-right mesh vertex as its 3rd mesh vertex and the bottom-right mesh vertex as its 4th mesh vertex; v_{L,k}^t is described by its horizontal coordinate position x_{L,k}^t and its vertical coordinate position y_{L,k}^t, v_{L,k}^t = (x_{L,k}^t, y_{L,k}^t), for t = 1, 2, 3, 4. U_{R,k} is likewise described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, U_{R,k} = {v_{R,k}^1, v_{R,k}^2, v_{R,k}^3, v_{R,k}^4}, where v_{R,k}^1, v_{R,k}^2, v_{R,k}^3 and v_{R,k}^4 correspond to and denote the top-left, bottom-left, top-right and bottom-right mesh vertices of U_{R,k} as its 1st, 2nd, 3rd and 4th mesh vertices; v_{R,k}^t is described by its horizontal coordinate position x_{R,k}^t = x_{L,k}^t − d_L(x_{L,k}^t, y_{L,k}^t) and its vertical coordinate position y_{R,k}^t = y_{L,k}^t, where d_L(x_{L,k}^t, y_{L,k}^t) denotes the pixel value of the pixel whose coordinate position in {d_L(x, y)} is (x_{L,k}^t, y_{L,k}^t), for t = 1, 2, 3, 4.
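The left-view mesh division of step five can be sketched as follows, using the vertex ordering given above (top-left, bottom-left, top-right, bottom-right) and 1-based coordinates; the image size below is a toy value chosen to be divisible by 22.

```python
# Hedged sketch of step five: divide a W x H left view into non-overlapping
# 22 x 22 quadrilateral meshes, each described by its four vertices.
# Corresponding right-view vertices would shift each horizontal coordinate
# by the disparity d_L sampled at that vertex.

CELL = 22

def left_meshes(W, H):
    meshes = []
    for i in range(H // CELL):          # vertical mesh index
        for j in range(W // CELL):      # horizontal mesh index
            x0, y0 = j * CELL + 1, i * CELL + 1   # top-left corner (1-based)
            x1, y1 = x0 + CELL, y0 + CELL
            # vertex order: top-left, bottom-left, top-right, bottom-right
            meshes.append([(x0, y0), (x0, y1), (x1, y0), (x1, y1)])
    return meshes

meshes = left_meshes(W=440, H=220)      # toy size divisible by 22
M = len(meshes)                         # number of quadrilateral meshes
```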
Step six: according to the magnification a of {L(x, y)} and {R(x, y)} and the vertical deviation b of {L(x, y)} and {R(x, y)}, calculate the desired mesh of each quadrilateral mesh in {L(x, y)}, and denote the desired mesh of U_{L,k} as Û_{L,k}; similarly, according to the magnification a of {L(x, y)} and {R(x, y)} and the vertical deviation b of {L(x, y)} and {R(x, y)}, calculate the desired mesh of each quadrilateral mesh in {R(x, y)}, and denote the desired mesh of U_{R,k} as Û_{R,k}; where Û_{L,k} is described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, Û_{L,k} = {v̂_{L,k}^1, v̂_{L,k}^2, v̂_{L,k}^3, v̂_{L,k}^4}, where v̂_{L,k}^1, v̂_{L,k}^2, v̂_{L,k}^3 and v̂_{L,k}^4 correspond to and denote the top-left mesh vertex of Û_{L,k} as its 1st mesh vertex, the bottom-left mesh vertex as its 2nd mesh vertex, the top-right mesh vertex as its 3rd mesh vertex and the bottom-right mesh vertex as its 4th mesh vertex, and also correspondingly denote the desired mesh vertices of the respective vertices of U_{L,k}; v̂_{L,k}^t is described by its horizontal coordinate position x̂_{L,k}^t and its vertical coordinate position ŷ_{L,k}^t, v̂_{L,k}^t = (x̂_{L,k}^t, ŷ_{L,k}^t), for t = 1, 2, 3, 4; Û_{R,k} is described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, Û_{R,k} = {v̂_{R,k}^1, v̂_{R,k}^2, v̂_{R,k}^3, v̂_{R,k}^4}, where v̂_{R,k}^1, v̂_{R,k}^2, v̂_{R,k}^3 and v̂_{R,k}^4 correspond to and denote the top-left, bottom-left, top-right and bottom-right mesh vertices of Û_{R,k} as its 1st, 2nd, 3rd and 4th mesh vertices, and also correspondingly denote the desired mesh vertices of the respective vertices of U_{R,k}; v̂_{R,k}^t is described by its horizontal coordinate position x̂_{R,k}^t and its vertical coordinate position ŷ_{R,k}^t, v̂_{R,k}^t = (x̂_{R,k}^t, ŷ_{R,k}^t), for t = 1, 2, 3, 4.
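The exact desired-mesh formula is not legible in this text. As a hedged sketch, the natural construction consistent with the step's description is to scale each mesh vertex about the image principal point by the magnification a, compensating the vertical deviation b for the right view so both desired grids stay row-aligned; the function and its arguments below are illustrative, not the patent's stated formula.

```python
# Hedged sketch of step six under an assumed desired-mesh model:
# scale about the principal point by a, subtract b for the right view.

def desired_vertex(x, y, a, xo, yo, b=0.0):
    """Map an original mesh vertex to its assumed desired position under zoom."""
    x_hat = xo + a * (x - xo)       # scale horizontal distance from principal point
    y_hat = yo + a * (y - yo) - b   # b = 0 for the left view
    return (x_hat, y_hat)

# zooming by a = 1.1 about a principal point at (320, 240)
vx, vy = desired_vertex(331, 240, a=1.1, xo=320, yo=240)
```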
Step seven: each quadrilateral mesh in {L(x, y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to U_{L,k} as Ũ_{L,k}; similarly, each quadrilateral mesh in {R(x, y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to U_{R,k} as Ũ_{R,k}; where Ũ_{L,k} is described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, Ũ_{L,k} = {ṽ_{L,k}^1, ṽ_{L,k}^2, ṽ_{L,k}^3, ṽ_{L,k}^4}, where ṽ_{L,k}^1, ṽ_{L,k}^2, ṽ_{L,k}^3 and ṽ_{L,k}^4 correspond to and denote the top-left mesh vertex of Ũ_{L,k} as its 1st mesh vertex, the bottom-left mesh vertex as its 2nd mesh vertex, the top-right mesh vertex as its 3rd mesh vertex and the bottom-right mesh vertex as its 4th mesh vertex, and also correspondingly denote the target mesh vertices of the respective vertices of U_{L,k}; ṽ_{L,k}^t is described by its horizontal coordinate position x̃_{L,k}^t and its vertical coordinate position ỹ_{L,k}^t, ṽ_{L,k}^t = (x̃_{L,k}^t, ỹ_{L,k}^t), for t = 1, 2, 3, 4; Ũ_{R,k} is described by the set of its 4 mesh vertices top-left, bottom-left, top-right and bottom-right, Ũ_{R,k} = {ṽ_{R,k}^1, ṽ_{R,k}^2, ṽ_{R,k}^3, ṽ_{R,k}^4}, where ṽ_{R,k}^1, ṽ_{R,k}^2, ṽ_{R,k}^3 and ṽ_{R,k}^4 correspond to and denote the top-left, bottom-left, top-right and bottom-right mesh vertices of Ũ_{R,k} as its 1st, 2nd, 3rd and 4th mesh vertices, and also correspondingly denote the target mesh vertices of the respective vertices of U_{R,k}; ṽ_{R,k}^t is described by its horizontal coordinate position x̃_{R,k}^t and its vertical coordinate position ỹ_{R,k}^t, ṽ_{R,k}^t = (x̃_{R,k}^t, ỹ_{R,k}^t), for t = 1, 2, 3, 4.
Step eight: the user manually selects an object in the stereoscopic image to be processed through an editing operation; then, according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the object selected by the user, calculate the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the object selected by the user, denoted E_object,
E_object = Σ_{ṽ_{L,k}^t ∈ Φ_L} ||ṽ_{L,k}^t − v̂_{L,k}^t||² + Σ_{ṽ_{R,k}^t ∈ Φ_R} ||ṽ_{R,k}^t − v̂_{R,k}^t||²;
where the symbol "|| ||" is the Euclidean distance symbol, t is a positive integer, t = 1, 2, 3, 4, v̂_{L,k}^t denotes the t-th mesh vertex of Û_{L,k}, Φ_L denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} that fall within the user-selected object, ṽ_{L,k}^t denotes the t-th mesh vertex of Ũ_{L,k}, v̂_{R,k}^t denotes the t-th mesh vertex of Û_{R,k}, Φ_R denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} that fall within the user-selected object, and ṽ_{R,k}^t denotes the t-th mesh vertex of Ũ_{R,k}.
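The coordinate offset energy above is a sum of squared vertex displacements between target and desired meshes. A minimal sketch over one mesh, with illustrative vertex data:

```python
# Sketch of the coordinate offset energy of step eight: for every target mesh
# vertex of a mesh falling inside the user-selected object, accumulate the
# squared Euclidean distance to the corresponding desired mesh vertex.

def offset_energy(target_vertices, desired_vertices):
    total = 0.0
    for (tx, ty), (dx, dy) in zip(target_vertices, desired_vertices):
        total += (tx - dx) ** 2 + (ty - dy) ** 2   # squared Euclidean distance
    return total

# one 22x22 mesh: desired vertices vs. slightly displaced target vertices
desired = [(1.0, 1.0), (1.0, 23.0), (23.0, 1.0), (23.0, 23.0)]
target  = [(2.0, 1.0), (1.0, 23.0), (23.0, 2.0), (23.0, 23.0)]
E_object = offset_energy(target, desired)
```

The boundary energy E_edge of step nine and the background holding energy E_back of step ten have the same form, evaluated over the boundary and background vertex sets respectively.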
Step nine: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall on the boundary of the object selected by the user, calculate the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall on the boundary of the object selected by the user, denoted E_edge,
E_edge = Σ_{ṽ_{L,k}^t ∈ Ψ_L} ||ṽ_{L,k}^t − v̂_{L,k}^t||² + Σ_{ṽ_{R,k}^t ∈ Ψ_R} ||ṽ_{R,k}^t − v̂_{R,k}^t||²;
where Ψ_L denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} that fall on the user-selected object boundary, and Ψ_R denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} that fall on the user-selected object boundary.
Step ten: according to the desired meshes of all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the background region, calculate the background holding energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the background region, denoted E_back,
E_back = Σ_{ṽ_{L,k}^t ∈ Ω_L} ||ṽ_{L,k}^t − v̂_{L,k}^t||² + Σ_{ṽ_{R,k}^t ∈ Ω_R} ||ṽ_{R,k}^t − v̂_{R,k}^t||²;
where the background region is the region of the stereoscopic image to be processed outside the region of the object selected by the user, Ω_L denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} that fall within the background region, and Ω_R denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x, y)} that fall within the background region.
Step eleven: calculate the size control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x, y)} and {R(x, y)} that fall within the object selected by the user, denoted E_import,
E_import = Σ ||(ṽ_L^{i,j+1} − ṽ_L^{i,j}) − s·(v_L^{i,j+1} − v_L^{i,j})||² + Σ ||(ṽ_R^{i,j+1} − ṽ_R^{i,j}) − s·(v_R^{i,j+1} − v_R^{i,j})||²,
the sums being taken over the mesh vertices of the quadrilateral meshes falling within the object selected by the user; where v_L^{i,j} denotes the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction in {L(x, y)}, v_L^{i,j+1} denotes the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction in {L(x, y)}, ṽ_L^{i,j} denotes the target mesh vertex corresponding to v_L^{i,j}, ṽ_L^{i,j+1} denotes the target mesh vertex corresponding to v_L^{i,j+1}, v_R^{i,j} denotes the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction in {R(x, y)}, v_R^{i,j+1} denotes the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction in {R(x, y)}, ṽ_R^{i,j} denotes the target mesh vertex corresponding to v_R^{i,j}, ṽ_R^{i,j+1} denotes the target mesh vertex corresponding to v_R^{i,j+1}, and s denotes the scaling factor specified by the user; in this embodiment, s = 1 is taken, that is, the original size of the important content is maintained.
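The size control term compares target edge vectors between adjacent mesh vertices against s times the original edge vectors. A minimal sketch over one horizontal vertex pair, with illustrative coordinates:

```python
# Hedged sketch of the size control energy of step eleven: for adjacent mesh
# vertices inside the selected object, the target edge vector should equal
# s times the original edge vector (s = 1 keeps the object at its original size).

def size_control_energy(orig_pairs, target_pairs, s=1.0):
    total = 0.0
    for ((x1, y1), (x2, y2)), ((tx1, ty1), (tx2, ty2)) in zip(orig_pairs, target_pairs):
        ex = (tx2 - tx1) - s * (x2 - x1)   # horizontal edge mismatch
        ey = (ty2 - ty1) - s * (y2 - y1)   # vertical edge mismatch
        total += ex ** 2 + ey ** 2
    return total

orig   = [((1.0, 1.0), (23.0, 1.0))]   # one horizontal vertex pair (edge length 22)
target = [((2.0, 1.0), (26.0, 1.0))]   # its target positions (edge length 24)
E_import = size_control_energy(orig, target, s=1.0)
```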
Step twelve: calculate the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, denoted E_depth:

[formula images]

where the symbols in the formulas denote, respectively: the horizontal and vertical coordinate positions of the target mesh vertices involved; the pixel value of {d_L(x,y)} at the relevant coordinate position; and the horizontal coordinate position of the t-th mesh vertex of U_{R,k}. e denotes the horizontal baseline distance between the left viewpoint and the right viewpoint of the stereoscopic image to be processed; in this embodiment, e = 176.252 mm.
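The E_depth formula itself is in the omitted equation images. A non-authoritative sketch of a left-right consistency term in this spirit (names and array layout are my own): corresponding left/right target vertices should keep a horizontal gap governed by the disparity map and should agree vertically.

```python
import numpy as np

def lr_consistency_energy(left_xy, right_xy, disparity):
    """Sketch of a left-right consistency term: penalize deviation of the
    horizontal left-right vertex gap from the disparity, plus any vertical
    mismatch.  left_xy, right_xy: (N, 2) target-vertex coordinates;
    disparity: (N,) disparity values at those vertices."""
    horizontal = np.sum((left_xy[:, 0] - right_xy[:, 0] - disparity) ** 2)
    vertical = np.sum((left_xy[:, 1] - right_xy[:, 1]) ** 2)
    return horizontal + vertical

left = np.array([[100.0, 50.0], [122.0, 50.0]])
disp = np.array([10.0, 10.0])
right = left - np.stack([disp, np.zeros(2)], axis=-1)  # perfectly consistent pair
energy_ok = lr_consistency_energy(left, right, disp)
energy_bad = lr_consistency_energy(left, right + 1.0, disp)
```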
Step thirteen: according to E_object, E_edge, E_back, E_import and E_depth, calculate the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)}, denoted E_total: E_total = λ1·E_object + λ2·E_edge + λ3·E_back + λ4·E_import + λ5·E_depth. Then solve

[formula image]

by least-squares optimization to obtain the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)}, denoted correspondingly as

[formula images]

Then calculate the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x,y)}; the affine transformation matrix of the optimal target quadrilateral mesh corresponding to U_{L,k} is

[formula images]

Likewise, calculate the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x,y)}; the affine transformation matrix of the optimal target quadrilateral mesh corresponding to U_{R,k} is

[formula images]

where λ1, λ2, λ3, λ4 and λ5 are all weighting parameters (in this embodiment λ1 = 3, λ2 = 4, λ3 = 2, λ4 = 4 and λ5 = 1), and min() is the minimum-taking function. In the formulas, one symbol denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)}, and another the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)}; one symbol denotes the optimal target quadrilateral mesh corresponding to U_{L,k}, and another the optimal target quadrilateral mesh corresponding to U_{R,k}, each described by the set of its 4 mesh vertices at upper left, lower left, upper right and lower right (the 1st, 2nd, 3rd and 4th mesh vertices, respectively). (A_{L,k})^T is the transpose of A_{L,k}, and ((A_{L,k})^T A_{L,k})^{-1} is the inverse of (A_{L,k})^T A_{L,k}; likewise (A_{R,k})^T is the transpose of A_{R,k}, and ((A_{R,k})^T A_{R,k})^{-1} is the inverse of (A_{R,k})^T A_{R,k}. The remaining symbols denote the horizontal and vertical coordinate positions of the 1st through 4th mesh vertices of U_{L,k} and of its optimal target quadrilateral mesh, and of U_{R,k} and of its optimal target quadrilateral mesh.
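Step thirteen recovers each cell's affine matrix from its 4 vertex correspondences via the normal-equation form ((A)^T A)^{-1} (A)^T referenced above. A minimal sketch (function name and array layout are my own; `np.linalg.lstsq` computes the same least-squares solution as the normal equations):

```python
import numpy as np

def affine_from_vertices(src, dst):
    """Least-squares 2x3 affine matrix mapping the 4 source mesh vertices
    onto the 4 optimal target vertices; equivalent to ((A^T A)^-1 A^T) b.
    src, dst: (4, 2) arrays of (x, y) vertex coordinates."""
    A = np.hstack([src, np.ones((4, 1))])                 # rows [x, y, 1]
    params, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
    return params.T                                        # [[a, b, c], [d, e, f]]

src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # unit square
dst = 2.0 * src + np.array([1.0, 2.0])   # scaled by 2, translated by (1, 2)
M_affine = affine_from_vertices(src, dst)
```

For 4 non-collinear vertices an exact affine fit may not exist in general, which is why the patent's least-squares form is the natural choice.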
Step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x,y)}, calculate the horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {L(x,y)} after transformation by that affine transformation matrix; for the pixel point in U_{L,k} with horizontal coordinate position x'_{L,k} and vertical coordinate position y'_{L,k}, record the transformed horizontal and vertical coordinate positions correspondingly as

[formula images]

Then, according to the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {L(x,y)}, obtain the zoomed left viewpoint image, recorded as

[formula image]

where 1 ≤ x'_{L,k} ≤ W, 1 ≤ y'_{L,k} ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, W' denotes the width of the zoomed stereoscopic image, H is the height of the zoomed stereoscopic image, and the final symbol denotes the pixel value of the pixel point with coordinate position (x', y') in the zoomed left viewpoint image.

Similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x,y)}, calculate the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {R(x,y)}; for the pixel point in U_{R,k} with horizontal coordinate position x'_{R,k} and vertical coordinate position y'_{R,k}, record the transformed positions correspondingly as

[formula images]

Then, according to the transformed horizontal and vertical coordinate positions of each pixel point in each quadrilateral mesh in {R(x,y)}, obtain the zoomed right viewpoint image, recorded as

[formula image]

where 1 ≤ x'_{R,k} ≤ W, 1 ≤ y'_{R,k} ≤ H, 1 ≤ x' ≤ W', 1 ≤ y' ≤ H, and the final symbol denotes the pixel value of the pixel point with coordinate position (x', y') in the zoomed right viewpoint image.

The zoomed left viewpoint image and the zoomed right viewpoint image together constitute the zoomed stereoscopic image.
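Step fourteen pushes every pixel coordinate through its cell's 2x3 affine matrix. A sketch of that per-pixel mapping (the function name and forward-mapping formulation are assumptions; the patent gives no code):

```python
import numpy as np

def transform_pixels(coords_xy, affine):
    """Apply a 2x3 affine matrix [[a, b, c], [d, e, f]] to an (N, 2) array
    of (x, y) pixel coordinates, giving the transformed (x', y') positions
    used to place each pixel in the zoomed viewpoint image."""
    homo = np.hstack([coords_xy, np.ones((len(coords_xy), 1))])  # (N, 3)
    return homo @ affine.T                                        # (N, 2)

identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pts = np.array([[5.0, 7.0], [22.0, 22.0]])
same = transform_pixels(pts, identity)                            # unchanged
shift = transform_pixels(pts, np.array([[1.0, 0.0, 3.0],          # x' = x + 3
                                        [0.0, 1.0, 0.0]]))
```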
To further illustrate the feasibility and effectiveness of the method of the present invention, the method was tested. Zoom experiments were performed on five stereoscopic images, Image1, Image2, Image3, Image4 and Image5, using the method of the present invention. Figs. 2, 3, 4, 5 and 6 correspond to Image1 through Image5, respectively; in each figure, panels a and b show the left and right viewpoint images of the original stereoscopic image, panels c and d show the left and right viewpoint images of the zoomed stereoscopic image at focal length f = 27.25 mm, panels e and f show them at f = 27.99 mm, and panels g and h show them at f = 28.74 mm. As can be seen from Figs. 2a to 6h, the zoomed stereoscopic images obtained by the method of the present invention preserve object shape well, and the size of the important object can be increased according to the user's selection.

Claims (1)

1. A stereoscopic image zooming method characterized by comprising the steps of:
step one: denote the left viewpoint image, the right viewpoint image and the left parallax image of the stereoscopic image to be processed, of width W and height H, correspondingly as {L(x,y)}, {R(x,y)} and {d_L(x,y)}; wherein W and H are both divisible by 2, 1 ≤ x ≤ W, 1 ≤ y ≤ H, L(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {L(x,y)}, R(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {R(x,y)}, and d_L(x,y) denotes the pixel value of the pixel point with coordinate position (x,y) in {d_L(x,y)};
step two: establish the matching relationship between {L(x,y)} and {R(x,y)} using the SIFT-Flow method to obtain the SIFT-Flow vector of each pixel point in {L(x,y)}; denote the SIFT-Flow vector of the pixel point with coordinate position (x,y) in {L(x,y)} as v_L(x,y):

[formula image]

where the symbols in the formula denote, respectively, the vectors indicating the horizontal direction and the vertical direction, and the horizontal offset and the vertical offset of v_L(x,y);
step three: denote the coordinate position of the image principal point of {L(x,y)} as [formula image], and the coordinate position of the image principal point of {R(x,y)} as [formula image]; then, according to the SIFT-Flow vector of the pixel point in {L(x,y)} at the principal-point coordinate position, i.e. the SIFT-Flow vector of the image principal point of {L(x,y)}, determine the pixel point in {R(x,y)} that matches the image principal point of {L(x,y)}, and record the coordinate position of the matched pixel point as [formula image]; and calculate the vertical deviation of {L(x,y)} and {R(x,y)}, denoted b:

[formula image]

where the symbols in the formula denote the horizontal offset and the vertical offset of the SIFT-Flow vector of the image principal point of {L(x,y)};
step four: denote the focal length of {L(x,y)} and {R(x,y)} as f_0 and the object distance of {L(x,y)} and {R(x,y)} as [formula image]; then, according to the focal length f specified by the user, calculate the magnification of {L(x,y)} and {R(x,y)}, denoted a:

[formula image]

where θ is the image distance determined by the user-specified focal length f and the object distance of {L(x,y)} and {R(x,y)} [formula image], and θ_0 is the image distance determined by the focal length f_0 of {L(x,y)} and {R(x,y)} and their object distance [formula image];
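The magnification formula itself sits in the omitted equation images; the surrounding text only says that θ and θ_0 are image distances fixed by a focal length and the object distance, which matches standard thin-lens optics. A sketch under that assumption (function names and the 3000 mm object distance are mine, not from the patent):

```python
def image_distance(f, u):
    """Thin-lens relation 1/f = 1/u + 1/theta  =>  theta = f*u / (u - f).
    Assumption: the omitted formulas follow standard thin-lens optics."""
    return f * u / (u - f)

def magnification(f, f0, u):
    """a = theta / theta0: ratio of image distances at the user-specified
    focal length f versus the original focal length f0 (units: mm)."""
    return image_distance(f, u) / image_distance(f0, u)

# focal lengths from the embodiment; the object distance is illustrative
a_zoom = magnification(28.74, 27.25, 3000.0)   # zooming in  -> a > 1
a_same = magnification(27.25, 27.25, 3000.0)   # unchanged f -> a = 1
```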
step five: divide {L(x,y)} into M non-overlapping quadrilateral meshes of size 22 × 22, and denote the k-th quadrilateral mesh in {L(x,y)} as U_{L,k}; then, according to all the quadrilateral meshes in {L(x,y)} and {d_L(x,y)}, obtain all the non-overlapping quadrilateral meshes of size 22 × 22 in {R(x,y)}, and denote the k-th quadrilateral mesh in {R(x,y)} as U_{R,k}; wherein M is given by [formula image], the symbol in the formula is the floor (round-down) operation, and k is a positive integer with 1 ≤ k ≤ M. U_{L,k} is described by the set of its 4 mesh vertices at upper left, lower left, upper right and lower right, [formula image], where the upper-left mesh vertex is the 1st mesh vertex, the lower-left the 2nd, the upper-right the 3rd and the lower-right the 4th, and each vertex is described by its horizontal coordinate position and vertical coordinate position [formula images]. U_{R,k} is likewise described by the set of its 4 mesh vertices at upper left, lower left, upper right and lower right, [formula image], with the same vertex ordering; the horizontal and vertical coordinate positions of each vertex of U_{R,k} are determined from the coordinate positions of the corresponding vertex of U_{L,k} together with the pixel value of {d_L(x,y)} at that vertex's coordinate position [formula images];
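Step five's partition of {L(x,y)} into M non-overlapping 22 × 22 meshes can be sketched as follows; the 1-based coordinates and the exact corner convention are my assumptions, since the claim fixes only the cell size and the vertex ordering (upper left, lower left, upper right, lower right):

```python
def make_meshes(W, H, cell=22):
    """Partition a W x H image into non-overlapping cell x cell quadrilateral
    meshes.  Returns (M, meshes), where each mesh is its 4 corner vertices
    ordered upper-left, lower-left, upper-right, lower-right (1-based)."""
    cols, rows = W // cell, H // cell          # floor division, as in the claim
    meshes = []
    for i in range(rows):
        for j in range(cols):
            x0, y0 = j * cell + 1, i * cell + 1
            x1, y1 = x0 + cell - 1, y0 + cell - 1
            meshes.append([(x0, y0), (x0, y1), (x1, y0), (x1, y1)])
    return cols * rows, meshes

M, meshes = make_meshes(110, 66)   # 5 columns x 3 rows of cells
```

The corresponding meshes in {R(x,y)} would then be obtained by offsetting these vertices using the disparity map {d_L(x,y)}, as the claim describes.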
step six: according to the magnification a of {L(x,y)} and {R(x,y)} and the vertical deviation b of {L(x,y)} and {R(x,y)}, calculate the desired mesh of each quadrilateral mesh in {L(x,y)}, and denote the desired mesh of U_{L,k} as [formula image]; similarly, calculate the desired mesh of each quadrilateral mesh in {R(x,y)} from a and b, and denote the desired mesh of U_{R,k} as [formula image]. The desired mesh of U_{L,k} is described by the set of its 4 mesh vertices at upper left, lower left, upper right and lower right (the 1st, 2nd, 3rd and 4th mesh vertices, which are also the desired mesh vertices corresponding to the respective vertices of U_{L,k}), each described by its horizontal coordinate position and vertical coordinate position [formula images]; the desired mesh of U_{R,k} is described in the same way [formula images];
step seven: each quadrilateral mesh in {L(x,y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to U_{L,k} as [formula image]. Similarly, each quadrilateral mesh in {R(x,y)} corresponds to a target quadrilateral mesh; denote the target quadrilateral mesh corresponding to U_{R,k} as [formula image]. The target quadrilateral mesh corresponding to U_{L,k} is described by the set of its 4 mesh vertices at upper left, lower left, upper right and lower right (the 1st, 2nd, 3rd and 4th mesh vertices, which are also the target mesh vertices corresponding to the respective vertices of U_{L,k}), each described by its horizontal coordinate position and vertical coordinate position [formula images]; the target quadrilateral mesh corresponding to U_{R,k} is described in the same way [formula images];
step eight: a user manually selects an object in the stereoscopic image to be processed through an editing operation; then, according to the desired meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, calculate the coordinate offset energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object, denoted E_object:

[formula image]

where the symbol "|| ||" is the Euclidean-distance operator, t is a positive integer with t = 1, 2, 3, 4, and the symbols in the formula denote, respectively: the t-th mesh vertex of the desired mesh in {L(x,y)}; the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} falling within the user-selected object, and the t-th mesh vertex of the corresponding target mesh; and the analogous desired-mesh vertex, target-mesh vertex set and target-mesh vertex for {R(x,y)};
Step nine: according to the expected meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall on the boundary of the user-selected object, the coordinate offset energy of the corresponding target quadrilateral meshes is calculated and recorded as E_edge:
[Figure FDA0002820738410000063]
where [Figure FDA0002820738410000064] denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} that fall on the user-selected object boundary, and [Figure FDA0002820738410000065] denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)} that fall on the user-selected object boundary;
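The coordinate offset energies of steps eight and nine appear only as equation images in the claim, but the surrounding definitions describe sums of squared Euclidean distances between target mesh vertices and their expected positions. A minimal sketch of that form (the function name and the (K, 4, 2) array layout are assumptions, not part of the claim):

```python
import numpy as np

def coord_offset_energy(target_vertices, expected_vertices):
    """Sum of squared Euclidean distances between each target mesh
    vertex and its expected position, over all quadrilateral meshes.

    target_vertices, expected_vertices: arrays of shape (K, 4, 2) --
    K meshes, 4 vertices each (t = 1..4), (x, y) coordinates.
    """
    diff = target_vertices - expected_vertices  # per-vertex offsets
    return float(np.sum(diff ** 2))             # sum over k and t of ||.||^2

# toy check: two meshes whose target vertices are all shifted by (1, 0)
expected = np.zeros((2, 4, 2))
target = expected.copy()
target[..., 0] += 1.0
energy = coord_offset_energy(target, expected)  # 8 vertices, each offset by 1
```

The same form serves E_object, E_edge and E_back; only the subset of meshes summed over (object interior, object boundary, background) changes.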
Step ten: according to the expected meshes of all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the background region, the background-preserving energy of the corresponding target quadrilateral meshes is calculated and recorded as E_back:
[Figure FDA0002820738410000066]
where the background region is the region of the to-be-processed stereoscopic image other than the region occupied by the user-selected object; [Figure FDA0002820738410000067] denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} that fall within the background region, and [Figure FDA0002820738410000068] denotes the set of mesh vertices of the target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)} that fall within the background region;
Step eleven: the size control energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object is calculated and recorded as E_import:
[Figure FDA0002820738410000069]
where [Figure FDA00028207384100000610], [Figure FDA00028207384100000611] denotes the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction in {L(x,y)}; [Figure FDA0002820738410000071] denotes the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction in {L(x,y)}; [Figure FDA0002820738410000072] denotes the target mesh vertex corresponding to [Figure FDA0002820738410000073]; [Figure FDA0002820738410000074] denotes the target mesh vertex corresponding to [Figure FDA0002820738410000075]; [Figure FDA0002820738410000076] denotes the mesh vertex that is j-th in the horizontal direction and i-th in the vertical direction in {R(x,y)}; [Figure FDA0002820738410000077] denotes the mesh vertex that is (j+1)-th in the horizontal direction and i-th in the vertical direction in {R(x,y)}; [Figure FDA0002820738410000078] denotes the target mesh vertex corresponding to [Figure FDA0002820738410000079]; [Figure FDA00028207384100000710] denotes the target mesh vertex corresponding to [Figure FDA00028207384100000711]; and s denotes a user-specified scaling factor;
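Step eleven compares horizontally adjacent vertex pairs against the user-specified scaling factor s. The exact formula is an image in the claim; a plausible sketch, assuming the standard form in mesh-based retargeting where each target horizontal edge should be s times the original edge:

```python
import numpy as np

def size_control_energy(v, v_target, s):
    """Penalise horizontal mesh edges whose target vector deviates from
    s times the original edge vector.

    v, v_target: (rows, cols, 2) grids of mesh-vertex coordinates for
    the original and target meshes; s: user-specified scaling factor.
    """
    e_orig = v[:, 1:, :] - v[:, :-1, :]                # edge (i,j) -> (i,j+1)
    e_tgt = v_target[:, 1:, :] - v_target[:, :-1, :]
    return float(np.sum((e_tgt - s * e_orig) ** 2))
```

A target mesh that is an exact uniform scaling of the original by s drives this term to zero, which is the behaviour the size control term is meant to reward.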
Step twelve: the left-right consistency energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} that fall within the user-selected object is calculated and recorded as E_depth:
[Figure FDA00028207384100000712]
[Figure FDA00028207384100000713]
where [Figure FDA00028207384100000714] denotes the horizontal coordinate position of [Figure FDA00028207384100000715]; [Figure FDA00028207384100000716] denotes the horizontal coordinate position of [Figure FDA00028207384100000717]; [Figure FDA00028207384100000718] denotes the vertical coordinate position of [Figure FDA00028207384100000719]; [Figure FDA00028207384100000720] denotes the vertical coordinate position of [Figure FDA00028207384100000721]; [Figure FDA00028207384100000722] denotes the pixel value of the pixel point in {d_L(x,y)} at coordinate position [Figure FDA00028207384100000723]; [Figure FDA00028207384100000724] denotes the horizontal coordinate position of the t-th mesh vertex [Figure FDA00028207384100000725] of U_R,k; [Figure FDA00028207384100000726] denotes the vertical coordinate position of the t-th mesh vertex [Figure FDA00028207384100000727] of U_R,k; and e denotes the horizontal baseline distance between the left viewpoint and the right viewpoint of the to-be-processed stereoscopic image;
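The E_depth formula itself is an image in the claim; what survives is that it involves the left disparity map {d_L(x,y)} sampled at left-mesh vertices, corresponding right-mesh vertices, and the baseline e. One plausible form of such a term, offered purely as an illustration (the role of e is omitted here, and every name below is an assumption), penalises target vertex pairs that break the scaled disparity plus any vertical drift between views:

```python
import numpy as np

def lr_consistency_energy(vl_t, vr_t, disp, s):
    """Hypothetical left-right consistency term: corresponding target
    vertices should stay separated horizontally by the (scaled) left
    disparity and remain aligned vertically.

    vl_t, vr_t: (K, 4, 2) target vertex positions of corresponding
    left/right meshes; disp: (K, 4) disparity d_L sampled at the left
    mesh vertices; s: scaling factor applied to the disparity.
    """
    horiz = vr_t[..., 0] - vl_t[..., 0] - s * disp  # disparity violation
    vert = vr_t[..., 1] - vl_t[..., 1]              # vertical misalignment
    return float(np.sum(horiz ** 2) + np.sum(vert ** 2))
```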
Step thirteen: according to E_object, E_edge, E_back, E_import and E_depth, the total energy of the target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and {R(x,y)} is calculated and recorded as E_total: E_total = λ1·E_object + λ2·E_edge + λ3·E_back + λ4·E_import + λ5·E_depth; then
[Figure FDA00028207384100000728]
is solved by least-squares optimization, obtaining the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)} and the set formed by the optimal target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)}, correspondingly recorded as
[Figure FDA00028207384100000729]
and
[Figure FDA00028207384100000730]
[Figure FDA00028207384100000731]
Then the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x,y)} is calculated; the affine transformation matrix of the optimal target quadrilateral mesh
[Figure FDA0002820738410000081]
corresponding to U_L,k is recorded as
[Figure FDA0002820738410000082]
[Figure FDA0002820738410000083]
[Figure FDA0002820738410000084]
Likewise, the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x,y)} is calculated; the affine transformation matrix of the optimal target quadrilateral mesh
[Figure FDA0002820738410000085]
corresponding to U_R,k is recorded as
[Figure FDA0002820738410000086]
[Figure FDA0002820738410000087]
[Figure FDA0002820738410000088]
[Figure FDA0002820738410000089]
where λ1, λ2, λ3, λ4 and λ5 are all weighting parameters; min() is the minimum-taking function; [Figure FDA00028207384100000810] denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {L(x,y)}, [Figure FDA00028207384100000811]; [Figure FDA00028207384100000812] denotes the set of target quadrilateral meshes corresponding to all quadrilateral meshes in {R(x,y)}, [Figure FDA00028207384100000813]; [Figure FDA00028207384100000814] denotes the optimal target quadrilateral mesh corresponding to U_L,k in the set of target quadrilateral meshes; [Figure FDA00028207384100000815] denotes the optimal target quadrilateral mesh corresponding to U_R,k in the set of target quadrilateral meshes; [Figure FDA0002820738410000091] is described by the set of its 4 mesh vertices at top-left, bottom-left, top-right and bottom-right, and [Figure FDA0002820738410000092], [Figure FDA0002820738410000093] correspondingly denote the 1st, 2nd, 3rd and 4th mesh vertices of [Figure FDA0002820738410000094]; [Figure FDA0002820738410000095] is described by the set of its 4 mesh vertices at top-left, bottom-left, top-right and bottom-right, and [Figure FDA0002820738410000096], [Figure FDA0002820738410000097] correspondingly denote the 1st, 2nd, 3rd and 4th mesh vertices of [Figure FDA0002820738410000098]; (A_L,k)^T is the transpose of A_L,k, and ((A_L,k)^T A_L,k)^(-1) is the inverse of (A_L,k)^T A_L,k; [Figure FDA0002820738410000099] and [Figure FDA00028207384100000910] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000911]; [Figure FDA00028207384100000912] and [Figure FDA00028207384100000913] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000914]; [Figure FDA00028207384100000915] and [Figure FDA00028207384100000916] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000917]; [Figure FDA00028207384100000918] and [Figure FDA00028207384100000919] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000920]; (A_R,k)^T is the transpose of A_R,k, and ((A_R,k)^T A_R,k)^(-1) is the inverse of (A_R,k)^T A_R,k; [Figure FDA00028207384100000921] and [Figure FDA00028207384100000922] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000923]; [Figure FDA00028207384100000924] and [Figure FDA00028207384100000925] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000926]; [Figure FDA00028207384100000927] and [Figure FDA00028207384100000928] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000929]; [Figure FDA00028207384100000930] and [Figure FDA00028207384100000931] correspondingly denote the horizontal coordinate position and the vertical coordinate position of [Figure FDA00028207384100000932];
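Step thirteen recovers each affine matrix through the product ((A^T)A)^(-1)(A^T), i.e. the normal-equation least-squares solution over the 4 vertex correspondences of a quadrilateral mesh. A sketch of that computation, assuming homogeneous 2D coordinates (the exact column layout of the claim's A_L,k matrix is an image and therefore an assumption):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping the 4 vertices of a
    quadrilateral mesh (src) onto its optimal target mesh (dst).

    src, dst: (4, 2) vertex arrays. Solves A x = b in homogeneous form;
    np.linalg.pinv computes the ((A^T)A)^(-1)(A^T) product that the
    normal equations call for. Returns a 2x3 matrix M such that
    [x', y']^T = M @ [x, y, 1]^T.
    """
    A = np.hstack([src, np.ones((4, 1))])  # (4, 3) homogeneous coordinates
    M = (np.linalg.pinv(A) @ dst).T        # (2, 3) affine matrix
    return M

# example: a unit quad mapped onto a half-width target quad
src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
dst = src * np.array([0.5, 1.0])
M = fit_affine(src, dst)
```

With 4 correspondences and 6 affine unknowns the system is overdetermined, so the least-squares solve is exact only when the target quad is itself an affine image of the source; otherwise it gives the best fit, which is why the claim phrases it through the normal equations.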
Step fourteen: according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {L(x,y)}, the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in {L(x,y)} after affine transformation are calculated; for the pixel point in U_L,k whose horizontal coordinate position is x'_L,k and whose vertical coordinate position is y'_L,k, the horizontal coordinate position and the vertical coordinate position after affine transformation are correspondingly recorded as
[Figure FDA00028207384100000933]
and
[Figure FDA00028207384100000934]
[Figure FDA00028207384100000935]
Then, according to the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in {L(x,y)} after affine transformation, the zoomed left viewpoint image is obtained and recorded as
[Figure FDA00028207384100000936]
where 1 ≤ x'_L,k ≤ W, 1 ≤ y'_L,k ≤ H,
[Figure FDA00028207384100000937]
1 ≤ x' ≤ W', 1 ≤ y' ≤ H; W' denotes the width of the zoomed stereoscopic image and H is likewise the height of the zoomed stereoscopic image; [Figure FDA0002820738410000101] denotes the pixel value of the pixel point at coordinate position (x', y') in [Figure FDA0002820738410000102];
Similarly, according to the affine transformation matrix of the optimal target quadrilateral mesh corresponding to each quadrilateral mesh in {R(x,y)}, the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in {R(x,y)} after affine transformation are calculated; for the pixel point in U_R,k whose horizontal coordinate position is x'_R,k and whose vertical coordinate position is y'_R,k, the horizontal coordinate position and the vertical coordinate position after affine transformation are correspondingly recorded as
[Figure FDA0002820738410000103]
and
[Figure FDA0002820738410000104]
[Figure FDA0002820738410000105]
Then, according to the horizontal coordinate position and the vertical coordinate position of each pixel point in each quadrilateral mesh in {R(x,y)} after affine transformation, the zoomed right viewpoint image is obtained and recorded as
[Figure FDA0002820738410000106]
where 1 ≤ x'_R,k ≤ W, 1 ≤ y'_R,k ≤ H,
[Figure FDA0002820738410000107]
1 ≤ x' ≤ W', 1 ≤ y' ≤ H; [Figure FDA0002820738410000108] denotes the pixel value of the pixel point at coordinate position (x', y') in [Figure FDA0002820738410000109];
The zoomed stereoscopic image is composed of the zoomed left viewpoint image and the zoomed right viewpoint image.
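Step fourteen pushes each pixel through its mesh's affine matrix to find its position in the zoomed view. A sketch of that forward mapping (the claim's final resampling of the mapped pixels into the W' × H output grid is not shown here, and all names are assumptions):

```python
import numpy as np

def warp_pixels(points, M):
    """Map pixel coordinates inside one quadrilateral mesh through that
    mesh's 2x3 affine matrix M, giving positions in the zoomed view.

    points: (N, 2) array of (x', y') pixel coordinates.
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return homog @ M.T                                      # (N, 2)

# example: horizontal half-scale plus a 2-pixel shift
M = np.array([[0.5, 0.0, 2.0], [0.0, 1.0, 0.0]])
warped = warp_pixels(np.array([[2.0, 3.0], [4.0, 5.0]]), M)
```

In a full implementation this runs once per quadrilateral mesh (each with its own matrix), for both the left and right viewpoint images.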
CN202011417735.6A 2020-12-07 2020-12-07 Three-dimensional image zooming method Active CN112702590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011417735.6A CN112702590B (en) 2020-12-07 2020-12-07 Three-dimensional image zooming method


Publications (2)

Publication Number Publication Date
CN112702590A CN112702590A (en) 2021-04-23
CN112702590B true CN112702590B (en) 2022-07-22

Family

ID=75506334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011417735.6A Active CN112702590B (en) 2020-12-07 2020-12-07 Three-dimensional image zooming method

Country Status (1)

Country Link
CN (1) CN112702590B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002054913A (en) * 2000-08-11 2002-02-20 Minolta Co Ltd Three-dimensional data generating system and projector
CN103270760A (en) * 2010-12-23 2013-08-28 Mattel Inc. Method and system for disparity adjustment during stereoscopic zoom
CN103955960A (en) * 2014-03-21 2014-07-30 Nanjing University Image viewpoint transformation method based on single input image
CN104301704A (en) * 2013-07-17 2015-01-21 HTC Corporation Content-aware display adaptation methods
CN107945151A (en) * 2017-10-26 2018-04-20 Ningbo University A retargeted image quality evaluation method based on similarity transformation
CN108810512A (en) * 2018-04-24 2018-11-13 Ningbo University An object-based stereoscopic image depth adjustment method
CN109413404A (en) * 2018-09-06 2019-03-01 Ningbo University A stereoscopic image zooming method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10679370B2 (en) * 2015-02-13 2020-06-09 Carnegie Mellon University Energy optimized imaging system with 360 degree field-of-view


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Energy-Constrained Video Retargeting Approach for Color-Plus-Depth 3D Video; Feng Shao; Journal of Display Technology; 20151219; full text *
Stereoscopic image content recomposition based on mesh deformation; Chai Xiongli; Image Processing and Coding (图像处理和编码); 20190430; full text *


Similar Documents

Publication Publication Date Title
CN102741879B (en) Method for generating depth maps from monocular images and systems using the same
JP5068391B2 (en) Image processing device
WO2013005365A1 (en) Image processing apparatus, image processing method, program, and integrated circuit
US20130063571A1 (en) Image processing apparatus and image processing method
US10778955B2 (en) Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery
RU2690757C1 (en) System for synthesis of intermediate types of light field and method of its operation
CN102098528B (en) Method and device for converting planar image into stereoscopic image
JP6610535B2 (en) Image processing apparatus and image processing method
CN105721768A (en) Method and apparatus for generating adapted slice image from focal stack
CN104093013A (en) Method for automatically regulating image parallax in stereoscopic vision three-dimensional visualization system
Park et al. Efficient viewer-centric depth adjustment based on virtual fronto-parallel planar projection in stereo 3D images
CN110958442B (en) Method and apparatus for processing holographic image data
US10506177B2 (en) Image processing device, image processing method, image processing program, image capture device, and image display device
CN109600667A A video retargeting method based on meshes and frame grouping
CN108810512B An object-based stereoscopic image depth adjustment method
CN112702590B (en) Three-dimensional image zooming method
CN108307170B A stereoscopic image retargeting method
JP2017143354A (en) Image processing apparatus and image processing method
CN109413404B A stereoscopic image zooming method
CN112449170B (en) Stereo video repositioning method
CN106910253B (en) Stereo image cloning method based on different camera distances
CN113240573B (en) High-resolution image style transformation method and system for local and global parallel learning
CN114092316A (en) Image processing method, apparatus and storage medium
CN108833876B A stereoscopic image content recomposition method
Yan et al. Stereoscopic image generation from light field with disparity scaling and super-resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant