CN103096113A - Method of generating stereo image array of discrete view collection combined window intercept algorithm - Google Patents


Info

Publication number
CN103096113A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN2013100519574A
Other languages
Chinese (zh)
Other versions
CN103096113B (en)
Inventor
王世刚
吕源治
金福寿
王学军
赵岩
王小雨
李雪松
俞珏琼
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201310051957.4A priority Critical patent/CN103096113B/en
Publication of CN103096113A publication Critical patent/CN103096113A/en
Application granted granted Critical
Publication of CN103096113B publication Critical patent/CN103096113B/en
Active

Abstract

The invention discloses a method of generating the elemental (stereo) image array of an integral imaging system by discrete viewpoint collection combined with a window interception algorithm, and belongs to the technical field of stereo image generation. The method comprises the following steps: collecting a discrete viewpoint image array; calculating the position of the photographed object in each discrete viewpoint image; calculating the horizontal relative displacement of the interception window between any two horizontally adjacent discrete viewpoint images in the array and its vertical relative displacement between any two vertically adjacent discrete viewpoint images in the array; calculating the size of the interception window; calculating the position of the lower-right corner of the interception window in the first-row, first-column discrete viewpoint image of the array; intercepting the discrete viewpoint image array to generate a sub-image array; and converting the sub-image array into the elemental image array. The method is not limited by the acquisition equipment and can generate a high-definition elemental image array of a real scene. Compared with direct acquisition by a traditional camera array, it greatly reduces the shooting cost and workload.

Description

Method for generating the elemental image array of an integral imaging system by discrete viewpoint collection combined with a window interception algorithm
Technical field
The invention belongs to the technical field of stereo image generation, and specifically relates to a method for generating the elemental image array of an integral (combination stereo) imaging system.
Background art
For a long time, visual information about the world has been obtained mainly through single-camera capture, an acquisition method that gives the human eye no depth perception, no stereoscopic sense and no all-round understanding of the object. Driven by the development of related disciplines and the demands of new technologies, stereoscopic display technology has emerged. It mainly comprises stereoscopic display based on binocular parallax and true three-dimensional display. Binocular-parallax display can be further divided into naked-eye viewing, chiefly based on gratings, and viewing with auxiliary equipment, chiefly based on 3D glasses. The binocular-parallax approach is widely adopted in cinemas because it is easy to implement and inexpensive; however, because it transmits separate left-eye and right-eye images to the viewer and forces the brain to fuse them into a stereoscopic impression, it easily causes visual fatigue, and it cannot provide continuous multi-angle parallax, so it is not an ideal stereoscopic display method. True three-dimensional display mainly comprises holography, volumetric display and integral imaging. True three-dimensional display reproduces the full information of the photographed object in space, and the viewer obtains depth through the natural focusing of the eyes without visual fatigue; it has therefore become the development trend of stereoscopic display technology. Compared with holography and volumetric display, whose spatial and temporal resolution is limited, integral imaging can reproduce the spatial structure and positional relationships of the photographed scene over a wide viewing space, and the viewer needs no auxiliary equipment to experience an immersive full-parallax stereoscopic view, making it an important component of next-generation stereoscopic display technology.
An integral imaging system mainly comprises an acquisition part and a display part. In the acquisition process, as shown in Figure 1, the light rays emitted by a real object pass through a lens array and are recorded on a recording medium, yielding an elemental image array; each elemental image in the array records the image information of the photographed object from a different position and angle. In the display process, as shown in Figure 2, the elemental image array is displayed on a high-resolution flat-panel display in front of which the lens array is placed, and the lens array converges the light emitted by the elemental image array into a real stereoscopic view in space.
The lens-array acquisition method is the simplest and most direct way to obtain the elemental image array: a lens array and a single recording medium directly photograph the elemental image array of a 3D object. In practical applications, however, this method has many shortcomings, for example low display resolution, a narrow viewing angle and a small depth of field. Many improved acquisition methods have been proposed for these problems. J.-S. Jang and B. Javidi proposed the time-multiplexed MALT method, which obtains more elemental images by increasing the spatial sampling rate and thereby effectively improves the resolution of the reconstructed image; the same authors also proposed the SAII method, which not only improves the display resolution but also enlarges the viewing field. Although these methods improve the display quality of the integral imaging system relative to plain lens-array acquisition, for a given image recording medium there is always an irreconcilable contradiction between the resolution and the number of elemental images. Acquisition with a camera array resolves this contradiction well, since each camera in the array records one elemental image of the photographed scene. Yet as display resolution keeps improving, the scale of the camera array keeps growing: to obtain an elemental image array with a display resolution of 1024 × 768 in which each elemental image contains 20 × 20 pixels, 1024 × 768 cameras would be needed; obviously, such a large camera array is not only expensive but also difficult to adjust. In view of these shortcomings of the lens-array and camera-array acquisition methods, a more effective synthesis method for the elemental image array of the integral imaging system is needed, one that can generate a high-resolution elemental image array without high experimental cost or complicated adjustment work.
Summary of the invention
The object of the present invention is to provide a method for generating the elemental image array by discrete viewpoint collection combined with a window interception algorithm, realizing the stereoscopic display of real scenes.
The present invention comprises the following steps:
1. Collect the discrete viewpoint image array, comprising the following steps:
1.1 Initialization: adjust the two tripods supporting the slide rail so that the rail is fixed at a relatively low height, and mount the camera on the rail. The rail moves the camera from right to left at a uniform speed, and the operator starts and stops the rail with a remote controller;
1.2 Acquire a discrete viewpoint image group: level the plane of the slide rail with a spirit level and move the camera to the rightmost end of the rail with the remote controller; then hold down the camera's shutter release while starting the rail. As the camera moves, use its continuous-shooting function to capture a sequence of discrete viewpoint images of the photographed object. Arranging the captured images in a row from left to right in shooting order yields the discrete viewpoint image group collected at this rail height;
1.3 Using a vertical leveling staff, raise the two tripods supporting the rail so that the rail rises by a fixed distance;
1.4 Repeat the shooting process of steps 1.2 and 1.3 to obtain multiple discrete viewpoint image groups;
1.5 Arrange all the discrete viewpoint image groups from top to bottom in acquisition order into a discrete viewpoint image array; that is, the group collected first is placed in the first row of the array and the group collected last in the last row;
2. Calculate the position of the photographed object in each discrete viewpoint image:
The position parameters of the photographed object in each discrete viewpoint image comprise: its horizontal relative displacement between any two horizontally adjacent discrete viewpoint images in the array; its vertical relative displacement between any two vertically adjacent discrete viewpoint images in the array; the positions of its upper, lower, left and right boundaries in the first-row, first-column discrete viewpoint image of the array; and the positions of its upper, lower, left and right boundaries in the last-row, last-column discrete viewpoint image of the array;
2.1 Determine the horizontal relative displacement of the photographed object between any two horizontally adjacent discrete viewpoint images in the array. Let DVI_{i,j} denote the discrete viewpoint image located in column i, row j of the array. First, translate DVI_{1,1} to the right pixel by pixel; after each translation, compute the peak signal-to-noise ratio (PSNR) of the overlapping part of the translated DVI_{1,1} and DVI_{2,1}. The PSNR is defined as:
PSNR(s) = 10 × log10[255² / MSE(s)]
where s is the translation distance of DVI_{1,1}, PSNR(s) is the PSNR of the overlapping part of the translated DVI_{1,1} and DVI_{2,1}, and MSE(s) is the mean squared error, defined as:
MSE(s) = [1 / ((X − s) × Y)] Σ_{x=0}^{X−1−s} Σ_{y=0}^{Y−1} [DVI_{1,1}(x, y) − DVI_{2,1}(x + s, y)]²
where x and y are respectively the horizontal and vertical position coordinates of a pixel in DVI_{1,1}, and X and Y are respectively the numbers of pixels in the horizontal and vertical directions of a discrete viewpoint image;
Then take the displacement corresponding to the maximum PSNR as the horizontal relative displacement of the photographed object between any two horizontally adjacent discrete viewpoint images in the array;
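The PSNR-maximizing search of step 2.1 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes grayscale images stored as NumPy arrays of shape (Y, X), and the search bound s_max is an added convenience parameter.

```python
import numpy as np

def horizontal_displacement(dvi_a, dvi_b, s_max=None):
    """Estimate DH between two horizontally adjacent viewpoint images.

    dvi_a plays the role of DVI_{1,1} and dvi_b of DVI_{2,1}. dvi_a is
    shifted right pixel by pixel; the shift s whose overlap with dvi_b
    gives the highest PSNR is returned.
    """
    Y, X = dvi_a.shape
    if s_max is None:
        s_max = X - 1
    best_s, best_psnr = 0, -np.inf
    for s in range(1, s_max):
        # Overlap: columns 0..X-1-s of dvi_a against columns s..X-1 of dvi_b,
        # matching the definition MSE(s) over DVI_{1,1}(x,y) - DVI_{2,1}(x+s,y).
        diff = dvi_a[:, :X - s].astype(float) - dvi_b[:, s:].astype(float)
        mse = np.mean(diff ** 2)
        psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
        if psnr > best_psnr:
            best_psnr, best_s = psnr, s
    return best_s
```

The same function gives the vertical displacement of step 2.2 when both images are transposed before the call.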
2.2 Determine the vertical relative displacement of the photographed object between any two vertically adjacent discrete viewpoint images in the array: first transpose DVI_{1,1} and DVI_{1,2}, then apply the same computation as used for the horizontal relative displacement;
2.3 Determine the positions of the upper, lower, left and right boundaries of the photographed object in DVI_{1,1}: first, compute the difference image of DVI_{1,1} and DVI_{2,1} and apply median filtering; then take the positions of the upper, lower and left boundaries of all non-zero points in the filtered difference image as the upper, lower and left boundaries of the object in DVI_{1,1}, and subtract the object's horizontal relative displacement from the position of the right boundary of all non-zero points in the filtered difference image to obtain its right boundary in DVI_{1,1};
2.4 Determine the positions of the upper, lower, left and right boundaries of the photographed object in the last-row, last-column discrete viewpoint image of the array: first, compute the difference image of the last-row, last-column discrete viewpoint image and the last-row, second-to-last-column discrete viewpoint image and apply median filtering; then take the positions of the upper, lower and right boundaries of all non-zero points in the filtered difference image as the upper, lower and right boundaries of the object in the last-row, last-column image, and add the object's horizontal relative displacement to the position of the left boundary of all non-zero points in the filtered difference image to obtain its left boundary in the last-row, last-column image;
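Step 2.3 can be sketched as below. The 3 × 3 median-filter size, the edge padding and the strict non-zero test are assumptions; the patent only says "median filtering" without fixing these details.

```python
import numpy as np

def median3x3(img):
    """3 x 3 median filter built from shifted views of an edge-padded copy."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

def object_bounds_first_image(dvi11, dvi21, DH):
    """Step 2.3 sketch: boundaries of the photographed object in DVI_{1,1}.

    dvi11, dvi21: grayscale (Y, X) arrays; DH: the object's horizontal
    relative displacement found in step 2.1. Returns 0-based
    (top, bottom, left, right) coordinates.
    """
    diff = median3x3(np.abs(dvi11.astype(float) - dvi21.astype(float)))
    ys, xs = np.nonzero(diff)
    top, bottom, left = ys.min(), ys.max(), xs.min()
    # Right boundary: rightmost non-zero column of the filtered
    # difference image minus the horizontal displacement DH.
    right = xs.max() - DH
    return top, bottom, left, right
```

Step 2.4 is the mirror image of this computation on the last-row images, adding DH to the leftmost non-zero column instead.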
3. Calculate the horizontal relative displacement of the interception window between any two horizontally adjacent discrete viewpoint images in the array and its vertical relative displacement between any two vertically adjacent discrete viewpoint images in the array:
These displacements are given by:
MH=DH+delta
MV=DV+delta
where MH and MV are respectively the horizontal and vertical relative displacements of the interception window between horizontally and vertically adjacent discrete viewpoint images, DH and DV are respectively the corresponding horizontal and vertical relative displacements of the photographed object, and delta is the depth-effect factor;
According to actual needs, delta may take any value within its allowed range, which is:
delta_max=min[(MH_max-DH),(MV_max-DV)]
delta_min=0
where delta_max and delta_min are respectively the maximum and minimum values that delta may take, MH_max and MV_max are respectively the maxima of the interception window's horizontal and vertical relative displacements, and min(·) takes the minimum of the values in the brackets;
MH_max and MV_max are given by:
MH_max=min(X-IR,IL)/(M-1)
MV_max=min(Y-IB,IT)/(N-1)
where IR and IB are respectively the positions of the right and lower boundaries of the photographed object in DVI_{1,1}, IL and IT are respectively the positions of its left and upper boundaries in the last-row, last-column discrete viewpoint image of the array, and M and N are respectively the numbers of discrete viewpoint images in each row and each column of the array;
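The formulas of step 3 can be collected into one small routine. Variable names follow the patent (DH, DV, IR, IB, IL, IT, M, N, X, Y); the function name and the default choice of delta = delta_max are illustrative assumptions.

```python
def window_displacements(DH, DV, X, Y, IR, IB, IL, IT, M, N, delta=None):
    """Step 3 sketch: window displacements MH, MV from the object
    displacements and boundary positions, with delta constrained to
    its allowed range [0, delta_max]."""
    MH_max = min(X - IR, IL) / (M - 1)
    MV_max = min(Y - IB, IT) / (N - 1)
    delta_max = min(MH_max - DH, MV_max - DV)
    if delta is None:            # default here: strongest allowed depth effect
        delta = delta_max
    assert 0 <= delta <= delta_max, "delta outside its allowed range"
    return DH + delta, DV + delta
```

For example, with X = 1000, Y = 800, IR = 400, IB = 300, IL = 700, IT = 600, M = N = 11, DH = 50 and DV = 45, one gets MH_max = 60, MV_max = 50, delta_max = 5, and hence MH = 55, MV = 50.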
4. Calculate the size of the interception window:
The size of the interception window is given by:
W=IR+(M-1)×MH-IL
H=IB+(N-1)×MV-IT
where W and H are respectively the width and height of the interception window;
5. Calculate the position of the lower-right corner of the interception window in the first-row, first-column discrete viewpoint image of the array:
The position of the lower-right corner of the interception window in DVI_{1,1} is given by:
PH=IR
PV=IB
where PH and PV are respectively the horizontal and vertical position coordinates of the lower-right corner of the interception window in DVI_{1,1};
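Steps 4 and 5 are direct formulas; a minimal sketch (function name is an assumption) combining them:

```python
def window_geometry(IR, IB, IL, IT, M, N, MH, MV):
    """Steps 4-5 sketch: interception-window size (W, H) and the
    position (PH, PV) of its lower-right corner in DVI_{1,1}."""
    W = IR + (M - 1) * MH - IL
    H = IB + (N - 1) * MV - IT
    PH, PV = IR, IB
    return W, H, PH, PV
```

Continuing the numerical example from step 3 (IR = 400, IB = 300, IL = 700, IT = 600, M = N = 11, MH = 55, MV = 50) gives W = 250, H = 200, PH = 400, PV = 300.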
6. Intercept the discrete viewpoint image array to generate the sub-image array:
Use the interception window to cut a sub-image out of every discrete viewpoint image in the array. The number of sub-images in the resulting sub-image array equals the number of discrete viewpoint images in the array, and the sub-image in column i, row j of the sub-image array is given by:
SI_{i,j}(u, v) = DVI_{i,j}(PH + (i − 1) × MH − W + u, PV + (j − 1) × MV − H + v)
where SI_{i,j} is the sub-image in column i, row j of the sub-image array, and u = 1, 2, …, W and v = 1, 2, …, H are respectively the horizontal and vertical position coordinates of a pixel in the sub-image;
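The interception of step 6 amounts to array slicing. This sketch uses 0-based NumPy indexing (the patent's indices are 1-based) and assumes MH and MV are integers; a real implementation would have to round a non-integer delta.

```python
import numpy as np

def intercept_subimages(dvi, PH, PV, MH, MV, W, H):
    """Step 6 sketch: cut one W x H sub-image out of every discrete
    viewpoint image. dvi[j][i] is the (Y, X) image in column i, row j."""
    N, M = len(dvi), len(dvi[0])
    sub = [[None] * M for _ in range(N)]
    for j in range(N):
        for i in range(M):
            x1 = PH + i * MH      # right edge (exclusive) for column i
            y1 = PV + j * MV      # bottom edge (exclusive) for row j
            sub[j][i] = dvi[j][i][y1 - H:y1, x1 - W:x1]
    return sub
```

The window slides right by MH per column and down by MV per row, exactly mirroring the (i − 1) × MH and (j − 1) × MV terms of the formula above.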
7. Convert the sub-image array into the elemental image array:
The elemental image array converted from the sub-image array contains W × H elemental images, each of size M pixels × N pixels, and the elemental image in column p, row q of the elemental image array is given by:
EI_{p,q}(r, t) = SI_{r,t}(p, q)
where EI_{p,q} is the elemental image in column p, row q of the elemental image array, and r = 1, 2, …, M and t = 1, 2, …, N are respectively the horizontal and vertical position coordinates of a pixel in the elemental image.
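The pixel remapping EI_{p,q}(r, t) = SI_{r,t}(p, q) of step 7 is a pure index permutation, which in NumPy reduces to one transpose. As before, this is a sketch with 0-based indices rather than the patent's 1-based ones.

```python
import numpy as np

def subimages_to_elemental(sub):
    """Step 7 sketch: sub[t][r] is the (H, W) sub-image in column r,
    row t. Returns a 4D array ei in which ei[q, p] is the (N, M)
    elemental image in column p, row q, so that
    ei[q, p, t, r] == sub[t][r][q, p]."""
    stack = np.stack([np.stack(row) for row in sub])  # shape (N, M, H, W)
    # Swap the array-position axes with the pixel axes.
    return stack.transpose(2, 3, 0, 1)                # shape (H, W, N, M)
```

In words: the pixel at the same position (p, q) in every sub-image is gathered into one M × N elemental image, which is why the array contains W × H elemental images of M × N pixels each.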
Following the acquisition principle of the elemental image array of the integral imaging system, the present invention obtains the discrete viewpoint image array with a single camera mounted on a slide rail. Based on the imaging relationship between the discrete viewpoint image array and the sub-image array, and on the mapping between the sub-image array and the elemental image array, the invention proposes a method that generates the elemental image array by applying window interception to the discrete viewpoint image array, achieving a high-resolution elemental image array without expensive acquisition equipment or a heavy workload.
The present invention can generate the elemental image array of a relatively large photographed object, and the generated array provides a continuous viewing angle during display and can faithfully reproduce the structural information of the object. Compared with the lens-array acquisition method, the invention is not limited by the acquisition equipment and can generate a high-resolution elemental image array. Compared with the camera-array acquisition method, it uses only one camera, greatly reducing the shooting cost and the adjustment workload.
Description of drawings
Fig. 1 is a schematic diagram of the elemental image array acquisition process of the integral imaging system
Fig. 2 is a schematic diagram of the elemental image array display process of the integral imaging system
In the figures: 1. real object 2. light rays 3. lens array 4. recording medium 5. elemental image 6. elemental image array 7. illumination 8. flat-panel display 9. stereo image
Fig. 3 is a flow chart of the elemental image array generation method based on discrete viewpoint collection combined with the window interception algorithm
Fig. 4 is a schematic diagram of the discrete viewpoint image array acquisition platform
Fig. 5 is a schematic diagram of the discrete viewpoint image array
Fig. 6 shows, enlarged, the nine discrete viewpoint images at rows 1, 12 and 24 and columns 1, 12 and 24 of the discrete viewpoint image array
Fig. 7 is a flow chart of the method for computing the positions of the upper, lower, left and right boundaries of the photographed object in DVI_{1,1}
Fig. 8 is a schematic diagram of sub-image array generation by the window interception algorithm
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings. The detailed process of the elemental image array generation method based on discrete viewpoint collection combined with the window interception algorithm (shown in Figure 3) comprises the following steps:
1. Collect the discrete viewpoint image array
Figure 4 shows the acquisition platform for the discrete viewpoint image array. The photographed objects are two toy trucks; the front truck is closer to the camera than the rear truck and partially occludes it. The acquisition process comprises the following steps:
Step 1: initialization. Adjust the two tripods supporting the slide rail so that the rail is fixed at a relatively low height, and mount the camera on the rail. The rail moves the camera from right to left at a uniform speed, and the operator starts and stops the rail with a remote controller.
Step 2: acquire a discrete viewpoint image group. Level the plane of the slide rail with a spirit level and move the camera to the rightmost end of the rail with the remote controller; then hold down the camera's shutter release while starting the rail. As the camera moves, use its continuous-shooting function to capture a sequence of discrete viewpoint images of the photographed object. Arranging the captured images in a row from left to right in shooting order yields the discrete viewpoint image group collected at this rail height.
Step 3: using a vertical leveling staff, raise the two tripods supporting the rail so that the rail rises by a fixed distance.
Step 4: repeat the shooting process of steps 2 and 3 to obtain multiple discrete viewpoint image groups.
Step 5: arrange all the discrete viewpoint image groups from top to bottom in acquisition order into a discrete viewpoint image array; that is, the group collected first is placed in the first row of the array and the group collected last in the last row. The generated array is shown in Figure 5. For ease of observation, the nine discrete viewpoint images at rows 1, 12 and 24 and columns 1, 12 and 24 of the array are enlarged in Figure 6; as Figure 6 shows, as the viewing angle moves, the red truck changes from visible to hidden across the array.
2. Calculate the position of the photographed object in each discrete viewpoint image
The position parameters of the photographed object in each discrete viewpoint image comprise: its horizontal relative displacement between any two horizontally adjacent discrete viewpoint images in the array; its vertical relative displacement between any two vertically adjacent discrete viewpoint images in the array; the positions of its upper, lower, left and right boundaries in the first-row, first-column discrete viewpoint image of the array; and the positions of its upper, lower, left and right boundaries in the last-row, last-column discrete viewpoint image of the array.
Let DVI_{i,j} denote the discrete viewpoint image located in column i, row j of the array. The horizontal relative displacement of the photographed object between any two horizontally adjacent discrete viewpoint images is computed as follows: first, translate DVI_{1,1} to the right pixel by pixel; after each translation, compute the peak signal-to-noise ratio (PSNR) of the overlapping part of the translated DVI_{1,1} and DVI_{2,1}. The PSNR is defined as:
PSNR(s) = 10 × log10[255² / MSE(s)]
where s is the translation distance of DVI_{1,1}, PSNR(s) is the PSNR of the overlapping part of the translated DVI_{1,1} and DVI_{2,1}, and MSE(s) is the mean squared error, defined as:
MSE(s) = [1 / ((X − s) × Y)] Σ_{x=0}^{X−1−s} Σ_{y=0}^{Y−1} [DVI_{1,1}(x, y) − DVI_{2,1}(x + s, y)]²
where x and y are respectively the horizontal and vertical position coordinates of a pixel in DVI_{1,1}, and X and Y are respectively the horizontal and vertical resolutions of a discrete viewpoint image.
Then take the displacement corresponding to the maximum PSNR as the horizontal relative displacement of the photographed object between any two horizontally adjacent discrete viewpoint images in the array.
The vertical relative displacement of the photographed object between any two vertically adjacent discrete viewpoint images is computed as follows: first transpose DVI_{1,1} and DVI_{1,2}, then apply the same computation as used for the horizontal relative displacement.
The positions of the upper, lower, left and right boundaries of the photographed object in DVI_{1,1} are computed as follows (see Figure 7): first, compute the difference image of DVI_{1,1} and DVI_{2,1} and apply median filtering; then take the positions of the upper, lower and left boundaries of all non-zero points in the filtered difference image as the upper, lower and left boundaries of the object in DVI_{1,1}, and subtract the object's horizontal relative displacement from the position of the right boundary of all non-zero points in the filtered difference image to obtain its right boundary in DVI_{1,1}.
The positions of the upper, lower, left and right boundaries of the photographed object in the last-row, last-column discrete viewpoint image of the array are computed as follows: first, compute the difference image of the last-row, last-column discrete viewpoint image and the last-row, second-to-last-column discrete viewpoint image and apply median filtering; then take the positions of the upper, lower and right boundaries of all non-zero points in the filtered difference image as the upper, lower and right boundaries of the object in the last-row, last-column image, and add the object's horizontal relative displacement to the position of the left boundary of all non-zero points in the filtered difference image to obtain its left boundary in the last-row, last-column image.
3. Calculate the horizontal relative displacement of the interception window between any two horizontally adjacent discrete viewpoint images in the array and its vertical relative displacement between any two vertically adjacent discrete viewpoint images in the array, given by:
MH=DH+delta
MV=DV+delta
where MH and MV are respectively the horizontal and vertical relative displacements of the interception window between horizontally and vertically adjacent discrete viewpoint images, DH and DV are respectively the corresponding horizontal and vertical relative displacements of the photographed object, and delta is the depth-effect factor.
According to actual needs, delta may take any value within its allowed range, which is:
delta_max=min[(MH_max-DH),(MV_max-DV)]
delta_min=0
In the formulas, delta_max and delta_min are respectively the maximum and minimum values that delta may take; MH_max and MV_max are respectively the maximum horizontal relative displacement of the intercepting window between any two horizontally adjacent discrete viewpoint images in the array and its maximum vertical relative displacement between any two vertically adjacent discrete viewpoint images; and min(·) denotes taking the minimum of the values in brackets.
The expressions for MH_max and MV_max are:
MH_max=min(X-IR,IL)/(M-1)
MV_max=min(Y-IB,IT)/(N-1)
In the formulas, IR and IB are respectively the positions of the right and lower boundaries of the reference object in DVI_1,1; IL and IT are respectively the positions of its left and upper boundaries in the discrete viewpoint image in the last row, last column of the array; and M and N are respectively the numbers of discrete viewpoint images contained in each row and each column of the array.
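Taken together, the expressions above fix the window displacements once a depth factor is chosen. A numerical sketch with illustrative values (all assumed, not taken from the patent) for the image size, array size, object displacements, and object boundaries:

```python
X, Y = 1024, 768    # pixels per image, horizontal / vertical (assumed)
M, N = 5, 5         # images per row / per column (assumed)
DH, DV = 12, 10     # object displacement between adjacent images (assumed)
IR, IB = 700, 600   # object's right / lower boundary in DVI_1,1 (assumed)
IL, IT = 200, 150   # object's left / upper boundary in the last image (assumed)

MH_max = min(X - IR, IL) / (M - 1)         # min(324, 200) / 4 = 50.0
MV_max = min(Y - IB, IT) / (N - 1)         # min(168, 150) / 4 = 37.5
delta_max = min(MH_max - DH, MV_max - DV)  # min(38.0, 27.5) = 27.5

delta = 20                                 # any value in [0, delta_max]
MH, MV = DH + delta, DV + delta            # window displacements: 32, 30
```

A larger delta increases the disparity between cropped sub-images, and hence the depth effect, which is presumably why the patent calls it the depth-effect factor.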
4. Calculate the size of the intercepting window
The size of the intercepting window is given by:
W=IR+(M-1)×MH-IL
H=IB+(N-1)×MV-IT
In the formulas, W and H are respectively the width and height of the intercepting window.
5. Calculate the position of the lower-right corner of the intercepting window in the discrete viewpoint image in the first row, first column of the array
The position of the lower-right corner of the intercepting window in DVI_1,1 is given by:
PH=IR
PV=IB
In the formulas, PH and PV are respectively the horizontal and vertical position coordinates of the lower-right corner of the intercepting window in DVI_1,1.
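With the same kind of illustrative values (again assumed, not from the patent), the window size and corner position work out as follows; the final assertion checks the geometric consistency the formulas are designed to give, namely that in the last image the window's left edge coincides with the object's left boundary:

```python
# Assumed illustrative values for boundary positions and displacements.
IR, IB = 700, 600        # object's right / lower boundary in DVI_1,1
IL, IT = 200, 150        # object's left / upper boundary in the last image
M, N = 5, 5              # images per row / per column
MH, MV = 32, 30          # window displacements (DH + delta, DV + delta)

W = IR + (M - 1) * MH - IL   # 700 + 128 - 200 = 628  (window width)
H = IB + (N - 1) * MV - IT   # 600 + 120 - 150 = 570  (window height)
PH, PV = IR, IB              # lower-right corner of the window in DVI_1,1

# In the last column of views the window's right edge sits at
# PH + (M-1)*MH, so its left edge lands exactly on IL.
assert PH + (M - 1) * MH - W == IL
```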
6. Crop the discrete viewpoint image array to generate the sub-image array
Figure 8 is a schematic diagram of sub-image array generation, in which the white boxes represent the intercepting window. Every discrete viewpoint image in the discrete viewpoint image array is cropped with the intercepting window to generate the sub-image array; the number of sub-images in the sub-image array equals the number of discrete viewpoint images in the discrete viewpoint image array. The sub-image located in column i, row j of the sub-image array is given by:
SI i,j(u,v)=DVI i,j(PH+(i-1)×MH-W+u,PV+(j-1)×MV-H+v)
In the formula, SI_i,j is the sub-image in column i, row j of the sub-image array, and u = 1, 2, ..., W and v = 1, 2, ..., H are respectively the horizontal and vertical position coordinates of a pixel within the sub-image.
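With pixel coordinates taken 0-based (as in the MSE formula of the claims) and arrays indexed [vertical, horizontal] as is usual in code, the cropping formula amounts to a slice whose right and bottom edges in image (i, j) sit at PH + (i-1)·MH and PV + (j-1)·MV. A sketch (the function name and argument order are assumptions):

```python
import numpy as np

def crop_subimage(dvi, i, j, PH, PV, MH, MV, W, H):
    """Sub-image SI_i,j cut from the discrete viewpoint image in
    column i, row j (i, j are 1-based as in the text; dvi is [y, x])."""
    xr = PH + (i - 1) * MH        # horizontal position of the right edge
    yb = PV + (j - 1) * MV        # vertical position of the bottom edge
    return dvi[yb - H + 1: yb + 1, xr - W + 1: xr + 1]
```

For i = j = 1 this reduces to the W×H block whose lower-right corner is (PH, PV), matching step 5.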
7. Convert the sub-image array into the three-dimensional element image array
The three-dimensional element image array converted from the sub-image array contains W × H three-dimensional element images, each with a resolution of M × N. The three-dimensional element image located in column p, row q of the array is given by:
EI p,q(r,t)=SI r,t(p,q)
In the formula, EI_p,q is the three-dimensional element image in column p, row q of the array, and r = 1, 2, ..., M and t = 1, 2, ..., N are respectively the horizontal and vertical position coordinates of a pixel within the three-dimensional element image.
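If the whole sub-image array is held as a 4-D tensor with axes (image column i, image row j, pixel u, pixel v), the conversion EI_p,q(r,t) = SI_r,t(p,q) is nothing more than swapping the image-index axes with the pixel-index axes. A sketch under that assumed memory layout (sizes are illustrative):

```python
import numpy as np

M, N, W, H = 3, 4, 5, 6             # illustrative sizes (assumed)
S = np.random.rand(M, N, W, H)      # S[i, j, u, v] ~ SI_(i+1),(j+1)(u+1, v+1)

# EI_p,q(r, t) = SI_r,t(p, q): the pixel index of an elemental image
# selects *which* sub-image to read, and vice versa.
E = S.transpose(2, 3, 0, 1)         # E[p, q, r, t] = S[r, t, p, q]
```

The result holds W × H three-dimensional element images of M × N pixels each, matching the count and resolution stated above.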

Claims (1)

1. A method of generating a three-dimensional element image array by discrete viewpoint acquisition combined with a window interception algorithm, characterized in that it comprises the following steps:
1.1 Acquire the discrete viewpoint image array, comprising the following steps:
1.1.1 Initialization: adjust the two tripods supporting the stereo track so that the track is fixed at a low height; mount the camera on the stereo track, which can move the camera from right to left at a uniform speed; the operator starts and stops the stereo track with a remote controller;
1.1.2 Obtain a discrete viewpoint image group: level the plane of the stereo track with a spirit level; move the camera to the right end of the track with the remote controller; hold down the camera's shutter release while starting the stereo track; as the camera moves, capture multiple discrete viewpoint images of the reference object using the camera's continuous-shooting function; arranging the captured images in a row from left to right in shooting order yields the discrete viewpoint image group acquired at this height of the stereo track;
1.1.3 Using a vertical leveling staff, raise the two tripods supporting the stereo track so that the track rises by a fixed distance;
1.1.4 Repeat the shooting process of steps 1.1.2 and 1.1.3 to obtain multiple discrete viewpoint image groups;
1.1.5 Arrange all discrete viewpoint image groups from top to bottom in order of acquisition to form the discrete viewpoint image array, that is, the group acquired first is placed in the first row of the array and the group acquired last in the last row;
1.2 Calculate the position of the reference object in every discrete viewpoint image:
The position parameters of the reference object in each discrete viewpoint image comprise: the horizontal relative displacement of the reference object between any two horizontally adjacent discrete viewpoint images in the array; its vertical relative displacement between any two vertically adjacent discrete viewpoint images; the positions of its upper, lower, left, and right boundaries in the discrete viewpoint image in the first row, first column of the array; and the positions of its upper, lower, left, and right boundaries in the discrete viewpoint image in the last row, last column of the array;
1.2.1 Determine the horizontal relative displacement of the reference object between any two horizontally adjacent discrete viewpoint images in the array, computed as follows: let DVI_i,j denote the discrete viewpoint image located in column i, row j of the array; first, translate DVI_1,1 to the right pixel by pixel and, after each translation, compute the peak signal-to-noise ratio of the overlapping part of the translated DVI_1,1 and DVI_2,1, the peak signal-to-noise ratio being defined as:
PSNR(s) = 10 × log10[255^2 / MSE(s)]
In the formula, s is the translation distance of DVI_1,1, PSNR(s) is the peak signal-to-noise ratio of the overlapping part of the translated DVI_1,1 and DVI_2,1, and MSE(s) is the mean square error, defined as:
MSE(s) = 1 / ((X - s) × Y) × Σ_{x=0}^{X-1-s} Σ_{y=0}^{Y-1} [DVI_1,1(x, y) - DVI_2,1(x + s, y)]^2
In the formula, x and y are respectively the horizontal and vertical position coordinates of a pixel in DVI_1,1, and X and Y are respectively the numbers of pixels contained in the horizontal and vertical directions of a discrete viewpoint image;
Then take the translation distance corresponding to the maximum peak signal-to-noise ratio as the horizontal relative displacement of the reference object between any two horizontally adjacent discrete viewpoint images in the array;
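Step 1.2.1 is a one-dimensional exhaustive search over the translation distance. A minimal sketch of that search (the function name, the search limit s_max, and the use of the 8-bit peak value 255 are assumptions; the patent does not bound the search this way):

```python
import numpy as np

def horizontal_displacement(dvi11, dvi21, s_max):
    """Translation s of dvi11 to the right that maximises the PSNR of
    its overlap with dvi21, i.e. the displacement DH of step 1.2.1."""
    X = dvi11.shape[1]
    best_s, best_psnr = 0, -np.inf
    for s in range(s_max + 1):
        # Overlap: dvi11(x, y) against dvi21(x + s, y), x = 0 .. X-1-s,
        # exactly the MSE(s) sum divided by (X - s) * Y.
        diff = dvi11[:, :X - s].astype(float) - dvi21[:, s:].astype(float)
        mse = np.mean(diff ** 2)
        psnr = np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
        if psnr > best_psnr:
            best_s, best_psnr = s, psnr
    return best_s
```

The vertical displacement of step 1.2.2 would be obtained by transposing both images and calling the same routine.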
1.2.2 Determine the vertical relative displacement of the reference object between any two vertically adjacent discrete viewpoint images in the array, computed as follows: first transpose DVI_1,1 and DVI_1,2, then proceed exactly as in the computation of the horizontal relative displacement;
1.2.3 Determine the positions of the upper, lower, left, and right boundaries of the reference object in DVI_1,1, computed as follows: first, compute the difference image of DVI_1,1 and DVI_2,1 and apply median filtering; then, take the positions of the upper, lower, and left boundaries of all non-zero points in the filtered difference image as the positions of the upper, lower, and left boundaries of the reference object in DVI_1,1, and take the position of the right boundary of the non-zero points minus the horizontal relative displacement of the reference object between any two horizontally adjacent discrete viewpoint images as the position of the right boundary of the reference object in DVI_1,1;
1.2.4 Determine the positions of the upper, lower, left, and right boundaries of the reference object in the discrete viewpoint image in the last row, last column of the array, computed as follows: first, compute the difference image between the discrete viewpoint image in the last row, last column of the array and the discrete viewpoint image in the last row, second-to-last column, and apply median filtering; then, take the positions of the upper, lower, and right boundaries of all non-zero points in the filtered difference image as the positions of the upper, lower, and right boundaries of the reference object in the last-row, last-column image, and take the position of the left boundary of the non-zero points plus the horizontal relative displacement of the reference object between any two horizontally adjacent discrete viewpoint images as the position of the left boundary of the reference object in that image;
1.3 Calculate the horizontal relative displacement of the intercepting window between any two horizontally adjacent discrete viewpoint images in the array and its vertical relative displacement between any two vertically adjacent discrete viewpoint images:
The expressions for the horizontal and vertical relative displacements of the intercepting window are:
MH=DH+delta
MV=DV+delta
In the formulas, MH and MV are respectively the horizontal relative displacement of the intercepting window between any two horizontally adjacent discrete viewpoint images in the array and its vertical relative displacement between any two vertically adjacent discrete viewpoint images; DH and DV are the corresponding horizontal and vertical relative displacements of the reference object; and delta is the depth-effect factor;
According to actual needs, delta may take any value within its allowed range, which is:
delta_max=min[(MH_max-DH),(MV_max-DV)]
delta_min=0
In the formulas, delta_max and delta_min are respectively the maximum and minimum values that delta may take; MH_max and MV_max are respectively the maximum horizontal relative displacement of the intercepting window between any two horizontally adjacent discrete viewpoint images in the array and its maximum vertical relative displacement between any two vertically adjacent discrete viewpoint images; and min(·) denotes taking the minimum of the values in brackets;
The expressions for MH_max and MV_max are:
MH_max=min(X-IR,IL)/(M-1)
MV_max=min(Y-IB,IT)/(N-1)
In the formulas, IR and IB are respectively the positions of the right and lower boundaries of the reference object in DVI_1,1; IL and IT are respectively the positions of its left and upper boundaries in the discrete viewpoint image in the last row, last column of the array; and M and N are respectively the numbers of discrete viewpoint images contained in each row and each column of the array;
1.4 Calculate the size of the intercepting window:
The size of the intercepting window is given by:
W=IR+(M-1)×MH-IL
H=IB+(N-1)×MV-IT
In the formulas, W and H are respectively the width and height of the intercepting window;
1.5 Calculate the position of the lower-right corner of the intercepting window in the discrete viewpoint image in the first row, first column of the array:
The position of the lower-right corner of the intercepting window in DVI_1,1 is given by:
PH=IR
PV=IB
In the formulas, PH and PV are respectively the horizontal and vertical position coordinates of the lower-right corner of the intercepting window in DVI_1,1;
1.6 Crop the discrete viewpoint image array to generate the sub-image array:
Every discrete viewpoint image in the discrete viewpoint image array is cropped with the intercepting window to generate the sub-image array; the number of sub-images in the sub-image array equals the number of discrete viewpoint images in the discrete viewpoint image array; the sub-image located in column i, row j of the sub-image array is given by:
SI i,j(u,v)=DVI i,j(PH+(i-1)×MH-W+u,PV+(j-1)×MV-H+v)
In the formula, SI_i,j is the sub-image in column i, row j of the sub-image array, and u = 1, 2, ..., W and v = 1, 2, ..., H are respectively the horizontal and vertical position coordinates of a pixel within the sub-image;
1.7 Convert the sub-image array into the three-dimensional element image array:
The three-dimensional element image array converted from the sub-image array contains W × H three-dimensional element images, each with a size of M pixels × N pixels; the three-dimensional element image located in column p, row q of the array is given by:
EI p,q(r,t)=SI r,t(p,q)
In the formula, EI_p,q is the three-dimensional element image in column p, row q of the array, and r = 1, 2, ..., M and t = 1, 2, ..., N are respectively the horizontal and vertical position coordinates of a pixel within the three-dimensional element image.
CN201310051957.4A 2013-02-15 2013-02-15 Method of generating stereo image array of discrete view collection combined window intercept algorithm Active CN103096113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310051957.4A CN103096113B (en) 2013-02-15 2013-02-15 Method of generating stereo image array of discrete view collection combined window intercept algorithm


Publications (2)

Publication Number Publication Date
CN103096113A true CN103096113A (en) 2013-05-08
CN103096113B CN103096113B (en) 2015-01-07

Family

ID=48208165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310051957.4A Active CN103096113B (en) 2013-02-15 2013-02-15 Method of generating stereo image array of discrete view collection combined window intercept algorithm

Country Status (1)

Country Link
CN (1) CN103096113B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108495111A (en) * 2018-04-11 2018-09-04 吉林大学 A kind of three-dimensional element image array code method based on imaging geometry feature

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070247522A1 (en) * 2003-12-18 2007-10-25 University Of Durham Method and Apparatus for Generating a Stereoscopic Image
CN102065313A (en) * 2010-11-16 2011-05-18 上海大学 Uncalibrated multi-viewpoint image correction method for parallel camera array
CN102223556A (en) * 2011-06-13 2011-10-19 天津大学 Multi-view stereoscopic image parallax free correction method
CN102447934A (en) * 2011-11-02 2012-05-09 吉林大学 Synthetic method of stereoscopic elements in combined stereoscopic image system collected by sparse lens
CN102523462A (en) * 2011-12-06 2012-06-27 南开大学 Method and device for rapidly acquiring elemental image array based on camera array



Also Published As

Publication number Publication date
CN103096113B (en) 2015-01-07


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant