WO2010095770A1 - Method for automatically adjusting depth of field to visualize stereoscopic image - Google Patents


Info

Publication number
WO2010095770A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
field
pixels
stereoscopic image
focal plane
Application number
PCT/KR2009/000876
Other languages
French (fr)
Korean (ko)
Inventor
신병석
강동수
Original Assignee
인하대학교 산학협력단
Application filed by 인하대학교 산학협력단 filed Critical 인하대학교 산학협력단
Publication of WO2010095770A1 publication Critical patent/WO2010095770A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Definitions

  • The present invention relates to a method for adjusting the depth of field of a stereoscopic image according to the user's focal plane, and more specifically to a method for automatically adjusting the depth of field of the sampling pixels included in the circles of confusion of pixels located within a set number of pixels of the stereoscopic-image pixel corresponding to the focal plane, where the focal plane is calculated from the focal length at which the user views the stereoscopic image.
  • Stereoscopic (stereoscopy) display technology represents a scene of the virtual or real world in three dimensions so that the viewer feels immersed in it.
  • A stereoscopic image lets the user perceive depth from two two-dimensional images by using binocular disparity (the difference between the two eyes' viewpoints), as the human visual system does.
  • Such stereoscopic images provide depth-of-focus (depth of the gaze of the stereoscopic image) so that a person can feel a real sense of distance. Therefore, watching stereoscopic images for a long time may cause dizziness or vomiting because the user's focus is continuously forced.
  • A conventional system applies the depth of field of a stereoscopic image in real time by moving a pointer, with a device such as a mouse, to the position the user is looking at; however, it can only tell where on the screen the user is looking, and cannot calculate at which depth in the generated scene the user's focal plane lies.
  • The present invention has been proposed to solve the above problems. Its object is to provide a method that calculates the focal length at which the user views the stereoscopic image at set time intervals, derives the focal plane from that focal length, and automatically adjusts, according to the focal length, the depth of field of the sampling pixels included in the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane.
  • Another object of the present invention is to provide a control range for the depth of field in the stereoscopic image such that circles of confusion of pixels closer to the pixel corresponding to the focal plane are adjusted by sampling fewer pixels, and circles of confusion of pixels farther from that pixel are adjusted by sampling more pixels.
  • More preferably, before step (a), the method further includes measuring the focal length for each inter-pupillary distance of the user, for use when calculating the user's focal length from the distance between the two eyes in step (a-2).
  • More preferably, the distance between the two eyes in step (a-2) is calculated through the equation given (as an image) in the description, where:
  • M_x is the sum of the x coordinates of the pupil region,
  • M_y is the sum of the y coordinates of the pupil region,
  • P_cx and P_cy are the center coordinates of the pupil, and
  • n is the number of pixels in the search window belonging to the pupil region.
  • The circle of confusion containing the pixel closest to the pixel corresponding to the focal plane adjusts the depth of field by sampling 6 pixels, the circle of confusion containing the second-closest pixel adjusts it by sampling 12 pixels, and the circles of confusion containing the third-closest through the set-number-th pixels adjust it by sampling 24 pixels.
  • the focal length is divided into three stages according to the distance.
  • The automatic depth-of-field adjustment method for stereoscopic visualization calculates in real time the focal length at which the user views the stereoscopic image, and automatically adjusts the depth of field of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane, thereby guaranteeing a minimum rendering speed for the depth of field of the stereoscopic image with minimal distortion.
  • FIG. 1 is a block diagram of an automatic depth of field control system for stereoscopic visualization of the present invention.
  • FIG. 2 is a conceptual diagram illustrating an off-axis technique and an on-axis technique.
  • FIG. 3 is a conceptual diagram showing the circle of confusion appearing on the image plane for a point B farther away than the focal plane.
  • FIG. 4 is a block diagram showing a sampling method for implementing a depth of field in the stereoscopic image of the present invention.
  • Figure 5 is a flow chart of the automatic depth of field control method for stereoscopic visualization of the present invention.
  • Figure 6 is a graph comparing test subjects' experiences of a general stereoscopic image and of the stereoscopic image of the present invention.
  • the automatic depth of field control system 100 for stereoscopic visualization of the present invention includes an eye-tracking device 10 and a rendering module 20.
  • the eye tracking device 10 may include an LED light 12, a camera 14, a controller 16, and a calculator 18.
  • The LED light 12 consists of infrared LEDs that irradiate the user's two eyes with electromagnetic waves, e.g. infrared rays, which place no direct strain on the eyes.
  • The camera 14 preferably consists of a pair of infrared cameras that each photograph the reflection of the infrared rays from the pupils of the user's two eyes.
  • The LED light 12 is preferably arranged in parallel with the pair of cameras 14 so that the pupil reflection points of the user's two eyes can be reliably detected.
  • The controller 16 distinguishes the pupil from the cornea at the eye positions identified by the camera 14 using a double-threshold method, and segments the pupil region using a labeling method. Since the double-threshold and labeling methods are already known techniques, their detailed description is omitted.
  • To extract the pupil position accurately, the controller 16 moves a search window using the mean-shift method; after the search window's initial position is determined from the infrared reflection point, the center of each of the user's pupils is extracted from it.
  • The calculator 18 computes the distance between the two eyes from the center positions of the pupils of the user's two eyes.
  • The distance between the user's two eyes is calculated through Equation 1 (given as an image in the description), where:
  • M_x is the sum of the x coordinates of the pupil region,
  • M_y is the sum of the y coordinates of the pupil region,
  • P_cx and P_cy are the center coordinates of the pupil, and
  • n is the number of pixels in the search window belonging to the pupil region.
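Equation 1 itself appears only as an image in the source, but the variable definitions indicate the standard centroid formula, P_cx = M_x / n and P_cy = M_y / n, applied to each eye; the interocular distance then follows from the two centers. A minimal sketch under that assumption (function names and the pixel-mask format are illustrative, not from the patent):

```python
import math

def pupil_center(pupil_pixels):
    """Centroid of the pixels labeled as pupil inside the search window:
    P_cx = M_x / n, P_cy = M_y / n."""
    n = len(pupil_pixels)
    m_x = sum(x for x, _ in pupil_pixels)  # M_x: sum of x coordinates
    m_y = sum(y for _, y in pupil_pixels)  # M_y: sum of y coordinates
    return (m_x / n, m_y / n)

def interocular_distance(left_pixels, right_pixels):
    """Euclidean distance between the two pupil centers."""
    lx, ly = pupil_center(left_pixels)
    rx, ry = pupil_center(right_pixels)
    return math.hypot(rx - lx, ry - ly)
```

In practice the pixel masks would come from the double-threshold and labeling steps described above.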
  • Because the focal length for each inter-pupillary distance is measured in advance, the calculator 18 can calculate the user's focal length from the calculated distance between the two eyes.
  • The calculator 18 divides the focal length into three distance-range levels, e.g. the farthest, middle, and closest regions, and passes the result to the rendering module.
  • The focal length is limited to three levels because experiments on the change in pupil distance with gaze depth showed that the resolution of the camera 14 (640 x 80) makes a finer subdivision impractical.
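Since the pre-measured focal lengths are reduced to three distance ranges, the mapping from interocular distance to focal-length stage amounts to a three-way lookup. A sketch, with threshold values chosen purely for illustration (the patent gives no concrete numbers):

```python
def focal_stage(interocular_px, near_threshold=58.0, far_threshold=62.0):
    """Quantize a measured interocular distance (in pixels) into one of
    three focal-length stages. The eyes converge (pupils move closer
    together) when focusing near, so a smaller distance means nearer
    focus; the threshold values here are purely illustrative."""
    if interocular_px < near_threshold:
        return "near"
    if interocular_px < far_threshold:
        return "middle"
    return "far"
```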
  • The rendering module 20 automatically adjusts, at set time intervals, the depth of field of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the focal plane of the stereoscopic image calculated by the eye-tracking device, according to the user's focal length.
  • the principle of generating the stereoscopic image by the rendering module 20 is as follows.
  • FIG. 2 is a conceptual diagram illustrating an off-axis technique and an on-axis technique.
  • The off-axis technique sets up the cameras 14 so that the left and right cameras, separated by a set distance e, are both aimed at the same target point, as shown in FIG. 2.
  • The on-axis technique sets the distance between the cameras 14 to the set distance e and aims the two cameras 14 in parallel, as shown in FIG. 2.
  • The two methods do not differ greatly, but in the off-axis method the strength of the stereoscopic effect depends on how the target point is chosen.
  • Because the on-axis technique keeps the cameras' viewing axes parallel, each camera 14 position is obtained by a single translation, and the stereoscopic effect does not change. The present invention therefore uses the comparatively simple on-axis technique in consideration of execution speed.
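The one-translation property of the on-axis setup can be sketched directly (coordinate conventions and names are illustrative; the shared view direction is taken along -z):

```python
def on_axis_cameras(center, e):
    """Left/right camera placement for the on-axis technique: the two
    cameras are separated by the set distance e and aimed in parallel,
    so each position is one lateral translation of the central camera,
    with no re-aiming toward a shared target as in the off-axis case."""
    cx, cy, cz = center
    view_dir = (0.0, 0.0, -1.0)  # identical viewing direction for both eyes
    left = ((cx - e / 2.0, cy, cz), view_dir)
    right = ((cx + e / 2.0, cy, cz), view_dir)
    return left, right
```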
  • the principle of controlling the depth of field of the stereoscopic image by the rendering module 20 is as follows.
  • FIG. 3 is a conceptual diagram illustrating a confusion circle appearing at an image plane of a point B farther from the focal plane.
  • A point at the focused distance, such as point A, is imaged as a single point on the image plane, but a point closer or farther away, such as point B, is imaged as something larger than one point.
  • A circle of confusion is the circle of pixels on the image to which such an out-of-focus object point corresponds.
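For illustration, the diameter of a circle of confusion can be computed from the standard thin-lens relation (textbook optics, not a formula stated in the patent):

```python
def coc_diameter(d, d_focus, focal_length, aperture):
    """Diameter of the circle of confusion for an object point at distance
    d when a lens of the given focal length and aperture diameter is
    focused at d_focus (thin-lens model; all lengths in the same unit).
    A point on the focal plane (d == d_focus) maps to a single point,
    i.e. diameter 0, and the diameter grows with |d - d_focus|."""
    return abs(aperture * focal_length * (d - d_focus) /
               (d * (d_focus - focal_length)))
```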
  • the rendering module 20 of the present invention adjusts the depth of field of the pixels of the confusion circle.
  • Adjusting the depth of field over the circles of confusion of all pixels relative to the pixel corresponding to the calculated focal plane is problematic: the larger the relative distance of a point from the focal plane, the larger its circle of confusion, so the amount of computation becomes excessive. That is, to calculate the color of a desired sampling point, the rendering module 20 must load the colors of the numerous surrounding points and weight them by distance, and the amount of computation increases exponentially.
  • Conversely, if the rendering module 20 ignores the circles of confusion around the pixel corresponding to the calculated focal plane, image quality deteriorates and severe color bleeding appears in the resulting image.
  • The rendering module 20 of the present invention therefore automatically adjusts, at set time intervals, the depth of field of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane. That is, circles of confusion whose pixels are closer to the pixel corresponding to the focal plane are adjusted by sampling fewer pixels, and circles of confusion whose pixels are farther from it are adjusted by sampling more pixels.
  • In this way, the rendering module 20 of the present invention replaces the images visualized for the user's two eyes with images appropriately blurred according to the circles of confusion within the set range. Because this matches how the human eye perceives a scene, the stereoscopic image can reproduce the effect seen by the human eye.
  • The first circle of confusion, containing the pixel closest to the pixel corresponding to the focal plane, selects six pixels (at 60° intervals) as sampling pixels to adjust the depth of field.
  • The pixel closest to the pixel corresponding to the focal plane is preferably a circle-of-confusion pixel located three pixels away from it.
  • The second circle of confusion, containing the second-closest pixel, i.e. four pixels away from the pixel corresponding to the focal plane, selects 12 pixels (at 30° intervals) as sampling pixels to adjust the depth of field.
  • The third, fourth, and fifth circles of confusion, containing the third-closest and subsequent pixels out to the set number (from five pixels away from the pixel corresponding to the focal plane), each adjust the depth of field by selecting 24 sampling pixels (at 15° intervals). That is, if virtual rays are drawn from the focal plane outward to the fifth circle of confusion, the sampling pixels of the first through fifth circles of confusion all lie on those rays.
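The ring-based sampling pattern described above can be sketched by generating the offsets directly; note that 15° divides both 30° and 60°, which is why all samples line up on common rays from the center:

```python
import math

def sampling_offsets():
    """(dx, dy) offsets of the sampling pixels for the five circles of
    confusion: 6 samples at 60-degree intervals on the innermost ring,
    12 at 30 degrees on the second, and 24 at 15 degrees on each of the
    third to fifth rings. The ring radii (3..7 pixels) follow the example
    distances in the text as far as the translation allows."""
    rings = [(3, 6), (4, 12), (5, 24), (6, 24), (7, 24)]
    offsets = []
    for radius, count in rings:
        step = 360.0 / count  # 60, 30, or 15 degrees
        for i in range(count):
            a = math.radians(i * step)
            offsets.append((radius * math.cos(a), radius * math.sin(a)))
    return offsets
```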
  • The rendering module 20 must adjust the depth of field of the pixels around the pixel corresponding to the focal plane of the stereoscopic image as effectively as possible while minimizing the loss of image quality, for real-time stereoscopic image generation. Sampling too many circles of confusion lengthens the rendering time; sampling too few shortens it but lowers image quality. The numbers of circles of confusion and of sampling pixels are therefore chosen to balance the two.
  • FIG. 5 is a flowchart illustrating an automatic depth of field adjustment method for stereoscopic visualization of the present invention.
  • the LED lighting 12 of the eye tracking device generates a reflection point by irradiating and reflecting infrared rays to both eyes of a user (S100).
  • The camera 14 of the eye-tracking device photographs the reflection points at set time intervals to determine the positions of the user's two eyes (S102), and the controller 16 of the eye-tracking device separates the pupil from the cornea at the two eye positions and segments the separated pupil (S104).
  • The controller 16 of the eye-tracking device extracts the center position of each pupil using the search window (S106), and the calculator 18 of the eye-tracking device calculates the distance between the two eyes from the pupil centers determined at each set time interval (S108).
  • the calculation unit 18 of the eye tracking device calculates a focal length of the user using the calculated distance (S110) and transfers it to the rendering module 20.
  • When the calculator 18 calculates the user's focal length from the distance between the two eyes, it uses the focal lengths measured in advance for each inter-pupillary distance of the user.
  • The rendering module 20 identifies the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the focal plane of the stereoscopic image calculated by the calculator 18 of the eye-tracking device (S112), then samples the set number of pixels from each identified circle of confusion and adjusts the depth of field according to the focal length (S114). In doing so, the rendering module 20 samples fewer pixels for circles of confusion closer to the pixel corresponding to the focal plane and more pixels for circles of confusion farther from it.
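The depth-of-field adjustment at step S114 reduces, for each pixel, to combining a small set of samples taken at the ring offsets. A deliberately simplified sketch (an unweighted average over a dict-backed grayscale image; both simplifications are assumptions, not the patent's implementation):

```python
def blurred_color(image, x, y, offsets):
    """Schematic of the reduced-sampling blur: average the colors found
    at the sampling offsets around pixel (x, y). Here `image` is a dict
    mapping (x, y) to a grayscale value; samples that fall outside the
    image are simply skipped. A real renderer would weight each sample
    by its distance, as the text notes."""
    total, count = 0.0, 0
    for dx, dy in offsets:
        sample = (round(x + dx), round(y + dy))
        if sample in image:
            total += image[sample]
            count += 1
    return total / count if count else image[(x, y)]
```

Pixels on the focal plane would keep their original color; only pixels whose circles of confusion fall within the set range are replaced by such averages.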
  • The depth-of-field image quality was compared for different numbers of samples. First, when the depth of field was adjusted with only 12 sampling pixels around the pixel corresponding to the focal plane, the rendering speed exceeded 100 fps on average, but severe color bleeding occurred, and a white ring was seen to form like a border around the mesh.
  • Figure 6 is a graph comparing test subjects' experiences of a general stereoscopic image and of the stereoscopic image according to the present invention.
  • A survey was conducted after ten users had each experienced a general 3D image and the system proposed in the invention.
  • Most of the subjects felt more dizziness when not using the system of the present invention; accordingly, when using the method of the present invention, they could feel greater immersion than with general stereoscopic images.

Abstract

The present invention relates to a method for adjusting the depth of field of a stereoscopic image according to the focal plane of a user. More specifically, the present invention comprises the steps of: (a) determining the distance between the pupils of a user at a set time interval to calculate the focal distance at which the user views a stereoscopic image; (b) calculating a focal plane from the calculated focal distance; and (c) automatically adjusting the depth of field of sampling pixels included in circles of confusion of pixels that lie within a set number around the stereoscopic-image pixel corresponding to the calculated focal plane. The method for automatically adjusting depth of field to visualize a stereoscopic image according to the present invention thus calculates, at a set time interval, the focal distance at which the user views the stereoscopic image and automatically adjusts the depth of field of the circle-of-confusion pixels within the set range, thereby guaranteeing a minimum rendering speed for the depth of field of a stereoscopic image with minimal distortion.

Description

[Correction under Rule 26, 28.10.2009] Method for automatically adjusting depth of field to visualize a stereoscopic image
The present invention relates to a method for adjusting the depth of field of a stereoscopic image according to the user's focal plane, and more specifically to a method for automatically adjusting the depth of field of the sampling pixels included in the circles of confusion of pixels located within a set number of pixels of the stereoscopic-image pixel corresponding to the focal plane, where the focal plane is calculated from the focal length at which the user views the stereoscopic image.
Stereoscopic (stereoscopy) display technology represents a scene of the virtual or real world in three dimensions so that the viewer feels immersed in it. That is, a stereoscopic image lets the user perceive depth from two two-dimensional images by using binocular disparity (the difference between the two eyes' viewpoints), as the human eyes do.
Such stereoscopic images provide depth of focus (the depth of the gaze into the stereoscopic image) so that a person feels a real sense of distance. Watching stereoscopic images for a long time can therefore cause dizziness or vomiting, because the user's focus is continuously forced.
As a conventional technique for solving this problem, a system has been proposed that applies the depth of field of a stereoscopic image in real time by moving a pointer, with a device such as a mouse, to the position the user is looking at.
However, such a conventional depth-of-field application system can only tell where on the screen the user is looking; it cannot calculate at which depth in the generated scene the user's focal plane lies.
The present invention has been proposed to solve the above problems. Its object is to provide a method that calculates the focal length at which the user views the stereoscopic image at set time intervals, derives the focal plane from that focal length, and automatically adjusts, according to the focal length, the depth of field of the sampling pixels included in the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane.
Another object of the present invention is to provide a control range for the depth of field in the stereoscopic image such that circles of confusion of pixels closer to the pixel corresponding to the focal plane are adjusted by sampling fewer pixels, and circles of confusion of pixels farther from that pixel are adjusted by sampling more pixels.
To achieve these objects, an automatic depth-of-field adjustment method for stereoscopic image visualization according to the present invention comprises:
(a) calculating the focal length at which the user views the stereoscopic image by determining the distance between the pupils of the user's two eyes at set time intervals;
(b) calculating the focal plane from the calculated focal length; and
(c) automatically adjusting the depth of field of the sampling pixels included in the circles of confusion of pixels located within a set number of pixels of the stereoscopic-image pixel corresponding to the calculated focal plane.
Preferably, step (a) comprises:
(a-1) determining the center positions of the pupils of the user's two eyes at set time intervals; and
(a-2) calculating the distance between the two eyes from the pupil centers determined at each time interval, and calculating the user's focal length from the calculated distance.
More preferably, step (a-1) comprises:
(1) irradiating the two eyes with electromagnetic waves and locating each eye from the resulting reflection points;
(2) separating the pupil from the cornea at the identified eye positions;
(3) segmenting the separated pupil region; and
(4) extracting the center position of the pupil using a search window.
More preferably, before step (a), the method further comprises measuring the focal length for each inter-pupillary distance of the user, for use when calculating the user's focal length from the distance between the two eyes in step (a-2).
More preferably, the distance between the two eyes in step (a-2) is calculated through the following equation.
<Equation>
[Equation image: PCTKR2009000876-appb-I000001]
where M_x is the sum of the x coordinates of the pupil region, M_y is the sum of the y coordinates of the pupil region, P_cx and P_cy are the center coordinates of the pupil, and n is the number of pixels in the search window belonging to the pupil region.
Preferably, in step (c), circles of confusion whose pixels are closer to the pixel corresponding to the focal plane have the depth of field adjusted with fewer sampling pixels, and circles of confusion whose pixels are farther from that pixel have it adjusted with more sampling pixels.
Preferably, in step (c), the circle of confusion containing the pixel closest to the pixel corresponding to the focal plane adjusts the depth of field by sampling 6 pixels, the circle of confusion containing the second-closest pixel adjusts it by sampling 12 pixels, and the circles of confusion containing the third-closest through the set-number-th pixels adjust it by sampling 24 pixels.
More preferably, the focal length is divided into three levels according to distance.
The automatic depth-of-field adjustment method for stereoscopic visualization according to the present invention calculates in real time the focal length at which the user views the stereoscopic image, and automatically adjusts, at set time intervals, the depth of field of the sampling pixels of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane, thereby guaranteeing a minimum rendering speed for the depth of field of the stereoscopic image with minimal distortion.
FIG. 1 is a block diagram of the automatic depth-of-field adjustment system for stereoscopic visualization of the present invention.
FIG. 2 is a conceptual diagram illustrating the off-axis technique and the on-axis technique.
FIG. 3 is a conceptual diagram showing the circle of confusion appearing on the image plane for a point B farther away than the focal plane.
FIG. 4 is a diagram showing the sampling method for implementing depth of field in the stereoscopic image of the present invention.
FIG. 5 is a flowchart of the automatic depth-of-field adjustment method for stereoscopic visualization of the present invention.
FIG. 6 is a graph comparing test subjects' experiences of a general stereoscopic image and of the stereoscopic image of the present invention.
<Explanation of reference numerals for main parts of the drawings>
10: eye-tracking device
12: LED light
14: camera
16: controller
18: calculator
이하에서는 첨부된 도면들을 참조하여, 본 발명에 따른 실시예에 대하여 상세하게 설명하기로 한다.Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
도 1은 본 발명의 입체영상 가시화를 위한 자동 피사계심도 조절 시스템의 구성도이다. 도 1에 도시된 바와 같이, 본 발명의 입체영상 가시화를 위한 자동 피사계심도 조절 시스템(100)은, 아이 트랙킹(eye-tracking) 장치(10) 및 렌더링 모듈(20)을 포함한다.1 is a block diagram of an automatic depth of field control system for stereoscopic visualization of the present invention. As shown in FIG. 1, the automatic depth of field control system 100 for stereoscopic visualization of the present invention includes an eye-tracking device 10 and a rendering module 20.
아이 트랙킹 장치(10)는 LED 조명(12), 카메라(14), 제어부(16) 및 계산부(18)를 포함할 수 있다.The eye tracking device 10 may include an LED light 12, a camera 14, a controller 16, and a calculator 18.
LED 조명(12)은 사용자의 두 눈에 전자기파, 예컨대 사용자의 눈에 직접적인 부담을 주지 않는 적외선을 조사하고 반사시키도록 적외선 LED로 이루어진다. 카메라(14)는 바람직하게는 한 쌍의 적외선 카메라로 이루어져 사용자의 두 눈의 동공에서 적외선이 반사되는 반사면을 각각 촬영한다. 여기서, LED 조명(12)은 사용자의 두 눈의 동공 반사점을 잘 검출할 수 있도록 상기 한 쌍의 카메라(14)와 평행하게 배치되는 것이 바람직하다.The LED light 12 consists of infrared LEDs for irradiating and reflecting electromagnetic waves, such as infrared rays, which do not directly burden the eyes of the user. The camera 14 preferably consists of a pair of infrared cameras to photograph the reflection surfaces of infrared rays reflected from the pupils of both eyes of the user. Here, the LED light 12 is preferably disposed in parallel with the pair of cameras 14 so as to be able to detect the pupil reflection points of the two eyes of the user.
본 발명의 일 실시예에서, 제어부(16)는 상기 카메라(14)에 의해 파악된 사용자의 두 눈의 위치에서 이중 임계치 방법을 이용하여 동공과 각막을 구분하고, 라벨링(labeling) 방법을 이용하여 동공을 구역화할 수 있다. 상기 이중 임계치 방법 및 라벨링(labeling) 방법은 이미 공지된 기술이므로 상세한 설명은 생략한다.In an embodiment of the present invention, the controller 16 distinguishes the pupil and the cornea using a double threshold method at the position of the eyes of the user identified by the camera 14, and uses a labeling method. Pupils can be zoned. Since the double threshold method and the labeling method are already known techniques, detailed description thereof will be omitted.
In addition, in one embodiment of the present invention, the controller 16 moves a search window by a mean-shift method in order to extract the pupil position accurately: after the search window's initial position is determined from the infrared reflection point, the window is used to extract the center of each of the user's pupils.
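The mean-shift refinement of the pupil center can be sketched like this; the window size, the iteration cap, and the starting point are illustrative assumptions, with `start` playing the role of the initial position obtained from the infrared reflection point.

```python
import numpy as np

def mean_shift_center(mask, start, win=5, iters=20):
    """Move a square search window to the centroid of the pupil pixels
    inside it, repeating until the window stops moving (a sketch of
    the mean-shift step)."""
    cy, cx = start
    h = win // 2
    for _ in range(iters):
        y0, x0 = max(0, cy - h), max(0, cx - h)
        ys, xs = np.nonzero(mask[y0:cy + h + 1, x0:cx + h + 1])
        if len(ys) == 0:
            break
        ny, nx = int(round(ys.mean())) + y0, int(round(xs.mean())) + x0
        if (ny, nx) == (cy, cx):
            break
        cy, cx = ny, nx
    return cy, cx
```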
The calculator 18 computes the distance between the two eyes from the center positions of the pupils of the user's two eyes. The distance between the user's eyes is calculated through Equation 1 below.
Equation 1

Pcx = Mx / n,    Pcy = My / n
Here, Mx is the sum of the x coordinates of the pupil region, My is the sum of the y coordinates of the pupil region, Pcx and Pcy are the center coordinates of the pupil, and n is the number of pixels in the search window that have pupil-region values.
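In code, Equation 1 and the resulting inter-eye distance look roughly like this. The centroid is computed per eye as the patent describes; the Euclidean form of the distance between the two centroids is an assumption, since the patent does not spell it out.

```python
def pupil_center(pixels):
    """Equation 1: the pupil center is the centroid of the pupil-valued
    pixels found inside the search window.  pixels is a list of (x, y)."""
    n = len(pixels)
    mx = sum(x for x, _ in pixels)          # Mx: sum of x coordinates
    my = sum(y for _, y in pixels)          # My: sum of y coordinates
    return mx / n, my / n                   # (Pcx, Pcy)

def eye_distance(left_center, right_center):
    """Euclidean distance between the two pupil centers."""
    (lx, ly), (rx, ry) = left_center, right_center
    return ((lx - rx) ** 2 + (ly - ry) ** 2) ** 0.5
```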
In the present invention, the focal length corresponding to each inter-eye distance is measured in advance, so that once the distance between the user's eyes has been computed, the user's focal length with respect to the stereoscopic image is known immediately. Accordingly, the calculator 18 can compute the user's focal length from the calculated inter-eye distance.
Meanwhile, the calculator 18 divides the focal length into three distance-range values (the farthest, the middle, and the nearest) and passes the result to the rendering module. The focal length is divided into three levels because experiments on how the pupil distance changes with gaze depth showed that the limited resolution of the camera 14 (640 x 480) makes a finer subdivision impractical.
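The pre-measured lookup and the three-band quantization might be combined as below. The calibration values and band names are placeholders, since the patent reports only that three bands (farthest, middle, nearest) are used.

```python
# Hypothetical pre-measured calibration mapping inter-pupil distance
# (in image pixels) to the three focal bands; the numbers are placeholders.
CALIBRATION = {"near": 58.0, "middle": 62.0, "far": 66.0}

def focal_band(pupil_dist_px):
    """Quantize a measured inter-pupil distance into one of the three
    focal-length bands by choosing the closest calibration value."""
    return min(CALIBRATION,
               key=lambda band: abs(CALIBRATION[band] - pupil_dist_px))
```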
The rendering module 20 automatically adjusts, at set time intervals and according to the user's focal length, the depth of field of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the focal plane of the stereoscopic image calculated by the eye-tracking device.
The principle by which the rendering module 20 generates a stereoscopic image is as follows.
FIG. 2 is a conceptual diagram illustrating the off-axis and on-axis techniques. As shown in FIG. 2, stereoscopic images are generally generated by either an off-axis or an on-axis technique. In the off-axis technique, shown in FIG. 2(a), the left and right cameras are set a fixed distance e apart and aimed at the same target point. In the on-axis technique, shown in FIG. 2(b), the two cameras are likewise set a distance e apart, but their viewing directions are set parallel.
The two methods differ little in the visualized stereoscopic result, but with the off-axis technique the stereoscopic effect obtained during implementation varies with how the target point is chosen. The on-axis technique, by contrast, computes the cameras' viewing directions along parallel axes, so each camera position is obtained with a single translation, and the stereoscopic effect does not depend on camera position and orientation. The present invention therefore uses the computationally simpler on-axis technique for the sake of execution speed.
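The single-translation property of the on-axis setup can be shown in a few lines; this is a sketch, and the (x, y, z) coordinate convention is an assumption.

```python
def on_axis_cameras(center, e):
    """On-axis stereo rig: both cameras keep the same (parallel) viewing
    direction, and each position is a single translation of e/2 along
    the horizontal axis from the midpoint."""
    cx, cy, cz = center
    return (cx - e / 2.0, cy, cz), (cx + e / 2.0, cy, cz)
```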
Meanwhile, the principle by which the rendering module 20 of the present invention adjusts the depth of field of the stereoscopic image is as follows.
In general, when the aperture is very small, as in a pinhole camera, every point in three-dimensional space maps one-to-one onto a single point in the image plane. But as the light-gathering aperture grows larger, as it does in the lens of the human eye, three-dimensional points no longer map one-to-one.
FIG. 3 is a conceptual diagram showing the circle of confusion that appears in the image plane for a point B farther away than the focal plane. As shown in FIG. 3, points at the in-focus distance, such as point A, map one-to-one onto the image plane (as A'), whereas points nearer or farther, such as point B, map onto a circle (B') larger than a single point. The set of image pixels onto which a single object point is spread in this way is called the circle of confusion.
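For context, the diameter of the circle of confusion for a thin lens follows the standard optics relation below. The patent itself gives no formula, so this is background rather than the patent's method; all distances are in the same unit, measured from the lens, with focus_dist greater than focal_len.

```python
def coc_diameter(aperture, focal_len, focus_dist, obj_dist):
    """Standard thin-lens circle-of-confusion diameter: zero for points
    on the focal plane, growing as the object moves away from it."""
    return abs(aperture * focal_len * (obj_dist - focus_dist)
               / (obj_dist * (focus_dist - focal_len)))
```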
The rendering module 20 of the present invention adjusts the depth of field over the pixels of circles of confusion. If it adjusted the depth of field over every circle-of-confusion pixel that affects the pixel corresponding to the calculated focal plane of the stereoscopic image, the computation would become excessive, because for every point other than those on the focal plane the circle of confusion grows with the point's relative distance from it. That is, to compute the color of a desired sample point, the rendering module 20 would have to load the colors of a great many surrounding points and compute distance-dependent weights, so the amount of computation would grow exponentially.
Conversely, if the rendering module 20 ignored the circles of confusion around the pixel corresponding to the calculated focal plane of the stereoscopic image, image quality would suffer and severe color bleeding would appear in the resulting image.
Therefore, the rendering module 20 of the present invention automatically adjusts, at set time intervals, the depth of field of the circles of confusion of pixels located within a set number of pixels of the pixel corresponding to the calculated focal plane of the stereoscopic image. Specifically, the rendering module 20 samples fewer pixels for circles of confusion nearer the focal-plane pixel and more pixels for circles of confusion farther from it, thereby controlling the number of circles of confusion over which the depth of field is adjusted.
By this principle, the rendering module 20 can present the image visualized for the user's two eyes as an image appropriately blurred according to the circles of confusion located within the set number of pixels, and can thus reproduce in a three-dimensional image the same effect a person experiences with the naked eye.
FIG. 4 illustrates the sampling scheme for implementing depth of field in the stereoscopic image of the present invention. As shown in FIG. 4, the first circle of confusion, which contains the pixel nearest the focal-plane pixel, selects six sampling pixels (at 60° intervals) for the depth-of-field adjustment. Here the pixel nearest the focal-plane pixel belongs to the circle of confusion preferably three pixels away from the focal plane.
The second circle of confusion, containing the second-nearest pixel (four pixels away from the focal-plane pixel), selects twelve sampling pixels (at 30° intervals) for the depth-of-field adjustment.
The third, fourth, and fifth circles of confusion, spanning from the third-nearest pixel (five pixels from the focal-plane pixel) up to the set number, for example the eighth-nearest pixel, each select twenty-four sampling pixels (at 15° intervals). Thus, if imaginary rays are drawn radially from the focal plane toward the fifth circle of confusion, the sampling pixels of the first through fifth circles of confusion all lie on those common rays.
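The ring-sampling pattern of FIG. 4 can be generated as follows. The radii of the outer two rings are an interpretation of the figure description, so treat the numbers as assumptions; because every angular step divides 60°, samples on different rings line up along common radial rays from the focal-plane pixel.

```python
import math

# (radius in pixels from the focal-plane pixel, number of samples):
# 6 at 60 deg on the first circle of confusion, 12 at 30 deg on the
# second, 24 at 15 deg on the third through fifth.
RINGS = [(3, 6), (4, 12), (5, 24), (6, 24), (7, 24)]

def sampling_offsets():
    """(dx, dy) sampling offsets for each circle of confusion."""
    offsets = []
    for radius, count in RINGS:
        step = 2.0 * math.pi / count
        offsets.append([(radius * math.cos(i * step),
                         radius * math.sin(i * step)) for i in range(count)])
    return offsets
```

In a pixel shader these offsets would index the neighboring texels whose colors are weighted into the blurred result.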
By this principle, the rendering module 20 can adjust the depth of field of the pixels corresponding to the focal plane of the stereoscopic image as effectively as possible while minimizing the image-quality degradation involved in real-time stereoscopic rendering. Sampling too many circle-of-confusion pixels lengthens the rendering time, while sampling too few shortens it but degrades image quality; the numbers of circles of confusion and of sampling pixels given above were chosen as the optimum.
The automatic depth-of-field control method for stereoscopic image visualization of the present invention is described below.
FIG. 5 is a flowchart of the automatic depth-of-field control method for stereoscopic image visualization of the present invention. As shown in FIG. 5, the LED light 12 of the eye-tracking device first irradiates the user's two eyes with infrared light, whose reflection creates reflection points (S100). The camera 14 of the eye-tracking device then photographs the reflection points at set time intervals to locate the user's two eyes (S102), and the controller 16 of the eye-tracking device separates the pupil from the cornea at each located eye position and segments the separated pupil (S104). Next, the controller 16 extracts the pupil center positions using a search window (S106), and the calculator 18 of the eye-tracking device computes the distance between the two eyes from the pupil center positions obtained at the set time intervals (S108). The calculator 18 then computes the user's focal length from the calculated distance (S110) and passes it to the rendering module 20. When computing the user's focal length from the inter-eye distance, the calculator 18 uses the focal lengths measured in advance for each inter-eye distance of the user.
Next, the rendering module 20 identifies the circles of confusion of pixels located within the set number of pixels of the pixel corresponding to the focal plane of the stereoscopic image calculated by the calculator 18 of the eye-tracking device (S112), samples the set number of pixels from the identified circles of confusion, and adjusts the depth of field according to the focal length (S114). Here the rendering module 20 samples fewer pixels for circles of confusion nearer the focal-plane pixel and more pixels for circles of confusion farther from it.
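The S100 to S114 loop can be summarized as one function with its three stages injected as callables; the callable names are illustrative stand-ins, not names from the patent.

```python
def depth_of_field_step(measure_eye_distance, distance_to_focal_band,
                        render_with_dof):
    """One pass of the FIG. 5 pipeline: S100-S108 produce the
    inter-pupil distance, S110 maps it to a focal band, and S112-S114
    render with the depth of field adjusted for that band."""
    distance = measure_eye_distance()
    band = distance_to_focal_band(distance)
    return render_with_dof(band)
```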
Experimental results of the automatic depth-of-field control method for stereoscopic image visualization of the present invention are described below with reference to FIG. 6.
The GPU-based automatic depth-of-field control system for stereoscopic images of the present invention was tested on a 3.0 GHz Intel Pentium PC equipped with an nVIDIA GeForce 7600GS graphics card.
Depth-of-field image quality was compared for different numbers of samples. First, when the depth of field was adjusted using only twelve sampling pixels around the focal-plane pixel, the rendering speed exceeded 100 fps on average, but severe color bleeding appeared, like a border around the mesh, and the shortage of sample points produced a white circle.
When the circle-of-confusion depth of field was adjusted for every point around the focal-plane pixel, the depth of focus was rendered very smoothly. However, considering every circle of confusion for every point increased the per-pixel computation so much that, owing to the limit on the number of pixel-shader instructions, the rendering speed of the rendering module 20 fell below 1 fps.
Finally, when the depth of field was adjusted over the sampling pixels of the circles of confusion within the set number of pixels by the method proposed in the present invention, a smooth image of excellent quality was obtained while an average speed of about 30 fps was maintained.
FIG. 6 is a graph comparing test subjects' experiences of an ordinary stereoscopic image and of a stereoscopic image produced by the present invention. As shown in FIG. 6, to measure how much the side effects of stereoscopic viewing are reduced, a group of ten users experienced both an ordinary stereoscopic display and the proposed system and then completed a questionnaire. Most subjects reported feeling more dizziness without the system of the present invention than with it, and answered that the proposed method therefore gives a greater sense of immersion than ordinary stereoscopic images.
The present invention described above admits various modifications and applications by those of ordinary skill in the art to which it pertains, and the scope of the technical idea of the present invention should be defined by the appended claims.

Claims (8)

  1. An automatic depth-of-field control method for stereoscopic image visualization, the method comprising:
    (a) calculating the focal length at which a user views a stereoscopic image by measuring the distance between the pupils of the user's two eyes at set time intervals;
    (b) calculating a focal plane from the calculated focal length; and
    (c) automatically adjusting the depth of field of the sampling pixels contained in the circles of confusion of pixels located within a set number of pixels of the pixel of the stereoscopic image corresponding to the calculated focal plane.
  2. The method of claim 1, wherein step (a) comprises:
    (a-1) identifying the center positions of the pupils of the user's two eyes at set time intervals; and
    (a-2) calculating the distance between the two eyes from the pupil center positions identified at the set time intervals, and calculating the user's focal length from the calculated distance.
  3. The method of claim 2, wherein step (a-1) comprises:
    (1) irradiating the two eyes with electromagnetic waves and identifying the position of each eye from the resulting reflection points;
    (2) separating the pupil from the cornea at each identified eye position;
    (3) segmenting the separated pupil; and
    (4) extracting the pupil center position using a search window.
  4. The method of claim 2, further comprising, before step (a), measuring the focal length for each inter-eye distance of the user, for use when the user's focal length is calculated from the distance between the two eyes in step (a-2).
  5. The method of claim 2, wherein the distance between the two eyes in step (a-2) is calculated through the following equation:
    <Equation>
    Pcx = Mx / n,    Pcy = My / n
    where Mx is the sum of the x coordinates of the pupil region, My is the sum of the y coordinates of the pupil region, Pcx and Pcy are the center coordinates of the pupil, and n is the number of pixels in the search window that have pupil-region values.
  6. The method of claim 1, wherein, in step (c), the depth of field is adjusted over fewer sampling pixels for circles of confusion nearer the pixel corresponding to the focal plane and over more sampling pixels for circles of confusion farther from the pixel corresponding to the focal plane.
  7. The method of claim 1, wherein, in step (c), the circle of confusion containing the pixel nearest the pixel corresponding to the focal plane samples six pixels for the depth-of-field adjustment, the circle of confusion containing the second-nearest pixel samples twelve pixels, and the circles of confusion containing the third-nearest through the set-number-th nearest pixels each sample twenty-four pixels.
  8. The method of claim 1, wherein the focal length is divided into three levels according to distance range.
PCT/KR2009/000876 2009-02-19 2009-02-24 Method for automatically adjusting depth of field to visualize stereoscopic image WO2010095770A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020090013835A KR100956453B1 (en) 2009-02-19 2009-02-19 Automatic depth-of-field control method for stereoscopic display
KR10-2009-0013835 2009-02-19

Publications (1)

Publication Number Publication Date
WO2010095770A1 true WO2010095770A1 (en) 2010-08-26

Family

ID=42281477

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2009/000876 WO2010095770A1 (en) 2009-02-19 2009-02-24 Method for automatically adjusting depth of field to visualize stereoscopic image

Country Status (2)

Country Link
KR (1) KR100956453B1 (en)
WO (1) WO2010095770A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101353966B1 (en) * 2012-01-25 2014-01-27 전자부품연구원 Method for reconstructing all focused hologram and apparatus using the same
CN105657394B (en) * 2014-11-14 2018-08-24 东莞宇龙通信科技有限公司 Image pickup method, filming apparatus based on dual camera and mobile terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002197486A (en) * 2000-12-22 2002-07-12 Square Co Ltd Video game machine, its control method, video game program and computer-readable recording medium with the program recorded thereon
KR20040018859A (en) * 2002-08-27 2004-03-04 한국전자통신연구원 Depth of field adjustment apparatus and method of stereo image for reduction of visual fatigue
KR20060134309A (en) * 2005-06-22 2006-12-28 삼성전자주식회사 Method and apparatus for adjusting quality of 3d stereo image using communication channel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040018858A (en) * 2002-08-27 2004-03-04 한국전자통신연구원 Depth of field adjustment apparatus and method of stereo image for reduction of visual fatigue


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102256149A (en) * 2011-07-13 2011-11-23 深圳创维-Rgb电子有限公司 Three-dimensional (3D) display effect regulation method, device and television
WO2016058288A1 (en) * 2014-10-17 2016-04-21 中兴通讯股份有限公司 Depth-of-field rendering method and apparatus
CN113516709A (en) * 2021-07-09 2021-10-19 连云港远洋流体装卸设备有限公司 Flange positioning method based on binocular vision
CN113516709B (en) * 2021-07-09 2023-12-29 连云港远洋流体装卸设备有限公司 Flange positioning method based on binocular vision

Also Published As

Publication number Publication date
KR100956453B1 (en) 2010-05-06

Similar Documents

Publication Publication Date Title
WO2010095770A1 (en) Method for automatically adjusting depth of field to visualize stereoscopic image
WO2016068560A1 (en) Translucent mark, method for synthesis and detection of translucent mark, transparent mark, and method for synthesis and detection of transparent mark
WO2016115872A1 (en) Binocular ar head-mounted display device and information display method thereof
WO2017204571A1 (en) Camera sensing apparatus for obtaining three-dimensional information of object, and virtual golf simulation apparatus using same
WO2012023639A1 (en) Method for counting objects and apparatus using a plurality of sensors
CN108701363B (en) Method, apparatus and system for identifying and tracking objects using multiple cameras
WO2010076988A2 (en) Image data obtaining method and apparatus therefor
CN106600650A (en) Binocular visual sense depth information obtaining method based on deep learning
US9696551B2 (en) Information processing method and electronic device
WO2012060564A1 (en) 3d camera
CN102122075A (en) Estimation system and method based on inter-image mutual crosstalk in projection stereoscope visible area
WO2012046964A9 (en) Stereoscopic image display device for displaying a stereoscopic image by tracing a focused position
CN106454290B (en) A kind of dual camera image processing system and method
JP2013171058A (en) Stereoscopic image processing device, stereoscopic image pickup device, stereoscopic image display device
WO2012002601A1 (en) Method and apparatus for recognizing a person using 3d image information
WO2014003509A1 (en) Apparatus and method for displaying augmented reality
CN110088662B (en) Imaging system and method for generating background image and focusing image
WO2011071313A2 (en) Apparatus and method for extracting a texture image and a depth image
CN106131448B (en) The three-dimensional stereoscopic visual system of brightness of image can be automatically adjusted
WO2021132824A1 (en) Method for displaying three-dimensional image in integrated imaging microscope system, and integrated imaging microscope system for implementing same
WO2012165717A1 (en) Apparatus for generating stereo 3d image using asymmetric, two-camera module and method for same
KR20220045862A (en) Method and apparatus of measuring dynamic crosstalk
WO2010087587A2 (en) Image data obtaining method and apparatus therefor
JP2014135714A (en) Stereoscopic image signal processing device and stereoscopic image capture device
WO2020111389A1 (en) Multi-layered mla structure for correcting refractive index abnormality of user, display panel, and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09840446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09840446

Country of ref document: EP

Kind code of ref document: A1