KR20140055987A - Method of display control for image - Google Patents

Method of display control for image

Info

Publication number
KR20140055987A
Authority
KR
South Korea
Prior art keywords
image
eye
viewer
viewpoint
screen
Prior art date
Application number
KR1020130119628A
Other languages
Korean (ko)
Inventor
김대영
Original Assignee
김대영
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 김대영
Publication of KR20140055987A publication Critical patent/KR20140055987A/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/002: Specific input/output arrangements not covered by G06F 3/01 - G06F 3/16
    • G06F 3/005: Input arrangements through a video camera
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304: Detection arrangements using opto-electronic means
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on GUI for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on GUI for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0485: Scrolling or panning
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A camera installed on the screen photographs the viewer's eyes, and eye movement is detected. Moving the eyes forward or backward enlarges or reduces the image; movement of the gaze point on the screen is sensed and the watched portion of the image is moved to the center of the screen; and a function key displayed on the image can be watched and selected to perform a specific function. The method also controls an image display device by controlling the viewpoint and gaze direction used to project a graphic three-dimensional object, so that the projected image is reproduced and viewed as the viewer intends.

Description

TECHNICAL FIELD: The present invention relates to a method of controlling the display of an image.

The present invention relates to a method of controlling an image and, more particularly, to a method of controlling the image on a screen based on the viewer's viewpoint (eye position) and gaze point (the point on the screen the viewer watches), obtained by photographing the viewer's eyes and tracking their movement.

Current technology displays an image on a flat screen and controls it on the plane by zooming and by moving it up/down and left/right. Small images arranged in a grid on the screen, or lines of text, are moved up/down and left/right so that the item of interest is brought to the center and enlarged; lines of text are scrolled so that the line of interest sits at the center of the screen, and specific information of interest is selected and viewed. Alternatively, a selection key is displayed as an image, the selection key is selected by some method, and the selected key performs a predetermined function.

Generally, the image is controlled by moving a cursor on the screen with a mouse, by the up/down/left/right keys of a keyboard, or by touching the screen. A keyboard, remote control, or similar device selects a key to control the image on the screen and make the device perform a certain function.

In computer graphics, a solid object is modeled as cells of a graphic object connected by vertices, motion can be implemented by animation or simulation, and the graphic solid is projected from a given viewpoint in a given gaze direction to display an image. By changing the projection viewpoint and gaze direction, the projected image is viewed from different angles, and the projected image is therefore controlled by inputting the viewpoint and gaze direction.

A small image reproducing apparatus such as a mobile phone, viewed at close range, controls the image by touching the screen with a finger; an image reproducing apparatus viewed from a distance, such as a TV, controls the image and the device through an auxiliary control device such as a remote controller; and a computer monitor controls the image through key input from a keyboard and by controlling the on-screen cursor with a mouse.

To view an image displayed on a flat screen, or an image produced by projecting a graphic solid, the viewer controls the image by changing it with a keyboard, a mouse, or a touch screen. Input by keyboard, mouse, or touch screen does not convey the viewer's intention directly: the viewer must be familiar with the input method and transmits his or her intention indirectly through it. In addition, it is difficult to execute a function if an input tool such as a keyboard, mouse, or remote controller is not at hand.

An object of the present invention is to provide a photographing device on or near the screen that displays an image, to photograph the viewer's eyes periodically with it, to detect the position of the viewer's eyes, to detect eye movement and thereby the line of sight, and to determine the gaze point where the line of sight intersects the screen (or where the lines of sight of the two eyes intersect). Forward and backward movement of the viewpoint, i.e. the eye position, controls enlargement/reduction of the image on the screen, and the portion of the image at the gaze point is moved to the center of the screen. Alternatively, the viewer watches a function key on the screen and selects it to perform a predetermined function. Further, in a projection image reproducing apparatus that projects a three-dimensional object to display an image, the projected image is reproduced by performing a coordinate (view) transformation so that the gaze direction is shifted toward the gaze point. The viewer thus moves the eyes according to intention, and the image on the screen is controlled in accordance with that eye movement.

By photographing the viewer's eyes and displaying the image according to the viewer's request, an image suitable for viewing can be displayed; functions can be selected with the eyes directly, rather than indirectly with a tool such as a keyboard or mouse; and a graphic solid can be projected, displayed, and viewed as a reproduced image.

Figure 1 is a flowchart showing the steps of the pattern recognition process for detecting the position of an eye according to the present invention.
Figure 2 is a flowchart of the detailed processing steps of the eye-tracking step of Figure 1.
Figure 3 is a flowchart of the detailed processing steps of the eye movement tracking step of Figure 1.

In the present invention, a photographing device is provided on the plane of the screen displaying the image; it periodically photographs the viewer, and the position of the eyes is detected in the captured image. The captured image is processed as one-dimensional patterns to improve the speed of eye detection.

A standard mask image containing the eye, photographed at a certain distance, is prepared. Two orthogonal lines of one-dimensional image information are extracted through the eye of the mask image, and this one-dimensional information is enlarged or reduced: if the distance between the viewer and the photographing device is larger than the reference distance, the mask image is reduced, and if it is smaller, the mask image is enlarged. One-dimensional image information in the two directions is also prepared over a range of rotation angles, to account for the head tilting left and right about the axis through the eyes. In addition, movement of the pupil can be confirmed by providing, as mask images, a plurality of images in which the pupil deviates from the center of the eye.

Because the image is photographed and processed quickly, the position of the eye in the next frame does not change much from its position in the previously processed image: the eye's position in the plane, the image magnification, and the rotation angle all change little between frames. To detect the position of the eye for the first time (hereinafter "eye tracking"), the left and right eyes are detected while varying the assumed distance between the viewer and the photographing device (i.e. varying the magnification of the mask image). After the eye position has been found, it can be detected accurately by searching only a small region around the previously detected position (hereinafter "movement tracking" of the eye).

As shown in the flowchart of the pattern recognition processing for detecting the eye position in Fig. 1, when eye tracking starts (10), the mask image information of the standard eye is input (20). This consists of one-dimensional image information in two perpendicular directions, taken along lines through the eye in an image photographed at a reference distance Zo, together with linear mask information varying at predetermined rotation intervals around the eye. The left and right eyes are detected with separate mask information for each eye, or the left- and right-eye masks can be obtained by mirroring a single mask image. After the mask image information is input (20), the viewer's image information is input (30). The position of the eye is detected (40) by computing the correlation between the input image information and the mask image information, and it is judged whether the eye has been detected (50). If not, the process returns to input the next image. If the eyes are detected (Yes), the next viewer image is input (60), and the eye position is updated by movement tracking (70). It is again judged whether the eye is detected (80); if not, the process returns to the eye-tracking steps (30), (40), (50), and if the eye is detected, the positions L(x, y, z) and R(x, y, z) of the left and right eyes are output (90). Mask information in which the pupil deviates from the center of the eye allows the pupil position to be calculated accurately. Here z is the distance between the viewer and the screen (camera), and x and y are the two-dimensional coordinates in the plane of the captured image. The eyes are photographed at predetermined time intervals, and the movement-tracking steps (60), (70), (80), (90) are repeated to detect the movement of the eyes.
In the above, the one-dimensional image information and the mask information are filtered to block low-frequency components before the correlation is calculated. With this filtering, the correlation peak is more clearly defined and the eye can be detected more reliably.
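The high-pass filtering and one-dimensional correlation described above can be sketched as follows. This is a minimal illustration with NumPy; the eye-like mask profile, the scan line, and the moving-average filter are invented placeholders, not the patent's actual data:

```python
import numpy as np

def highpass(signal, window=5):
    """Block low-frequency components by subtracting a moving average."""
    kernel = np.ones(window) / window
    low = np.convolve(signal, kernel, mode="same")
    return signal - low

def best_match(line, mask):
    """Return the offset where the high-pass-filtered correlation peaks."""
    line_f = highpass(line)
    mask_f = highpass(mask)
    corr = np.correlate(line_f, mask_f, mode="valid")
    return int(np.argmax(corr))

# Synthetic 1-D scan line containing an "eye" profile at position 40.
mask = np.array([0.0, 1.0, 3.0, 1.0, 0.0])  # invented eye-like profile
line = np.full(100, 2.0)                    # slowly varying background
line[40:45] += mask                         # embed the pattern
print(best_match(line, mask))               # 40
```

The constant background is removed by the high-pass step, so the correlation peak stands out sharply at the embedded pattern, which is the effect the paragraph above describes.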

FIG. 2 details the eye-tracking step (40) of FIG. 1. First, the distance z between the viewer and the screen (photographing device) is set to the minimum, z = Zmin (41), and one eye (the left eye) is located (42), (43). The horizontal direction of the captured image is taken as the x-axis and the vertical direction as the y-axis. In the x-scan/y-detection step (42), the correlation with mask information parallel to the y-axis is computed while x is increased at regular intervals from the left side of the image to the right, and candidate points L(xn, yn, z) with large correlation are recorded, where n indexes a finite array and yn is the y position at x = xn where the correlation with the y-direction mask information is maximal. Next, in the x-detection step (43), for each candidate n = 0, 1, 2, ..., the correlation with one-dimensional mask information parallel to the x-axis is computed at the predicted y value, to obtain the L(x, y, z) with maximum correlation (44). In the R-detection step (45), the right eye is detected at a certain distance from L(x, y, z), giving R(x, y, z). It is then judged whether the left and right eyes have been detected (46); if so (Yes), the process proceeds to movement tracking. If not (No), z is increased by a predetermined interval and steps (42) through (46) are repeated. If z reaches the maximum viewer distance Zmax without the eyes being detected, it is concluded that the eyes cannot currently be tracked, and the eye is tracked again by repeating steps (30), (40), and (50) of FIG. 1.
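The coarse search over the distance z can be sketched as scaling the mask for each candidate distance between Zmin and Zmax and keeping the scale with the strongest normalized correlation, which also yields the distance estimate. This is illustrative only; the profile, the linear rescaling, and all numbers are invented, and normalized cross-correlation stands in for whatever correlation measure the patent intends:

```python
import numpy as np

def ncc(line, mask):
    """Normalized cross-correlation of a 1-D mask against every window."""
    m = mask - mask.mean()
    mn = np.linalg.norm(m)
    if mn == 0:
        return np.zeros(len(line) - len(mask) + 1)
    m = m / mn
    n = len(mask)
    out = np.empty(len(line) - n + 1)
    for k in range(len(out)):
        w = line[k:k + n] - line[k:k + n].mean()
        norm = np.linalg.norm(w)
        out[k] = (w @ m) / norm if norm > 0 else 0.0
    return out

def scale_mask(mask, factor):
    """Resample the mask: magnification is fc/z, so mask size ~ 1/z."""
    n = max(2, int(round(len(mask) * factor)))
    x_old = np.linspace(0.0, 1.0, len(mask))
    return np.interp(np.linspace(0.0, 1.0, n), x_old, mask)

def estimate_z(line, mask, z_ref, z_min, z_max, steps=9):
    """Scan candidate distances z; return the (z, offset) of the best match."""
    best_z, best_k, best_c = None, None, -np.inf
    for z in np.linspace(z_min, z_max, steps):
        corr = ncc(line, scale_mask(mask, z_ref / z))
        k = int(np.argmax(corr))
        if corr[k] > best_c:
            best_z, best_k, best_c = float(z), k, float(corr[k])
    return best_z, best_k

mask = np.array([0.0, 1.0, 3.0, 1.0, 0.0])  # invented eye profile at z_ref
line = np.zeros(120)
line[50:60] = scale_mask(mask, 2.0)         # viewer at half the reference distance
z, offset = estimate_z(line, mask, z_ref=100.0, z_min=25.0, z_max=225.0)
print(z, offset)
```

The candidate distance whose rescaled mask matches best (here z = 50, half of z_ref, with the eye at offset 50) is reported, mirroring the loop over (42) through (46) above.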

FIG. 3 details the eye movement tracking step (70) of FIG. 1. Given the previous positions L(x, y, z) = (XL, YL, ZL) and R(x, y, z) = (XR, YR, ZR), the rotation of the line joining the two eyes in the image plane is theta = (YR - YL) / (XR - XL). In the optimum rotation calculation step for one eye (the left eye) (72), the correlation with mask information at nearby rotation angles is obtained and the rotation with maximum correlation is found; in the optimum z calculation step for the same eye (73), the correlation is computed for distances just before and after the current z, and the optimal position L(x, y, z) is found. Similarly, the optimal position R(x, y, z) of the right eye is found with respect to rotation (74) and distance (75). In other words, by enlarging/reducing the mask according to the distance z and finding the z that matches best, the distance to the viewer can be estimated and the position of the eye detected from the captured image. Since only correlations of one-dimensional image information in two perpendicular directions are calculated, movement tracking is fast and the eye position can be tracked repeatedly.

If the focal length of the camera is fc and the distance between the viewer's eye and the camera (or screen) is z, the magnification of the camera image is fc / z, since z is sufficiently larger than fc. If the eye moves by di in the image captured by the photographing device, the movement Ci of the viewer's eye on the view plane (the plane perpendicular to the line of sight) is di * z / fc. Here di and Ci are vector quantities in the plane perpendicular to the line of sight, and * (asterisk) denotes multiplication. The suffix i distinguishes the left (L) and right (R) eyes.
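As a worked instance of this relation (the focal length, distance, and measured movement are invented numbers for illustration):

```python
# Pinhole-camera relation: an image-plane movement di corresponds to a
# view-plane movement Ci = di * z / fc, valid when z >> fc.
fc = 0.004   # assumed focal length: 4 mm
z = 2.0      # assumed viewer-to-camera distance: 2 m
di = 0.0001  # eye moved 0.1 mm in the captured image

Ci = di * z / fc
print(Ci)    # ~0.05: the eye moved about 5 cm in the view plane
```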

When the viewer's eyes move forward or backward, the distance between the viewer and the screen (photographing device) changes and is detected as a change in z. If the viewer leans toward the screen, the viewer is trying to see the image more closely, so as the distance between the eye and the screen decreases, the image is enlarged. Conversely, if the viewer moves away from the screen, the viewer wants a wider view, so as the distance increases and the camera's image magnification decreases, the image on the screen is reduced. Once the image reaches a magnification suitable for viewing, the viewer holds the eye position and the size of the image is maintained. In other words, the viewer can zoom in or out without moving much, viewing the image in detail or with a broader view as intended.
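This zoom behaviour can be sketched as a simple control rule. The reference distance and the inverse-proportional rule are invented parameters for illustration, not values from the patent:

```python
def zoom_factor(z, z_ref):
    """Scale the displayed image up as the viewer leans in (z < z_ref)
    and down as the viewer moves away (z > z_ref)."""
    return z_ref / z

# Viewer starts 2.0 m away (reference), leans in to 1.0 m: image doubles.
print(zoom_factor(1.0, 2.0))  # 2.0
# Viewer moves back to 4.0 m: image shown at half size.
print(zoom_factor(4.0, 2.0))  # 0.5
```

Holding the eye at z = z_ref leaves the factor at 1.0, matching the "fix the viewpoint to keep the size" behaviour above.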

To view a planar image, the viewer moves the gaze point either by rotating the head about the neck or by moving the pupils within the eyes. If the gaze is changed by rotating the head about the neck (or about the depth of the eye's retina) with turning radius s, the movement Pi of the gaze point on the screen is di * (z / fc) * ((z + s) / s). Since z is much greater than s, (z + s) can be replaced by z. Initially the viewer gazes at the center of the screen: the center of the screen is the gaze point, and the eye position at a certain distance from the screen is the reference viewpoint. If a viewer watching a planar image moves the gaze point from the center of the screen to the right, the image at the gaze point is moved toward the center; likewise, when the gaze point moves up or down, the image moves in the opposite direction so that the watched part of the image comes to the center. When the watched part of the image is at the center, the image no longer moves; the viewer can then move forward or backward to enlarge or reduce it, viewing the image as intended.
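The gaze-point displacement Pi = di * (z / fc) * (z / s) and the opposite-direction image shift can be sketched as follows; all numerical values (focal length, head turning radius, distances) are assumptions for illustration:

```python
fc = 0.004   # assumed camera focal length (m)
s = 0.10     # assumed turning radius of the head about the neck (m)
z = 2.0      # viewer-to-screen distance (m)

def gaze_shift(di):
    """On-screen gaze-point movement for an image-plane eye movement di."""
    return di * (z / fc) * (z / s)   # (z + s) ~= z since z >> s

def recenter(image_offset, di):
    """Shift the image opposite the gaze movement so the watched part
    drifts toward the screen center."""
    return image_offset - gaze_shift(di)

pi = gaze_shift(1e-5)        # eye moved 10 um in the captured image
print(pi)                    # ~0.1: gaze point moved about 10 cm on screen
print(recenter(0.0, 1e-5))   # image shifts ~-0.1, the opposite direction
```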

Alternatively, a portion of an image can be moved to a predetermined position on the screen and enlarged or reduced. If there is a display window larger than the screen, in which small images are arranged vertically and horizontally or character strings are displayed, the display window can be moved left/right or up/down and displayed on the screen. That is, when the gaze point is at the top of the screen, the image moves downward; when the gaze point is at the left side, the image moves to the right; when it is at the right side, the image moves to the left. Similarly, if multiple lines of text are arranged vertically, the text can be scrolled by placing the gaze point at the top or bottom of the screen. After moving the image by moving the gaze point, the viewer can move closer to the screen to enlarge a small image and examine it, then move the gaze point to move the image again.

Alternatively, the viewer's gaze point may rest on the image of a selection key. A selection key is a function key representing a specific function; when selected in a certain way, it performs that function. For example, a selection key at the periphery of the screen moves toward the center as the gaze point follows it, and when the selection-key image remains at the center it is selected. In this manner a specific image displayed on the screen can be controlled and selected, and if that image is a function key performing a specific function, the function key can be selected to perform it. Alternatively, a specific function is associated with a region outside the screen; when the gaze point looks in that direction, an image symbolizing the function is displayed on the screen, where it can be watched, selected, and performed. Or a pointer is displayed on the screen when the viewer looks at a specific position; a plurality of function keys selectable by the pointer are displayed, the pointer is moved by gaze to a certain function key, and that function key is selected to perform its function.
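One way the "keep the key at the gaze point to select it" behaviour might be implemented is a dwell-time check. This is a sketch under assumptions: the dwell threshold, selection radius, and coordinates are all invented parameters, not values from the patent:

```python
def dwell_select(gaze_points, key_pos, radius=0.05, dwell_frames=30):
    """Select the key if the gaze stays within `radius` of it for
    `dwell_frames` consecutive samples."""
    run = 0
    for gx, gy in gaze_points:
        kx, ky = key_pos
        if (gx - kx) ** 2 + (gy - ky) ** 2 <= radius ** 2:
            run += 1
            if run >= dwell_frames:
                return True
        else:
            run = 0   # gaze left the key: restart the dwell count
    return False

# 40 samples fixated on the key at (0.5, 0.5): selected.
print(dwell_select([(0.5, 0.5)] * 40, (0.5, 0.5)))                       # True
# Gaze wanders away after 10 samples: not selected.
print(dwell_select([(0.5, 0.5)] * 10 + [(0.9, 0.9)] * 30, (0.5, 0.5)))   # False
```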

In a video reproducing apparatus held in the hand, such as a mobile phone, or one used close to the screen, such as a computer monitor, the magnification of the captured image of the viewer is large, so the motion of the pupil can be read and gaze movement detected from it. In a video reproducing apparatus viewed from a distance, such as a TV, however, the distance between the viewer and the screen (photographing device) is long, the image magnification is small, and pupil movement cannot be detected. On the other hand, a person watching video generally moves the gaze by rotating the face about the neck rather than by moving the pupils from the centers of the eyes; when the head rotates to gaze at a point, the pupil settles at the center of the eye and stabilizes. That is, the gaze point can be determined from the final position of the eyes. For a movement Ci of the eye in the view plane, the movement Pi of the gaze point is Ci * (z / s), i.e. di * (z / fc) * (z / s). When a viewer gazing at the center of the screen shifts to gaze at a corner, the pupils of the two eyes rotate first and the gaze moves to the corner; the head then turns, the eyes move toward the corner, and with the gaze fixed on the corner the pupils settle at the centers of the eyes. A viewer watching TV normally gazes at the center of the screen, so the eye position is fixed and the eyes appear in a fixed, stable position in the image captured by the camera.
When the viewer shifts from the center of the screen to watch a corner, the movement of the eyes can be detected in the captured image. When the gaze point moves to a certain point, the control unit of the digital TV, which includes a computing device, recognizes the eye pattern in the captured image, calculates the gaze point, and thus knows the movement of the gaze point. The control unit can display the function key of a specific image at a certain point on the screen and, if the function key is selected, perform the predetermined function: the gaze point is moved on the screen to display the function-key image, the function-key image is enlarged/reduced/moved, and a function is selected and performed when it reaches a predetermined position.

However, when viewing a stereoscopic object from the front, the gaze point on the object does not change much when the viewer wants to see its side; instead the viewer's body or face moves, i.e. the eyes move around the object. The eyes move to see the other side of the object, but the pupils normally do not move (or move in the opposite direction), so the pupils remain stable and the gaze stays fixed on the object. When a graphic three-dimensional object is projected to display an image, the image is projected from the viewpoint in the gaze direction, and the viewpoint and gaze point must be expressed in three-dimensional world coordinates. The currently displayed image is projected using the previous viewpoint TCoi. Given the current eye movement Ci, the next viewpoint is TCi = TCoi + Ci * Mr, where Mr is greater than 0 and at most 1: making Mr smaller gives more natural, damped control, while making Mr closer to 1 makes the display react quickly. The left- and right-eye movements are calculated at regular time intervals. The upward direction of the projection is the direction in the x-y plane perpendicular to the line connecting the two eyes, and that line is kept parallel to the screen. The stereoscopic object is projected using the current viewpoint TCi and the upward direction, and the image is reproduced and viewed. In viewing a stereoscopic object, movement of the gaze point and movement of the viewpoint generally do not occur at the same time: first the gaze point is moved to center the object, then the viewpoint is moved to see its side. Movement of the gaze point is achieved by rotating the head about the neck or by moving the pupils; if the current gaze-point movement is Pi, the next gaze point is TPi = TPoi + Pi * Mr.
Since rotating the head about the neck moves the gaze point, the gaze-point movement Pi is the eye movement Ci multiplied by (z / s). Movement of the viewpoint, in contrast, moves the body, carrying the eyes around the object. That is, whether the viewer's body moves or only the eyes move distinguishes viewpoint movement from gaze-point movement. By judging from the captured image whether the viewer's body or only the eyes have moved, the detected eye movement can be treated as movement of the gaze point or of the viewpoint, and the projected image can be viewed by controlling the viewpoint and gaze point of the stereoscopic projection accordingly.
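The update TCi = TCoi + Ci * Mr is a simple first-order smoothing of the projection viewpoint; it might be sketched as follows, with the coordinate vectors and the value of Mr invented for illustration:

```python
import numpy as np

def update_viewpoint(tc_prev, ci, mr):
    """Next projection viewpoint: previous viewpoint plus the measured
    eye movement scaled by the responsiveness factor 0 < mr <= 1."""
    assert 0.0 < mr <= 1.0
    return tc_prev + mr * ci

tc = np.array([0.0, 0.0, 2.0])   # previous viewpoint TCoi in world coords
ci = np.array([0.10, 0.0, 0.0])  # measured eye movement: 10 cm to the right
# mr = 1.0 tracks the eye immediately; mr = 0.2 damps the motion.
print(update_viewpoint(tc, ci, 1.0))
print(update_viewpoint(tc, ci, 0.2))
```

The same rule applies to the gaze point, TPi = TPoi + Pi * Mr: a small Mr smooths jittery eye measurements at the cost of responsiveness, the trade-off the paragraph above describes.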

A stereoscopic image can be reproduced by projecting the object separately for the viewpoint of each eye: a stereoscopic display reproduces two images, one seen by the left eye and one by the right. When a single, non-stereoscopic image is viewed, the movements of the viewer's two eyes and pupils are detected, and the projection viewpoint and gaze point are controlled so that the image is projected with the viewpoint at the midpoint between the two eyes, at the left eye, or at the right eye, and reproduced.

As described above, the calculation of the viewpoint and gaze point from the detected eye movement can be varied according to the characteristics of the video reproducing apparatus or of the displayed video, and changes of viewpoint and gaze point can be applied appropriately to control the image of the device.

Description of reference numerals: none.

Claims (11)

A method of predicting a viewer's viewpoint and gaze point, comprising: photographing the viewer's eyes with a photographing device; preparing, from a standard mask image containing an eye, one-dimensional image information along two directions perpendicular to each other through the center of the eye, arranged over a range of rotation angles; calculating the correlation between the one-dimensional mask information and one-dimensional information of the photographed image; an 'eye-tracking' step of locating the eye in the entire captured image and a 'movement-tracking' step of tracking the eye in a region around the previously detected position; estimating the distance between the eye and the photographing device from the enlargement/reduction magnification of the mask; and predicting the viewpoint from the position of the eye and/or the gaze point from the line of sight of the eye.
The method of claim 1, wherein the movement of the viewer's pupil is detected using image information of a plurality of standard mask images in which the pupil deviates from the center of the eye.
The method according to claim 1, wherein the standard mask image is replaced with a new mask image extracted from the captured image of the viewer.
The method of claim 1, wherein the one-dimensional mask image information and the one-dimensional image information of the photographed image are passed through a one-dimensional filter that blocks low-frequency components before the correlation is calculated.
An image display device comprising a photographing device, wherein the device photographs the eyes of the viewer; arranges a standard mask image containing the eye along two directions perpendicular to each other through the center of the eye, and in the rotational direction; calculates the correlation between the one-dimensional information of the mask and the one-dimensional information of the photographed image; performs an 'eye-tracking' step of locating the eye within the entire captured image and a 'movement-tracking' step of following its motion; predicts the distance between the eye and the photographing device from the enlargement/reduction magnification of the mask; and predicts the viewpoint from the eye position and/or the line of sight of the eye, thereby controlling the displayed image.
The image display device of claim 5, wherein the image is enlarged when the viewpoint of the viewer moves forward and reduced when the viewpoint moves backward, and the image is moved so that the portion of the image at which the viewer's watch point is located is positioned at a predetermined position on the screen.
The image display device of claim 5, wherein the image moves to the left when the viewer's watch point is located to the right of the center of the image on the screen, moves to the right when the watch point is to the left of the center, and moves correspondingly in the vertical direction, so that the image is controlled by the watch point.
The image display device of claim 5, wherein the viewer operates a function key on the screen with the watch point, and selecting the function key with the watch point causes the function of that key to be performed.
The image display device of claim 5, wherein the watch point of the viewer generates a pointer for designating and selecting a selection key, and the pointer is moved onto the selection key to select it and perform the function of the selected selection key.
The image display device of claim 5, wherein a projected image of a graphic three-dimensional object is reproduced and displayed by controlling the projection viewpoint and watch point according to the viewpoint and watch point of the viewer.
The image display device of claim 10, wherein two projection images are reproduced and displayed stereoscopically by controlling the projection viewpoints and watch point of the graphic three-dimensional object with two viewpoints and one watch point.
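The mask-correlation tracking recited in the claims can be illustrated with a short NumPy sketch: the mask and a search row are reduced to one-dimensional profiles, high-pass filtered to suppress low-frequency illumination changes, and the best-matching offset is found by normalized correlation. The function names and the simple differencing filter are assumptions for illustration only, not the filter specified by the patent.

```python
import numpy as np

def highpass(profile):
    # A first difference acts as a crude one-dimensional high-pass
    # filter, suppressing slow illumination gradients (low frequencies).
    return np.diff(profile.astype(float))

def track_eye_1d(image_row, mask_row):
    """Find the horizontal offset of the eye mask within an image row
    by maximizing the normalized correlation of the two 1-D profiles."""
    img = highpass(image_row)
    msk = highpass(mask_row)
    msk = (msk - msk.mean()) / (msk.std() + 1e-9)
    best_offset, best_score = 0, -np.inf
    for off in range(len(img) - len(msk) + 1):
        win = img[off:off + len(msk)]
        win = (win - win.mean()) / (win.std() + 1e-9)
        score = float(np.dot(win, msk)) / len(msk)
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset
```

In the full method this one-dimensional search would be repeated along the two perpendicular directions through the eye center and in the rotational direction, with the mask's enlargement/reduction magnification giving the distance estimate.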
KR1020130119628A 2012-10-31 2013-10-08 Method of display control for image KR20140055987A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20120121686 2012-10-31
KR1020120121686 2012-10-31

Publications (1)

Publication Number Publication Date
KR20140055987A true KR20140055987A (en) 2014-05-09

Family

ID=50887504

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130119628A KR20140055987A (en) 2012-10-31 2013-10-08 Method of display control for image

Country Status (1)

Country Link
KR (1) KR20140055987A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108962182A (en) * 2018-06-15 2018-12-07 广东康云多维视觉智能科技有限公司 3-D image display device and its implementation based on eyeball tracking
US10884488B2 (en) 2014-11-24 2021-01-05 Samsung Electronics Co., Ltd Electronic device and method for controlling display

Similar Documents

Publication Publication Date Title
US20220084279A1 (en) Methods for manipulating objects in an environment
US10674142B2 (en) Optimized object scanning using sensor fusion
Hirzle et al. A design space for gaze interaction on head-mounted displays
KR101815020B1 (en) Apparatus and Method for Controlling Interface
EP2278823A2 (en) Stereo image interaction system
US10203837B2 (en) Multi-depth-interval refocusing method and apparatus and electronic device
KR20170031733A (en) Technologies for adjusting a perspective of a captured image for display
KR20130108643A (en) Systems and methods for a gaze and gesture interface
US11720171B2 (en) Methods for navigating user interfaces
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
KR20120068253A (en) Method and apparatus for providing response of user interface
TW201427388A (en) Image interaction system, detecting method for detecting finger position, stereo display system and control method of stereo display
KR20150040580A (en) virtual multi-touch interaction apparatus and method
CN114647317A (en) Remote touch detection enabled by a peripheral device
JP2012238293A (en) Input device
JP5341126B2 (en) Detection area expansion device, display device, detection area expansion method, program, and computer-readable recording medium
US11443719B2 (en) Information processing apparatus and information processing method
JPWO2020080107A1 (en) Information processing equipment, information processing methods, and programs
CN113504830A (en) Display method and device for head-mounted display equipment
KR20140055987A (en) Method of display control for image
US20230092874A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
CN111651043B (en) Augmented reality system supporting customized multi-channel interaction
TW201925989A (en) Interactive system
US20240103680A1 (en) Devices, Methods, and Graphical User Interfaces For Interacting with Three-Dimensional Environments
Jung et al. Interactive auto-stereoscopic display with efficient and flexible interleaving

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E601 Decision to refuse application