WO2015062251A1 - Display device and control method thereof, and gesture recognition method - Google Patents

Display device and control method thereof, and gesture recognition method

Info

Publication number
WO2015062251A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
user
unit
eye
control screen
Prior art date
Application number
PCT/CN2014/078074
Other languages
English (en)
French (fr)
Inventor
冷长林 (Leng Changlin)
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to US14/426,012 priority Critical patent/US20160041616A1/en
Publication of WO2015062251A1 publication Critical patent/WO2015062251A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Definitions

  • The invention belongs to the technical field of gesture recognition, and particularly relates to a display device, a control method thereof, and a gesture recognition method.
  • With the development of technology, it has become possible to control display devices (televisions, monitors, etc.) by gesture. A display device having a gesture recognition function includes a display unit for performing display and an image collection unit (camera, webcam, etc.) for capturing gestures; by analyzing the collected images, it can determine the operation the user wants to perform.
  • In current gesture recognition technology, the "select" and "confirm" operations must be performed separately through different gestures, which makes operation cumbersome. For example, to change television channels by gesture, the user must first select a channel with a first gesture (such as waving from left to right), the channel number changing once per wave; when the correct channel number is selected, a second gesture (such as waving from top to bottom) switches to that channel. In other words, the gesture recognition technology of existing display devices cannot combine "select" and "confirm" into a single operation; it cannot, like a tablet computer, "touch" one of several candidate icons to select and execute an instruction in one step. This is because a "click" operation requires the click position to be determined accurately.
  • On a tablet computer the hand touches the screen directly, so determining the click position by touch technology is feasible.
  • With gesture recognition, however, the hand usually cannot touch the display unit (especially for a television, where the user is far from the screen during normal use) and can only "point" at a position on the display unit (such as an icon displayed by the display unit).
  • The accuracy of such long-distance "pointing" is very poor: when pointing at the same position on the display unit, different users' gestures may differ, with some pointing slightly left and some slightly right, so it is impossible to determine where the user actually intends to point, and the "click" operation cannot be realized.
  • The technical problem to be solved by the present invention is that the "select" and "confirm" operations must be performed separately in existing gesture recognition; the invention provides a display device, a control method thereof, and a gesture recognition method with which the "select" and "confirm" operations can be completed in one step through gesture recognition.
  • The technical solution adopted to solve this technical problem is a control method of a display device, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.
  • Preferably, the first distance is less than or equal to the length of the user's arm.
  • Preferably, the first distance is less than or equal to 0.5 m and greater than or equal to 0.25 m.
  • Preferably, the virtual 3D control screen covers the entire display screen used for displaying the virtual 3D control screen; or, the virtual 3D control screen is a part of the display screen used for displaying the virtual 3D control screen.
  • Preferably, the virtual 3D control screen is divided into at least two areas, each area corresponding to one control instruction.
  • Preferably, before the gesture recognition unit determines the user's click position on the virtual 3D control screen, the method further includes: the positioning unit determining the position of the user relative to the naked-eye 3D display unit; the gesture recognition unit then determines the user's click position on the virtual 3D control screen according to both the images collected by the image collection unit and the position of the user relative to the naked-eye 3D display unit.
  • Further preferably, the positioning unit determining the position of the user relative to the naked-eye 3D display unit includes: the positioning unit analyzing the images collected by the image collection unit to determine the position of the user relative to the naked-eye 3D display unit.
  • The technical solution adopted to solve this technical problem is also a display device, comprising: a naked-eye 3D display unit capable of displaying a virtual 3D control screen, the virtual distance between the virtual 3D control screen and the user's eyes being equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit for collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit for determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.
  • Preferably, the naked-eye 3D display unit is a television display or a computer display.
  • Preferably, the naked-eye 3D display unit is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit.
  • Preferably, the display device further includes: a positioning unit for determining the position of the user relative to the naked-eye 3D display unit.
  • Further preferably, the positioning unit is configured to analyze the images collected by the image collection unit, so as to determine the position of the user relative to the naked-eye 3D display unit.
  • The technical solution adopted to solve this technical problem is also a gesture recognition method, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.
  • The naked-eye 3D display unit refers to a display unit that allows a user to see a stereoscopic 3D image with the naked eye, without using 3D glasses.
  • The "virtual 3D control screen" refers to a stereoscopic control screen displayed by the naked-eye 3D display unit, which is used to realize control of the display device.
  • The "virtual distance" refers to the distance that the user perceives between the virtual 3D control screen and themselves. The sense of distance is part of stereoscopic perception and is caused by the difference between the images seen by the left and right eyes; as long as the naked-eye 3D display unit displays specific content, the user can be made to feel that the virtual 3D control screen is located at a certain distance in front of them, and even if the user moves away from or closer to the naked-eye 3D display unit, the perceived distance between the virtual 3D control screen and the user remains unchanged.
  • The "execution unit" refers to any unit that can execute the corresponding control instruction; for example, for a channel-change instruction the execution unit is the naked-eye 3D display unit, while for a volume-change instruction the execution unit is the sound unit.
  • In the display device, its control method, and the gesture recognition method of the invention, the naked-eye 3D display unit can present a virtual 3D control screen to the user, and the distance between the virtual 3D control screen and the user is smaller than the distance between the naked-eye 3D display unit and the user. The user therefore feels that the virtual 3D control screen is very close (right in front of them) and can reach out and "click" it accurately; the actions of different users clicking the same position on the virtual 3D control screen are thus the same or similar, so the gesture recognition unit can accurately determine the click position the user intends, thereby realizing a "click" operation that combines "select" and "confirm".
  • The invention is used for the control of display devices and is particularly suitable for the control of televisions.
  • Fig. 1 is a flow chart of the control method of a display device according to Embodiment 1 of the present invention.
  • Fig. 2 is a schematic view of the display device of Embodiment 1 of the present invention displaying a virtual 3D control screen.
  • Embodiment 1 provides a control method of a display device. The display device to which the method applies includes a naked-eye 3D display unit, an image collection unit, and a gesture recognition unit, and preferably further includes a positioning unit.
  • The naked-eye 3D display unit refers to any display unit that enables a user to see a stereoscopic 3D image with the naked eye, without using 3D glasses.
  • Preferably, the naked-eye 3D display unit is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit; all three are known naked-eye 3D display units.
  • In the grating type 3D display unit, a grating is arranged in front of a 2D display device; for the user's left eye and right eye, the grating blocks different regions of the display device, so that the two eyes see different regions of the display device, i.e. different content, thereby achieving the effect of 3D display.
  • In the prism film type 3D display unit, a prism sheet is disposed in front of a 2D display device; through the refraction of the small prisms in the prism sheet, light from different positions of the display device is directed to the user's left and right eyes respectively, so that the two eyes see different content to achieve the 3D effect.
  • In the directional light source type 3D display unit, the display module has a special structure: light sources at different positions (such as backlight sources) emit light in different directions, toward the user's left eye and right eye respectively, so that the left and right eyes see different content to achieve the 3D effect.
  • The image collection unit is used to collect images of the user and may be a known device such as a CCD (charge-coupled device) camera. For convenience, the image collection unit may be disposed near the naked-eye 3D display unit (e.g., fixed above or to the side of the naked-eye 3D display unit), or may be formed integrally with the naked-eye 3D display unit.
  • Specifically, as shown in Fig. 1, the above control method includes the following steps S01 to S04.
  • S01: The naked-eye 3D display unit displays a virtual 3D control screen, the virtual distance between the virtual 3D control screen and the user's eyes being equal to a first distance, and the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes.
  • The virtual 3D control screen is a screen dedicated to control operations on the display device; it contains various control instructions, and by selecting different control instructions the user can realize different controls of the display device.
  • As shown in Fig. 2, the naked-eye 3D display unit 1 displays the virtual 3D control screen 4, and the user feels that the virtual 3D control screen 4 is located at a certain distance (the first distance) in front of them, this first distance being smaller than the distance between the naked-eye 3D display unit 1 and the user. Since the user feels that the virtual 3D control screen 4 is close, the user can make an accurate "click" action at a certain position of the screen, so the display device can more accurately determine what operation the user wants to perform, realizing "click" control.
  • Preferably, the first distance is less than or equal to the length of the user's arm; the user then feels that the virtual 3D control screen 4 can be "touched" by hand, which ensures the accuracy of the click action to the greatest extent.
  • Preferably, the first distance is less than or equal to 0.5 m and greater than or equal to 0.25 m. Within this range, most people neither have to straighten their arms to "reach" the virtual 3D control screen 4 nor feel that the virtual 3D control screen 4 is too close to them.
  • Preferably, the virtual 3D control screen 4 covers the entire display screen used for displaying the virtual 3D control screen 4. That is, when the virtual 3D control screen 4 is displayed, it is the entire display content and the user can see only the virtual 3D control screen 4; the virtual 3D control screen 4 then has a larger area, can accommodate more candidate control instructions, and the click accuracy is higher.
  • Alternatively, the virtual 3D control screen 4 may be a part of the display screen used for displaying the virtual 3D control screen 4. That is, the virtual 3D control screen 4 is displayed together with a regular picture (such as a 3D movie); the virtual 3D control screen 4 seen by the user can be located at the edge or corner of the display, so that the user can see the regular picture and the virtual 3D control screen 4 at the same time and perform control at any time (such as adjusting the volume or changing channels).
  • When the virtual 3D control screen 4 covers the entire display screen used for displaying it, it is preferably displayed only when certain conditions are met, such as when the user issues an instruction, the regular picture still being displayed in other cases; when the virtual 3D control screen 4 is a part of the display screen used for displaying it, it can be displayed continuously.
  • the virtual 3D control screen 4 is divided into at least two areas, and each area corresponds to one control command. That is to say, the virtual 3D control screen 4 can be divided into a plurality of different areas, and different control commands can be executed by clicking different areas, so that a plurality of different operations can be performed through one virtual 3D control screen 4. For example, as shown in FIG.
  • the virtual 3D control screen 4 can be equally divided into a total of 9 rectangular regions of 3 rows and 3 columns, and each rectangular region corresponds to a control command (such as changing the volume, changing the station number, changing the brightness, Exit the virtual 3D control screen 4, etc.).
  • a control command such as changing the volume, changing the station number, changing the brightness, Exit the virtual 3D control screen 4, etc.
  • the virtual 3D control screen 4 corresponds to only one control command (for example, the virtual 3D control screen 4 is a part of the display screen for displaying the virtual 3D control screen 4, the corresponding command is "Enter full screen control screen") feasible.
  • S02: The image collection unit collects images of the user's click action on the virtual 3D control screen.
  • As shown in Fig. 2, the image collection unit 5 fixed above the naked-eye 3D display unit 1 collects images of the click action of the user's hand 3 on the virtual 3D control screen 4. That is, when the naked-eye 3D display unit 1 displays the virtual 3D control screen 4, the image collection unit 5 is turned on to collect images of the user's actions, specifically images of the user's hand 3 clicking the virtual 3D control screen 4.
  • When the virtual 3D control screen 4 is not displayed, the image collection unit 5 can also be turned on, to collect images of the user's other gestures or to determine the user's position.
  • S03 (optional): The positioning unit determines the position (distance and/or angle) of the user relative to the naked-eye 3D display unit. When the user's position relative to the naked-eye 3D display unit 1 differs, the control action is unchanged from the user's point of view (clicking the virtual 3D control screen 4 in front of them), but the images collected by the image collection unit 5 differ; determining the relative position in advance therefore makes the subsequent recognition more accurate.
  • The positioning unit (not shown) can determine the position of the user relative to the naked-eye 3D display unit 1 by analyzing the images collected by the image collection unit 5. For example, when the virtual 3D control screen 4 is displayed, the first image collected by the image collection unit 5 can be used to determine the user's position relative to the naked-eye 3D display unit 1, and the images collected later can be used for gesture recognition.
  • There are various methods of determining the user's position relative to the naked-eye 3D display unit 1 from the collected images; for example, the outline of the user's body or of the user's eyes 2 can be obtained through contour analysis, from which the user's position is determined. Alternatively, infrared rangefinders can be set at two different positions, and the user's position calculated from the distances to the user measured by the two rangefinders.
  • S04: The gesture recognition unit determines the user's click position on the virtual 3D control screen according to the images collected by the image collection unit (and the user's position relative to the naked-eye 3D display unit), and sends the control instruction corresponding to the click position to the corresponding execution unit.
  • Since the relative position of the user and the naked-eye 3D display unit 1 is known and the virtual 3D control screen 4 is located at a certain distance in front of the user, the gesture recognition unit (not shown) can determine the spatial position of the virtual 3D control screen 4 relative to the naked-eye 3D display unit 1 (the virtual 3D control screen 4 necessarily lies on the line connecting the naked-eye 3D display unit 1 and the user). Meanwhile, when the user reaches out a hand 3 to click the virtual 3D control screen 4, the gesture recognition unit can also determine the clicked spatial position (i.e., the position of the hand 3) from the collected images (the position of the image collection unit 5 relative to the naked-eye 3D display unit 1 also being known), and then determine the position on the virtual 3D control screen 4 corresponding to the click position, that is, the control instruction corresponding to the user's gesture; the gesture recognition unit can then send the control instruction to the corresponding execution unit, so that the execution unit executes the corresponding instruction.
  • The "execution unit" refers to any unit that can execute the corresponding control instruction; for example, for a channel-change instruction the execution unit is the naked-eye 3D display unit 1, while for a volume-change instruction the execution unit is the sound unit.
  • If the user's position is not determined, it may be assumed to be a default position, or the position the user wants to click may be determined from the relative positional relationship of the user's hand and body (since the relative positional relationship of the virtual 3D control screen 4 to the user is known).
  • The embodiment further provides a display device controllable by the above method, comprising: a naked-eye 3D display unit 1 for performing display, capable of displaying a virtual 3D control screen 4, the virtual distance between the virtual 3D control screen 4 and the user's eyes 2 being equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit 1 and the user's eyes 2; an image collection unit 5 for collecting images of the user's click action on the virtual 3D control screen 4; and a gesture recognition unit for determining, according to the images collected by the image collection unit 5, the user's click position on the virtual 3D control screen 4, and sending the control instruction corresponding to the click position to the corresponding execution unit.
  • Preferably, the naked-eye 3D display unit 1 is a television display or a computer display.
  • Preferably, the naked-eye 3D display unit 1 is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit.
  • Preferably, the display device further includes: a positioning unit for determining the position of the user relative to the naked-eye 3D display unit 1.
  • Further preferably, the positioning unit is configured to analyze the images collected by the image collection unit 5, so as to determine the position of the user relative to the naked-eye 3D display unit 1.
  • Embodiment 2
  • This embodiment provides a gesture recognition method, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit, and sending the control instruction corresponding to the click position to the corresponding execution unit.
  • The above gesture recognition method is not limited to controlling display devices; it can also be used to control other devices, as long as the gesture recognition unit sends (e.g., wirelessly) the control instruction to the corresponding device.
  • For example, a dedicated gesture recognition system can be used to uniformly control many devices such as televisions, computers, air conditioners, and washing machines. The above are exemplary embodiments; the invention is not limited thereto. Those skilled in the art can make various modifications and improvements without departing from the spirit and essence of the invention, and these modifications and improvements are also considered within the protection scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a display device and a control method thereof, and a gesture recognition method, belonging to the technical field of gesture recognition; it can solve the problem in existing gesture recognition that the selection and confirmation operations must be performed separately. The control method of the display device of the present invention comprises: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit. The present invention can be used for the control of display devices and is particularly suitable for the control of televisions.

Description

Display device and control method thereof, and gesture recognition method

Technical Field

The present invention belongs to the technical field of gesture recognition, and particularly relates to a display device, a control method thereof, and a gesture recognition method.

Background Art

With the development of technology, it has become possible to control display devices (televisions, monitors, etc.) by gesture. A display device having a gesture recognition function includes a display unit for performing display and an image collection unit (camera, webcam, etc.) for capturing gestures; by analyzing the collected images, it can determine the operation the user wants to perform.

In current gesture recognition technology, the "select" and "confirm" operations must be performed separately through different gestures, which makes operation cumbersome. For example, to change television channels by gesture, the user must first select a channel with a first gesture (such as waving from left to right), the channel number changing once per wave; when the correct channel number is selected, a second gesture (such as waving from top to bottom) switches to that channel. In other words, the gesture recognition technology of existing display devices cannot combine "select" and "confirm" into a single operation; it cannot, like a tablet computer, "touch" one of several candidate icons to select and execute an instruction in one step. This is because a "click" operation requires the click position to be determined accurately. On a tablet computer the hand touches the screen directly, so determining the click position by touch technology is feasible. With gesture recognition, however, the hand usually cannot touch the display unit (especially for a television, where the user is far from the screen during normal use) and can only "point" at a position on the display unit (such as an icon displayed by it); the accuracy of such long-distance "pointing" is very poor. When pointing at the same position on the display unit, different users' gestures may differ, with some pointing slightly left and some slightly right, so it is impossible to determine where the user actually intends to point, and the "click" operation cannot be realized.

Summary of the Invention

The technical problem to be solved by the present invention is that, in existing gesture recognition, the "select" and "confirm" operations must be performed separately; the present invention provides a display device, a control method thereof, and a gesture recognition method with which the "select" and "confirm" operations can be completed in one step through gesture recognition.
The technical solution adopted to solve the technical problem of the present invention is a control method of a display device, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.

Preferably, the first distance is less than or equal to the length of the user's arm.

Preferably, the first distance is less than or equal to 0.5 m and greater than or equal to 0.25 m.

Preferably, the virtual 3D control screen covers the entire display screen used for displaying the virtual 3D control screen; or, the virtual 3D control screen is a part of the display screen used for displaying the virtual 3D control screen.

Preferably, the virtual 3D control screen is divided into at least two areas, each area corresponding to one control instruction.

Preferably, before the gesture recognition unit determines the user's click position on the virtual 3D control screen according to the images collected by the image collection unit, the method further includes: the positioning unit determining the position of the user relative to the naked-eye 3D display unit; and the gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit includes: the gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit and the position of the user relative to the naked-eye 3D display unit.
Further preferably, the positioning unit determining the position of the user relative to the naked-eye 3D display unit includes: the positioning unit analyzing the images collected by the image collection unit, so as to determine the position of the user relative to the naked-eye 3D display unit.

The technical solution adopted to solve the technical problem of the present invention is a display device, comprising: a naked-eye 3D display unit capable of displaying a virtual 3D control screen, the virtual distance between the virtual 3D control screen and the user's eyes being equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit for collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit for determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.

Preferably, the naked-eye 3D display unit is a television display or a computer display.

Preferably, the naked-eye 3D display unit is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit.

Preferably, the display device further includes: a positioning unit for determining the position of the user relative to the naked-eye 3D display unit.

Further preferably, the positioning unit is configured to analyze the images collected by the image collection unit, so as to determine the position of the user relative to the naked-eye 3D display unit.

The technical solution adopted to solve the technical problem of the present invention is a gesture recognition method, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending the control instruction corresponding to the click position to the corresponding execution unit.

Here, the naked-eye 3D display unit refers to a display unit that enables the user to see a stereoscopic 3D image with the naked eye, without using 3D glasses.
其中, "虚拟 3D控制画面" 是指由棵眼 3D显示单元显示的 有立体感的控制画面, 其用于实现对显示装置的控制。
其中, "虚拟距离" 是指用户感到的虚拟 3D 控制画面与自 己的距离。 距离感是立体感的一部分, 其是由左右眼所看到的图 像的差别引起的, 故只要棵眼 3D显示单元显示特定的内容, 即可 使用户感到虚拟 3D控制画面位于自己前方一定距离处,即使用户 远离或靠近棵眼 3D显示单元, 其感觉到的虚拟 3D控制画面与自 己的距离始终不变。
其中, "执行单元" 是指可执行相应控制指令的任何单元, 例如, 针对换台指令, 执行单元就是棵眼 3D显示单元, 而针对改 变音量的指令, 执行单元就是发声单元。 在本发明的显示装置及其控制方法、 和手势识别方法中, 棵 目艮 3D显示单元可为用户呈现虚拟 3D控制画面, 且虚拟 3D控制 画面与用户间的距离小于棵眼 3D显示单元与用户间的距离,故用 户会感觉虚拟 3D控制画面离自己很近(就在面前), 可直接伸手 准确地 "点击" 虚拟 3D控制画面, 这样, 不同用户点击虚拟 3D 控制画面的同一位置时的动作是相同或相似的, 从而手势识别单 元可准确地判断用户希望的点击位置, 进而实现 "选择" 与 "确 定" 合一的 "点击" 操作。 本发明用于显示装置的控制, 尤其适用于电视的控制。 附图说明
图 1为本发明的实施例 1的显示装置的控制方法的流程图。 图 2为本发明的实施例 1 的显示装置显示虚拟 3D控制画面 时的示意图。
附图标记: 1、 棵眼 3D显示单元; 2、 用户的眼睛; 3、 用户 的手; 4、 虚拟 3D控制画面; 5、 图像釆集单元。 具体实施方式
为使本领域技术人员更好地理解本发明的技术方案, 下面结 合附图和具体实施方式对本发明作进一步详细描述。 实施例 1 :
This embodiment provides a control method of a display device. The display device to which the method applies includes a naked-eye 3D display unit, an image collection unit, and a gesture recognition unit, and preferably further includes a positioning unit.

Here, the naked-eye 3D display unit refers to any display unit that enables the user to see a stereoscopic 3D image with the naked eye, without using 3D glasses.

Preferably, the naked-eye 3D display unit is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit.

All three of the above display units are known naked-eye 3D display units.

In the grating type 3D display unit, a grating is arranged in front of a 2D display device; for the user's left eye and right eye, the grating blocks different regions of the display device, so that the user's left eye and right eye see different regions of the display device, i.e. the two eyes see different content, thereby achieving the effect of 3D display.
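The following back-of-envelope sketch illustrates the standard design relations for such a grating (parallax barrier); it is an added illustration under assumed dimensions, not values specified by this embodiment:

    # Parallax-barrier (grating) geometry: the panel-to-barrier gap that
    # sends adjacent pixel columns to different eyes, and the slit pitch
    # that keeps this alignment across the panel. Illustrative numbers.

    def barrier_design(pixel_pitch_m: float,
                       eye_separation_m: float,
                       viewing_distance_m: float):
        """Returns (gap, barrier_pitch). By similar triangles, adjacent
        pixel columns (pitch p) reach different eyes (separation e) at
        viewing distance D when p / gap = e / D; the slit pitch is then
        slightly under two pixel pitches."""
        p, e, D = pixel_pitch_m, eye_separation_m, viewing_distance_m
        gap = p * D / e
        barrier_pitch = 2 * p * D / (D + gap)
        return gap, barrier_pitch

    # 0.2 mm sub-pixel columns, 65 mm eye separation, 3 m viewing distance
    # -> gap ~9.2 mm, slit pitch ~0.399 mm.
    print(barrier_design(0.0002, 0.065, 3.0))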
In the prism film type 3D display unit, a prism sheet is disposed in front of a 2D display device; through the refraction of the small prisms in the prism sheet, light from different positions of the display device is directed to the user's left eye and right eye respectively, so that the user's two eyes see different content, achieving the 3D effect.

In the directional light source type 3D display unit, the display module has a special structure: light sources at different positions (such as backlight sources) emit light in different directions, and the light they emit is directed to the user's left eye and right eye respectively, so that the left eye and the right eye see different content, achieving the 3D effect.

The image collection unit is used to collect images of the user and may be a known device such as a CCD (charge-coupled device) camera. For convenience, the image collection unit may be disposed near the naked-eye 3D display unit (e.g., fixed above or to the side of the naked-eye 3D display unit), or designed as an integral structure with the naked-eye 3D display unit.

Specifically, as shown in Fig. 1, the above control method includes the following steps S01 to S04.
S01. The naked-eye 3D display unit displays a virtual 3D control screen, the virtual distance between the virtual 3D control screen and the user's eyes being equal to a first distance, and the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes.

Here, the virtual 3D control screen is a screen dedicated to control operations on the display device; it contains various control instructions for the naked-eye 3D display unit, and by selecting different control instructions the user can realize different controls of the display device.

As shown in Fig. 2, the naked-eye 3D display unit 1 displays the virtual 3D control screen 4, and the user feels that the virtual 3D control screen 4 is located at a certain distance (the first distance) in front of them, this first distance being smaller than the distance between the naked-eye 3D display unit 1 and the user. Since the user feels that the virtual 3D control screen 4 is relatively close, the user can reach out a hand 3 and make an accurate "click" action at a certain position of the screen, so that the display device can also more accurately determine what operation the user wants to perform, realizing "click" control.

Preferably, the first distance is less than or equal to the length of the user's arm. When the first distance is less than or equal to the length of the user's arm, the user feels that the virtual 3D control screen 4 can be "touched" by reaching out a hand, which ensures the accuracy of the click action to the greatest extent.

Preferably, the first distance is less than or equal to 0.5 m and greater than or equal to 0.25 m. Within this range of the first distance, most people neither have to straighten their arms in an effort to "reach" the virtual 3D control screen 4, nor feel that the virtual 3D control screen 4 is too close to them.
Preferably, the virtual 3D control screen 4 covers the entire display screen used for displaying the virtual 3D control screen 4. That is, when the virtual 3D control screen 4 is displayed, it constitutes the entire display content and the user can see only the virtual 3D control screen 4; the virtual 3D control screen 4 then has a larger area, can accommodate more candidate control instructions, and the click accuracy is higher.

Preferably, as another mode of this embodiment, the virtual 3D control screen 4 may also be a part of the entire display screen used for displaying the virtual 3D control screen 4. That is, the virtual 3D control screen 4 is displayed together with a regular picture (such as a 3D movie), and the virtual 3D control screen 4 seen by the user may be located at the edge or corner of the display, so that the user can see the regular picture and the virtual 3D control screen 4 at the same time and perform control at any time (such as adjusting the volume or changing channels).

Here, when the virtual 3D control screen 4 covers the entire display screen used for displaying the virtual 3D control screen 4, it is preferably displayed only when certain conditions are met (for example, when the user issues an instruction), the regular picture still being displayed otherwise; when the virtual 3D control screen 4 is a part of the display screen used for displaying the virtual 3D control screen 4, it may be displayed continuously.

Preferably, the virtual 3D control screen 4 is divided into at least two areas, each area corresponding to one control instruction. That is, the virtual 3D control screen 4 can be divided into several different areas, and clicking different areas executes different control instructions, so that multiple different operations can be performed through one virtual 3D control screen 4. For example, as shown in Fig. 2, the virtual 3D control screen 4 can be equally divided into 9 rectangular areas in 3 rows by 3 columns, each rectangular area corresponding to one control instruction (such as changing the volume, changing the channel number, changing the brightness, or exiting the virtual 3D control screen 4).
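Such a region-to-instruction mapping could look like the following sketch (the command names and the normalized click coordinates are hypothetical placeholders; the embodiment does not define a specific programming interface):

    # Mapping a click on the virtual 3D control screen to a control
    # instruction, for the 3-row x 3-column layout described above.
    # Command names are hypothetical placeholders.

    COMMANDS = [
        ["volume_up",   "channel_up",          "brightness_up"],
        ["volume_down", "channel_down",        "brightness_down"],
        ["mute",        "exit_control_screen", "enter_full_screen"],
    ]

    def region_command(u: float, v: float) -> str:
        """u, v: click coordinates normalized to [0, 1) on the virtual
        control screen (u across, v down). Each of the 9 equal rectangles
        corresponds to one control instruction."""
        col = min(int(u * 3), 2)
        row = min(int(v * 3), 2)
        return COMMANDS[row][col]

    print(region_command(0.5, 0.1))  # top row, middle column -> "channel_up"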
Of course, it is also feasible for the virtual 3D control screen 4 to correspond to only one control instruction (for example, when the virtual 3D control screen 4 is a part of the display screen used for displaying the virtual 3D control screen 4, its corresponding instruction may be "enter the full-screen control screen").

S02. The image collection unit collects images of the user's click action on the virtual 3D control screen.

As shown in Fig. 2, the image collection unit 5 fixed above the naked-eye 3D display unit 1 collects images of the click action of the user's hand 3 on the virtual 3D control screen 4. That is, when the naked-eye 3D display unit 1 displays the virtual 3D control screen 4, the image collection unit 5 is turned on to collect images of the user's actions, specifically images of the action of the user's hand 3 clicking the virtual 3D control screen 4.

Of course, when the virtual 3D control screen 4 is not displayed, the image collection unit 5 may also be turned on, so as to collect images of the user's other gestures or to determine the user's position.

S03. Optionally, the positioning unit determines the position (distance and/or angle) of the user relative to the naked-eye 3D display unit.
Obviously, when the relative position of the user and the naked-eye 3D display unit 1 differs, although the control action made by the user does not change from the user's point of view (in each case clicking the virtual 3D control screen 4 in front of them), the images collected by the image collection unit 5 are not the same. For this reason, it is best to determine the relative positional relationship between the user and the naked-eye 3D display unit 1 in advance, so as to perform more accurate recognition during gesture recognition.

Specifically, as a preferred mode, the positioning unit (not shown) can determine the position of the user relative to the naked-eye 3D display unit 1 by analyzing the images collected by the image collection unit 5. For example, when the virtual 3D control screen 4 is displayed, the first image collected by the image collection unit 5 can be used to determine the user's position relative to the naked-eye 3D display unit 1, and the images collected afterwards can be used for gesture recognition. There are also various methods of determining the user's position relative to the naked-eye 3D display unit 1 from the collected images; for example, the outline of the user's body or of the user's eyes 2 can be obtained through contour analysis, from which the user's position is determined.

Of course, there are many other methods of determining the position of the user relative to the naked-eye 3D display unit 1; for example, infrared rangefinders can be set at two different positions, and the user's position calculated from the distances to the user measured respectively by the two infrared rangefinders.
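For the two-rangefinder variant, the user's position follows from intersecting the two measured distance circles; a sketch under an assumed geometry (both sensors on the display's horizontal axis, a 2D simplification not mandated by this embodiment):

    # Locating the user from two rangefinder readings (trilateration).
    import math

    def locate_user(x1: float, r1: float, x2: float, r2: float):
        """Sensors at (x1, 0) and (x2, 0) on the display plane report
        distances r1 and r2 to the user; returns the user's (x, y),
        taking y > 0 (in front of the display)."""
        d = x2 - x1
        a = (r1**2 - r2**2 + d**2) / (2 * d)  # offset from sensor 1 along the baseline
        y_sq = r1**2 - a**2
        if y_sq < 0:
            raise ValueError("inconsistent distance readings")
        return x1 + a, math.sqrt(y_sq)

    # Sensors 1 m apart reading 2.5 m and 2.7 m -> user ~2.5 m out,
    # slightly to the left of sensor 1.
    print(locate_user(0.0, 2.5, 1.0, 2.7))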
Of course, it is also feasible not to perform the above positioning determination, because for the naked-eye 3D display unit 1, to guarantee the viewing effect, the user is usually located at a specific position in front of the naked-eye 3D display unit 1, so a default user position can also be assumed.

S04. The gesture recognition unit determines the user's click position on the virtual 3D control screen according to the images collected by the image collection unit (and the user's position relative to the naked-eye 3D display unit), and sends the control instruction corresponding to the click position to the corresponding execution unit.

As described above, the relative position of the user and the naked-eye 3D display unit 1 is known, and the virtual 3D control screen 4 is located at a certain distance in front of the user; therefore, as shown in Fig. 2, the gesture recognition unit (not shown) can determine the spatial position of the virtual 3D control screen 4 relative to the naked-eye 3D display unit 1 (since the virtual 3D control screen 4 necessarily lies on the line connecting the naked-eye 3D display unit 1 and the user). Meanwhile, when the user reaches out a hand 3 to click the virtual 3D control screen 4, the gesture recognition unit can also determine the clicked spatial position (i.e., the position of the hand 3) from the collected images (the position of the image collection unit 5 relative to the naked-eye 3D display unit 1 also being known), and then determine the position on the virtual 3D control screen 4 corresponding to the click position, that is, determine the control instruction corresponding to the user's gesture. In this way, the gesture recognition unit can send this control instruction to the corresponding execution unit, so that the execution unit executes the corresponding instruction to realize the control.
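The geometric reasoning of step S04 can be pictured with a short sketch (an added illustration, not the actual algorithm of this embodiment: the coordinate frame, the upright-user assumption, and the screen dimensions are all assumed):

    # Express the hand position, reconstructed from the collected images in
    # the display's coordinate frame, in the virtual control screen's local
    # coordinates. Frames and dimensions are illustrative assumptions.
    import numpy as np

    def click_position(eyes, display_center, hand,
                       first_distance, screen_w, screen_h):
        """All positions are 3D points in the display's frame. The virtual
        screen is centered on the eyes-to-display line, first_distance in
        front of the eyes, facing the user. Returns normalized (u, v) in
        [0, 1], or None if the hand misses the screen."""
        normal = display_center - eyes
        normal = normal / np.linalg.norm(normal)   # user's viewing direction
        center = eyes + first_distance * normal    # virtual screen center
        up_hint = np.array([0.0, 1.0, 0.0])        # assume an upright user
        right = np.cross(up_hint, normal)
        right = right / np.linalg.norm(right)      # user's "rightward" axis
        up = np.cross(normal, right)
        offset = hand - center
        u = 0.5 + np.dot(offset, right) / screen_w
        v = 0.5 - np.dot(offset, up) / screen_h
        return (u, v) if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 else None

    # Display at the origin, user 3 m away, 0.6 m x 0.4 m control screen
    # placed 0.4 m in front of the eyes.
    eyes = np.array([0.0, 0.0, 3.0])
    hand = np.array([0.1, -0.05, 2.6])
    print(click_position(eyes, np.zeros(3), hand, 0.4, 0.6, 0.4))

The resulting (u, v) pair could then be passed to a region lookup such as the region_command sketch above to select the control instruction to send to the execution unit.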
其中, "执行单元" 是指可执行相应控制指令的任何单元, 例如, 针对换台指令, 执行单元就是棵眼 3D显示单元 1, 而针对 改变音量的指令, 执行单元就是发声单元。
如前所述,若用户与棵眼 3D显示单元 1的相对位置不确定(即 未进行步骤 S03), 则可以按照默认位置判断用户位置, 或者, 也 可通过判断用户的手与身体的相对位置关系判断用户要点击什么 位置(因为虚拟 3D控制画面 4与用户的相对位置关系已知)。 本实施例还提供一种可使用上述方法进行控制的显示装置, 其包括:用于进行显示的棵眼 3D显示单元 1,其能够显示虚拟 3D 控制画面 4, 所述虚拟 3D控制画面 4与用户眼睛 2间的虚拟距离 等于第一距离,所述第一距离小于棵眼 3D显示单元 1与用户眼睛 2间的距离; 图像釆集单元 5, 其用于釆集用户对虚拟 3D控制画 面 4 的点击动作的图像; 手势识别单元, 其用于根据图像釆集单 元 5所釆集的图像判断用户对虚拟 3D控制画面 4的点击位置,并 将点击位置所对应的控制指令发送给相应的执行单元。
优选的, 棵眼 3D显示单元 1为电视显示屏或电脑显示屏。 优选的, 棵眼 3D显示单元 1为光栅式 3D显示单元、 棱镜膜 式 3D显示单元、 指向光源式 3D显示单元中的任意一种。
优选的, 显示装置还包括: 定位单元, 用于判断用户相对棵 眼 3D显示单元 1的位置。
Further preferably, the positioning unit is configured to analyze the images collected by the image collection unit 5, so as to determine the position of the user relative to the naked-eye 3D display unit 1.

Embodiment 2:

This embodiment provides a gesture recognition method, comprising: a naked-eye 3D display unit displaying a virtual 3D control screen, wherein the virtual distance between the virtual 3D control screen and the user's eyes is equal to a first distance, the first distance being smaller than the distance between the naked-eye 3D display unit and the user's eyes; an image collection unit collecting images of the user's click action on the virtual 3D control screen; and a gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit, and sending the control instruction corresponding to the click position to the corresponding execution unit.

That is, the above gesture recognition method is not limited to controlling display devices; it can also be used to control other devices, as long as the gesture recognition unit sends (e.g., wirelessly) the control instruction to the corresponding device. For example, a dedicated gesture recognition system can be used to uniformly control many devices such as televisions, computers, air conditioners, and washing machines.

The above are merely exemplary embodiments, and the present invention is not limited thereto. Those of ordinary skill in the art can make various modifications and improvements without departing from the spirit and essence of the present invention, and these modifications and improvements are also considered to fall within the protection scope of the present invention.

Claims

1. A control method of a display device, characterized by comprising the steps of:

a naked-eye 3D display unit displaying a virtual 3D control screen, wherein a virtual distance between the virtual 3D control screen and a user's eyes is equal to a first distance, and the first distance is smaller than a distance between the naked-eye 3D display unit and the user's eyes;

an image collection unit collecting images of the user's click action on the virtual 3D control screen;

a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending a control instruction corresponding to the click position to a corresponding execution unit.
2. The control method of a display device according to claim 1, characterized in that the first distance is less than or equal to the length of the user's arm.

3. The control method of a display device according to claim 1, characterized in that the first distance is less than or equal to 0.5 m and greater than or equal to 0.25 m.

4. The control method of a display device according to claim 1, characterized in that the virtual 3D control screen covers the entire display screen used for displaying the virtual 3D control screen; or the virtual 3D control screen is a part of the display screen used for displaying the virtual 3D control screen.

5. The control method of a display device according to claim 1, characterized in that the virtual 3D control screen is divided into at least two areas, each area corresponding to one control instruction.
6. The control method of a display device according to any one of claims 1 to 5, characterized in that, before the step of the gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit, the method further comprises: a step of a positioning unit determining the position of the user relative to the naked-eye 3D display unit; and the step of the gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit comprises: the gesture recognition unit determining the user's click position on the virtual 3D control screen according to the images collected by the image collection unit and the position of the user relative to the naked-eye 3D display unit.

7. The control method of a display device according to claim 6, characterized in that the step of the positioning unit determining the position of the user relative to the naked-eye 3D display unit comprises: the positioning unit analyzing the images collected by the image collection unit, so as to determine the position of the user relative to the naked-eye 3D display unit.
8. A display device, characterized by comprising:

a naked-eye 3D display unit capable of displaying a virtual 3D control screen, wherein a virtual distance between the virtual 3D control screen and a user's eyes is equal to a first distance, and the first distance is smaller than a distance between the naked-eye 3D display unit and the user's eyes;

an image collection unit for collecting images of the user's click action on the virtual 3D control screen;

a gesture recognition unit for determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending a control instruction corresponding to the click position to a corresponding execution unit.

9. The display device according to claim 8, characterized in that the naked-eye 3D display unit is a television display or a computer display.

10. The display device according to claim 8, characterized in that the naked-eye 3D display unit is any one of a grating (parallax barrier) type 3D display unit, a prism film type 3D display unit, and a directional light source type 3D display unit.
11. The display device according to any one of claims 8 to 10, characterized by further comprising: a positioning unit for determining the position of the user relative to the naked-eye 3D display unit.

12. The display device according to claim 11, characterized in that the positioning unit is configured to analyze the images collected by the image collection unit, so as to determine the position of the user relative to the naked-eye 3D display unit.

13. A gesture recognition method, characterized by comprising:

a naked-eye 3D display unit displaying a virtual 3D control screen, wherein a virtual distance between the virtual 3D control screen and a user's eyes is equal to a first distance, and the first distance is smaller than a distance between the naked-eye 3D display unit and the user's eyes;

an image collection unit collecting images of the user's click action on the virtual 3D control screen;

a gesture recognition unit determining, according to the images collected by the image collection unit, the user's click position on the virtual 3D control screen, and sending a control instruction corresponding to the click position to a corresponding execution unit.
PCT/CN2014/078074 2013-10-31 2014-05-22 Display device and control method thereof, and gesture recognition method WO2015062251A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/426,012 US20160041616A1 (en) 2013-10-31 2014-05-22 Display device and control method thereof, and gesture recognition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310529219.6A CN103529947A (zh) 2013-10-31 2013-10-31 显示装置及其控制方法、手势识别方法
CN201310529219.6 2013-10-31

Publications (1)

Publication Number Publication Date
WO2015062251A1 true WO2015062251A1 (zh) 2015-05-07

Family

ID=49932020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/078074 WO2015062251A1 (zh) 2013-10-31 2014-05-22 Display device and control method thereof, and gesture recognition method

Country Status (3)

Country Link
US (1) US20160041616A1 (zh)
CN (1) CN103529947A (zh)
WO (1) WO2015062251A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529947A (zh) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof, and gesture recognition method
CN103530060B (zh) * 2013-10-31 2016-06-22 京东方科技集团股份有限公司 Display device and control method thereof, and gesture recognition method
CN109961478A (zh) * 2017-12-25 2019-07-02 深圳超多维科技有限公司 Naked-eye stereoscopic display method, apparatus and device
CN112089589B (zh) * 2020-05-22 2023-04-07 未来穿戴技术有限公司 Control method of a neck massager, neck massager and storage medium
CN112613384B (zh) * 2020-12-18 2023-09-19 安徽鸿程光电有限公司 Gesture recognition method, gesture recognition apparatus and control method of interactive display device
CN112613389A (zh) * 2020-12-18 2021-04-06 上海影创信息科技有限公司 Eye gesture control method and system, and VR glasses thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101465957A (zh) * 2008-12-30 2009-06-24 应旭峰 System for realizing remote-control interaction in a virtual three-dimensional scene
CN101655739A (zh) * 2008-08-22 2010-02-24 原创奈米科技股份有限公司 Device for three-dimensional virtual input and simulation
CN102508546A (zh) * 2011-10-31 2012-06-20 冠捷显示科技(厦门)有限公司 User interaction interface for 3D virtual projection and virtual touch, and realization method thereof
CN102769802A (zh) * 2012-06-11 2012-11-07 Xi'an Jiaotong University Human-computer interaction system of a smart television and interaction method thereof
CN103529947A (zh) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof, and gesture recognition method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2296400A (en) * 1994-12-16 1996-06-26 Sharp Kk Autostereoscopic display having a high resolution 2D mode
JP2004334590A (ja) * 2003-05-08 2004-11-25 Denso Corp Operation input device
DE102005017313A1 (de) * 2005-04-14 2006-10-19 Volkswagen Ag Method for displaying information in a means of transport and instrument cluster for a motor vehicle
EP2372512A1 (en) * 2010-03-30 2011-10-05 Harman Becker Automotive Systems GmbH Vehicle user interface unit for a vehicle electronic device
US20110310003A1 (en) * 2010-05-21 2011-12-22 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Image display device and method of displaying images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101655739A (zh) * 2008-08-22 2010-02-24 原创奈米科技股份有限公司 Device for three-dimensional virtual input and simulation
CN101465957A (zh) * 2008-12-30 2009-06-24 应旭峰 System for realizing remote-control interaction in a virtual three-dimensional scene
CN102508546A (zh) * 2011-10-31 2012-06-20 冠捷显示科技(厦门)有限公司 User interaction interface for 3D virtual projection and virtual touch, and realization method thereof
CN102769802A (zh) * 2012-06-11 2012-11-07 Xi'an Jiaotong University Human-computer interaction system of a smart television and interaction method thereof
CN103529947A (zh) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof, and gesture recognition method

Also Published As

Publication number Publication date
CN103529947A (zh) 2014-01-22
US20160041616A1 (en) 2016-02-11

Similar Documents

Publication Publication Date Title
WO2015062247A1 (zh) Display device and control method thereof, gesture recognition method, and head-mounted display device
JP6480434B2 (ja) System and method for direct pointing detection for interaction with a digital device
WO2015062251A1 (zh) Display device and control method thereof, and gesture recognition method
JP4900741B2 (ja) Image recognition device, operation determination method, and program
RU2455676C2 (ru) Method of controlling a device using gestures and a 3D sensor for implementing it
KR101074940B1 (ko) Stereoscopic image interworking system
US20180136466A1 (en) Glass type terminal and control method therefor
US20150035752A1 (en) Image processing apparatus and method, and program therefor
US20110304650A1 (en) Gesture-Based Human Machine Interface
US20120056989A1 (en) Image recognition apparatus, operation determining method and program
US20130154913A1 (en) Systems and methods for a gaze and gesture interface
CN106919294B (zh) 3D touch interaction device, touch interaction method thereof, and display device
JP5114795B2 (ja) Image recognition device, operation determination method, and program
WO2015062248A1 (zh) Display device and control method thereof, and gesture recognition method
WO2015027574A1 (zh) 3D glasses, 3D display system and 3D display method
KR20140107229A (ko) Method and system for responding to user's selection gesture of object displayed in three dimensions
US20150341626A1 (en) 3d display device and method for controlling the same
WO2020019548A1 (zh) Naked-eye 3D display method, apparatus, device and medium based on human-eye tracking
CN103176605A (zh) Gesture recognition control device and control method
JP2012238293A (ja) Input device
WO2013149475A1 (zh) User interface control method and device
CN106327583A (zh) Virtual reality device for panoramic image capture and implementation method thereof
WO2018161564A1 (zh) Gesture recognition system and method, and display device
TW202018486A (zh) Multi-screen operation method and electronic system using the same
CN111176425A (zh) Multi-screen operation method and electronic system using the same

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14426012

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14857941

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 07/07/2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14857941

Country of ref document: EP

Kind code of ref document: A1