WO2019237499A1 - Three-dimensional image display device based on eye tracking and implementation method thereof - Google Patents


Info

Publication number
WO2019237499A1
WO2019237499A1 (PCT/CN2018/101061)
Authority
WO
WIPO (PCT)
Prior art keywords
time period
user
dimensional image
photos
eyeball
Prior art date
Application number
PCT/CN2018/101061
Other languages
English (en)
French (fr)
Inventor
李新福
Original Assignee
广东康云多维视觉智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 广东康云多维视觉智能科技有限公司
Publication of WO2019237499A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0442 Handling or displaying different aspect ratios, or changing the aspect ratio

Definitions

  • The invention relates to the field of three-dimensional image display equipment, and in particular to a three-dimensional image display device based on eye tracking and a method for implementing the same.
  • Such products mainly interact through a touch screen. If the three-dimensional image device needs to be placed somewhere difficult for people to touch, for example at a height or behind a fence, it cannot meet the need for customer interaction.
  • An object of the present invention is therefore to provide a three-dimensional image display device based on eye tracking that enables contactless operation, and a method for implementing the same.
  • A three-dimensional image display device based on eye tracking, including:
  • a display module configured to display one or more three-dimensional images, where the three-dimensional images include a three-dimensional model of at least one place and/or a three-dimensional model of at least one item;
  • a shooting module configured to take at least two photos of a user within a first time period;
  • a processor configured to obtain the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period, and to change the state of the three-dimensional image according to the eyeball trajectory;
  • changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
  • The processor is specifically configured to obtain the user's eyeball trajectory in the first time period from two of the at least two photos taken by the shooting module in the first time period.
  • When obtaining the eyeball trajectory from two photos, the processor first selects two of the at least two photos, then identifies a first position coordinate of the user's eyeball in the earlier-taken of the two selected photos and a second position coordinate of the user's eyeball in the later-taken photo, and finally subtracts the first position coordinate from the second position coordinate to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
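The subtraction described above amounts to a simple two-dimensional vector difference. A minimal sketch follows; the eyeball-detection step that produces the coordinates is assumed, not implemented, and the function name is illustrative:

```python
from typing import Tuple

Point = Tuple[float, float]

def gaze_vector(first: Point, second: Point) -> Point:
    """Return the eyeball trajectory vector for the first time period:
    the eyeball position in the later photo minus the eyeball position
    in the earlier photo."""
    return (second[0] - first[0], second[1] - first[1])
```

For example, if the eyeball is detected at (2.0, 3.0) in the earlier photo and at (5.0, 1.0) in the later one, the trajectory vector is (3.0, -2.0), i.e. a movement to the right and downward in image coordinates.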
  • The shooting module is further configured to take multiple photos of the user within a second time period;
  • the processor is further configured to obtain the number of times the user blinks in the second time period from the multiple photos taken by the shooting module in the second time period, and to change the state of the three-dimensional image according to the number of blinks.
  • A memory is included, configured to store the one or more three-dimensional images; the memory may be a local memory or a memory deployed on a remote server.
  • A method for implementing a three-dimensional image display device based on eye tracking includes the following steps: displaying one or more three-dimensional images through a display module; taking at least two photos of a user within a first time period through a shooting module; obtaining, by a processor, the user's eyeball trajectory in the first time period from the at least two photos; and changing, by the processor, the state of the three-dimensional image according to the eyeball trajectory.
  • The three-dimensional images include a three-dimensional model of at least one place and/or a three-dimensional model of at least one item;
  • changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
  • The step of obtaining, by the processor, the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period is specifically:
  • obtaining, by the processor, the user's eyeball trajectory in the first time period from two of the at least two photos taken by the shooting module in the first time period.
  • The step of obtaining the eyeball trajectory from two of the at least two photos includes: selecting two of the at least two photos; identifying a first position coordinate of the user's eyeball in the earlier-taken of the two selected photos and a second position coordinate of the user's eyeball in the later-taken photo; and subtracting the first position coordinate from the second position coordinate to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
  • The number of blinks of the user in a second time period is obtained by the processor from multiple photos taken by the shooting module in the second time period, and the state of the three-dimensional image is changed according to the number of blinks.
  • The one or more three-dimensional images are stored through a memory, which may be a local memory or a memory deployed on a remote server.
  • The beneficial effect of the present invention is that it recognizes the user's eyeball trajectory by taking multiple photos of the user and changes the state of the three-dimensional image according to that trajectory, realizing contactless interaction and allowing the three-dimensional image display module to be used in a wider range of settings.
  • FIG. 1 is a block diagram of a three-dimensional image display device based on eye tracking according to a specific embodiment of the present invention
  • FIG. 2 is a flowchart of a method for implementing a three-dimensional image display device based on eye tracking according to a specific embodiment of the present invention.
  • Although the terms first, second, third, etc. may be used in the present disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish elements of the same type from each other.
  • A first element may also be referred to as a second element, and similarly, a second element may also be referred to as a first element.
  • The use of any and all examples or exemplary language ("e.g.," "such as," etc.) provided herein is intended merely to better illuminate embodiments of the invention and does not impose a limitation on the scope of the invention unless otherwise claimed.
  • Referring to FIG. 1, this embodiment discloses a three-dimensional image display device based on eye tracking.
  • The three-dimensional image display device includes a display module 101, a shooting module 102, and a processor 103.
  • the display module 101 is configured to display one or more three-dimensional images, where the three-dimensional images include a three-dimensional model of at least one place and / or a three-dimensional model of at least one item.
  • the display module 101 may be a display module such as a display screen, a projector, or an air imaging device.
  • the three-dimensional image may be a three-dimensional model of an exhibition hall, and the exhibition hall may contain three-dimensional models of one or more items. For example, the user can see through the display module 101 that several items are placed in an exhibition hall.
  • the three-dimensional image may be a three-dimensional model of a single item, such as a three-dimensional model of a car.
  • the shooting module 102 is configured to take at least two photos of the user in a first time period.
  • the first time period can be set according to actual needs, for example, it can be set to 0.1 seconds, 0.5 seconds, 1 second, etc.
  • The shooting module 102 should take at least two photos, for example two photos within 0.1 seconds, or 15 photos within 0.5 seconds.
  • The shooting module may be a camera or a video recorder.
  • In the present invention, a photo is understood to include an image frame of a video, so shooting a video is also considered to be taking several photos.
  • Those skilled in the art can set the length of the first time period, and the number of photos the shooting module 102 takes within it, according to the chosen shooting module 102 and the speed at which the human eye moves.
  • The processor 103 is configured to obtain the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module 102 in the first time period.
  • The processor 103 may trace the user's eyeball trajectory from all photos taken in the first time period: it identifies the eyeball position in each photo in turn, plots a curve through those positions in chronological order in the same coordinate system, and uses this curve as the eyeball trajectory.
  • Alternatively, the processor 103 may trace the trajectory from only a subset of the photos taken during the first time period, where the subset may be extracted from all the photos either at random or according to a rule.
  • For example, suppose the shooting module 102 takes photos numbered 1 to 10.
  • Those skilled in the art can use a random function to extract at least two of the ten photos, may always extract the two photos numbered 1 and 10, or may extract all even-numbered photos.
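The three extraction rules mentioned above can be sketched as follows; the function and strategy names are illustrative, not taken from the disclosure:

```python
import random

def sample_photos(photos, strategy="endpoints"):
    """Select a subset of the photos taken in the first time period."""
    if len(photos) < 2:
        raise ValueError("at least two photos are required")
    if strategy == "random":
        # Random extraction: at least two photos, kept in shooting order.
        indices = sorted(random.sample(range(len(photos)), 2))
        return [photos[i] for i in indices]
    if strategy == "endpoints":
        # Fixed extraction of the first and last photos (e.g. numbers 1 and 10).
        return [photos[0], photos[-1]]
    if strategy == "even":
        # Extract every photo with an even number (1-based, as in the text).
        return [p for i, p in enumerate(photos, start=1) if i % 2 == 0]
    raise ValueError("unknown strategy: " + strategy)
```

With photos numbered 1 to 10, the "endpoints" strategy returns photos 1 and 10, and the "even" strategy returns photos 2, 4, 6, 8, and 10.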
  • the processor 103 is further configured to change a state of the three-dimensional image according to the eyeball trajectory.
  • The processor 103 may treat the eyeball trajectory in the first time period as the user's input signal for that period, and change the state of the three-dimensional image according to this input signal.
  • Changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
  • For example, where the three-dimensional image is an exhibition hall, when the user's eyeball moves to the right, the processor 103 can move the view of the left (or right) side of the hall to the center of the display module 101.
  • Likewise, the user can use the eyeball trajectory to change the viewing angle of a product's three-dimensional model.
  • The product may be one that can change shape, such as a car whose door can open.
  • The user may open or close the door using a specific eyeball trajectory.
  • For example, those skilled in the art may use an "o"-shaped eyeball trajectory as the input signal for opening or closing the door.
  • The user can also enlarge or shrink an item through a specific eyeball trajectory.
  • Here, size refers to the relative size of the three-dimensional image.
  • In other embodiments, the user can change an item's color through a specific eyeball trajectory; for example, moving the eyeball to the left or right can switch the item's color.
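How an "o"-shaped trajectory is recognized is left open by the text. One plausible heuristic, offered purely as an assumption rather than the disclosed method, is to check that the gaze path closes on itself and that its heading turns through roughly 360 degrees:

```python
import math

def is_o_trajectory(points, closure_tol=0.2, turn_tol=45.0):
    """Heuristically decide whether a sequence of gaze positions traces
    an 'o' shape: the path must end near where it started, and its
    heading must turn through roughly 360 degrees overall."""
    if len(points) < 4:
        return False
    # Displacement vectors between consecutive gaze positions.
    segs = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(points, points[1:])]
    # Accumulate the signed turning angle between consecutive segments.
    total_turn = 0.0
    for (ax, ay), (bx, by) in zip(segs, segs[1:]):
        cross = ax * by - ay * bx
        dot = ax * bx + ay * by
        total_turn += math.degrees(math.atan2(cross, dot))
    path_len = sum(math.hypot(dx, dy) for dx, dy in segs)
    (x0, y0), (xn, yn) = points[0], points[-1]
    closed = math.hypot(xn - x0, yn - y0) <= closure_tol * path_len
    return closed and abs(abs(total_turn) - 360.0) <= turn_tol
```

A sampled circle of gaze points passes both checks, while a straight left-to-right sweep fails the closure check and accumulates no turning.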
  • In three-dimensional images that contain both places and items, the processor 103 may determine which object the user has selected for operation by recognizing a specific eyeball trajectory of the user. For example, the user may first move the object to be operated to the center of the display module, and then designate a certain item in the three-dimensional image as the operation target through a specific eyeball trajectory. Once an item is selected as the operation target, the processor 103 changes the state of that item only; that is, the present invention can operate on a part of the three-dimensional image.
  • To reduce the amount of data processing and increase the processing speed of the processor 103, those skilled in the art may select only a subset of the photos taken in the first time period as the data to be processed, but at least two photos should be selected. If the eyeball cannot be recognized in some photos, a mechanism for replacing the selected photos should be provided to ensure that the two selected photos are valid.
  • To reduce the data volume further, those skilled in the art may select only two of the photos taken in the first time period, identify the position coordinates of the eyeball in both selected photos in the same coordinate system, and subtract the eyeball's position coordinate in the earlier photo from its coordinate in the later photo to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
  • The processor 103 is further configured to obtain the number of times the user blinks in a second time period from multiple photos taken by the shooting module 102 in the second time period, and to change the state of the three-dimensional image according to the number of blinks.
  • The processor 103 may examine the photos taken in the second time period in sequence, attempting to recognize the eyeball in each, and mark every photo in which no eyeball is recognized.
  • An isolated unmarked photo, or a run of consecutive unmarked photos, serves as a counting point, and the number of counting points is the number of blinks.
  • An isolated unmarked photo is one whose immediately preceding and following photos are both marked. For example, suppose 10 photos are taken during the second time period, with 1 denoting a marked photo and 0 an unmarked one.
  • If the 10 photos are marked 1010001011, the user can be considered to have blinked three times during the second time period.
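A minimal sketch of this counting rule exactly as stated (each maximal run of unmarked photos is one counting point); the eyeball-recognition step that produces the marks is assumed, and the function name is illustrative:

```python
def count_blinks(marks):
    """Count blinks from a per-photo mark sequence, where marks[i] is 1
    when no eyeball was recognized in photo i (the photo is "marked")
    and 0 otherwise.  Per the rule stated in the text, each maximal run
    of unmarked photos is one counting point, and the number of
    counting points is the number of blinks."""
    blinks = 0
    previous = 1  # treat the sequence as if preceded by a marked photo
    for mark in marks:
        if mark == 0 and previous == 1:
            blinks += 1  # a new run of unmarked photos begins
        previous = mark
    return blinks
```

Applied to the worked example, the mark sequence 1010001011 contains three runs of unmarked photos (an isolated photo, a run of three, and another isolated photo), so the function reports three blinks, matching the text.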
  • The processor 103 may change the state of the three-dimensional image according to the number of blinks in the second time period. Adding a blink-count judgment to the eyeball trajectory yields more ways to interact: for example, the user can select an item in the three-dimensional image by blinking twice and change the viewing angle through the eyeball trajectory.
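Combining the two input channels can be sketched as a small dispatch function. The command names and the particular blink/trajectory mapping below are illustrative assumptions, not taken from the disclosure:

```python
def interpret_input(blink_count, trajectory):
    """Map a blink count and an eyeball-trajectory vector to a display
    command: blinking twice selects an item; otherwise the dominant
    component of the trajectory vector rotates the view."""
    if blink_count == 2:
        return "select_item"
    dx, dy = trajectory
    if dx == 0 and dy == 0:
        return "idle"  # no eye movement and no recognized blink gesture
    if abs(dx) >= abs(dy):
        return "rotate_right" if dx > 0 else "rotate_left"
    return "rotate_up" if dy > 0 else "rotate_down"
```

For instance, two blinks select the item at the center of the display, while a mostly horizontal rightward trajectory rotates the view to the right.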
  • The device may further include a memory 104 for storing the one or more three-dimensional images. The memory 104 may be a local memory such as a hard disk or USB flash drive, or a memory deployed on a remote server.
  • A local memory may be connected to the processor 103 and/or the display module 101 through an interface such as USB or SATA, while a memory deployed on a remote server may be connected to the processor 103 or the display module 101 over the Internet.
  • Those skilled in the art can store the one or more three-dimensional images flexibly in different locations according to actual needs.
  • Referring to FIG. 2, this embodiment provides a method for implementing the three-dimensional image display device based on eye tracking shown in FIG. 1.
  • The device includes a display module 101, a shooting module 102, and a processor 103.
  • the method of this embodiment includes the following steps:
  • The three-dimensional images include a three-dimensional model of at least one place and/or a three-dimensional model of at least one item.
  • the three-dimensional image may be a three-dimensional model of an exhibition hall, and the exhibition hall may contain three-dimensional models of one or more items.
  • the user can see through the display module 101 that several items are placed in an exhibition hall.
  • the three-dimensional image may be a three-dimensional model of a single item, such as a three-dimensional model of a car.
  • the first time period can be set according to actual needs, for example, it can be set to 0.1 seconds, 0.5 seconds, 1 second, etc.
  • The shooting module 102 should take at least two photos, for example two photos within 0.1 seconds, or 15 photos within 0.5 seconds.
  • The shooting module may be a camera or a video recorder.
  • In the present invention, a photo is understood to include an image frame of a video, so shooting a video is also considered to be taking several photos.
  • Those skilled in the art can set the length of the first time period, and the number of photos the shooting module 102 takes within it, according to the chosen shooting module 102 and the speed at which the human eye moves.
  • The processor 103 obtains the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module 102 in the first time period.
  • The processor 103 may trace the user's eyeball trajectory from all photos taken in the first time period: it identifies the eyeball position in each photo in turn, plots a curve through those positions in chronological order in the same coordinate system, and uses this curve as the eyeball trajectory.
  • Alternatively, the processor 103 may trace the trajectory from only a subset of the photos taken during the first time period, where the subset may be extracted from all the photos either at random or according to a rule.
  • For example, suppose the shooting module 102 takes photos numbered 1 to 10.
  • Those skilled in the art can use a random function to extract at least two of the ten photos, may always extract the two photos numbered 1 and 10, or may extract all even-numbered photos.
  • In step S204, the processor 103 changes the state of the three-dimensional image according to the eyeball trajectory.
  • Changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
  • The processor 103 may treat the eyeball trajectory in the first time period as the user's input signal for that period, and change the state of the three-dimensional image according to this input signal.
  • For example, where the three-dimensional image is an exhibition hall, when the user's eyeball moves to the right, the processor 103 can move the view of the left (or right) side of the hall to the center of the display module 101.
  • Likewise, the user can use the eyeball trajectory to change the viewing angle of a product's three-dimensional model.
  • The product may be one that can change shape, such as a car whose door can open.
  • The user may open or close the door using a specific eyeball trajectory; for example, those skilled in the art may use an "o"-shaped eyeball trajectory as the input signal for opening or closing the door.
  • The user can also enlarge or shrink an item through a specific eyeball trajectory.
  • Here, size refers to the relative size of the three-dimensional image.
  • In other embodiments, the user can change an item's color through a specific eyeball trajectory; for example, moving the eyeball to the left or right can switch the item's color.
  • In three-dimensional images that contain both places and items, the processor 103 may determine which object the user has selected for operation by recognizing a specific eyeball trajectory of the user. For example, the user may first move the object to be operated to the center of the display module, and then designate a certain item in the three-dimensional image as the operation target through a specific eyeball trajectory. Once an item is selected as the operation target, the processor 103 changes the state of that item only; that is, the present invention can operate on a part of the three-dimensional image.
  • To reduce the amount of data processing and increase the processing speed of the processor 103, those skilled in the art may select only a subset of the photos taken in the first time period as the data to be processed in step S203, but at least two photos should be selected. If the eyeball cannot be recognized in some photos, a mechanism for replacing the selected photos should be provided to ensure that the two selected photos are valid.
  • To reduce the data volume further, those skilled in the art may, in step S203, select only two of the photos taken in the first time period, identify the position coordinates of the eyeball in both selected photos in the same coordinate system, and subtract the eyeball's position coordinate in the earlier photo from its coordinate in the later photo to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
  • The method may further include step S205: obtaining, by the processor 103, the number of times the user blinks in a second time period from multiple photos taken by the shooting module 102 in the second time period, and changing the state of the three-dimensional image according to the number of blinks.
  • The processor 103 may examine the photos taken in the second time period in sequence, attempting to recognize the eyeball in each, and mark every photo in which no eyeball is recognized.
  • An isolated unmarked photo, or a run of consecutive unmarked photos, serves as a counting point, and the number of counting points is the number of blinks.
  • An isolated unmarked photo is one whose immediately preceding and following photos are both marked.
  • The processor 103 may change the state of the three-dimensional image according to the number of blinks in the second time period. Adding a blink-count judgment to the eyeball trajectory yields more ways to interact: for example, the user can select an item in the three-dimensional image by blinking twice and change the viewing angle through the eyeball trajectory.
  • Before step S201, the method may further include step S200: storing the one or more three-dimensional images through a memory 104.
  • The memory 104 may be a local memory or a memory deployed on a remote server; those skilled in the art may store the one or more three-dimensional images in different locations according to actual needs.
  • Embodiments of the present invention may be implemented by computer hardware, by a combination of hardware and software, or by computer instructions stored in a non-transitory computer-readable memory.
  • The methods can be implemented as computer programs using standard programming techniques, including a non-transitory computer-readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner according to the methods and drawings described in the embodiments.
  • Each program can be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system; if desired, the program can instead be implemented in assembly or machine language. In any case, the language can be compiled or interpreted. In addition, the program can run on an application-specific integrated circuit programmed for this purpose.
  • The processes (or variations and/or combinations thereof) described herein may be executed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or by a combination thereof.
  • The computer program includes a plurality of instructions executable by one or more processors.
  • Further, the methods can be implemented on any type of computing platform, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, or a computer platform that is separate, integrated, or in communication with a charged-particle tool or other imaging device.
  • Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM, or ROM, so that it can be read by a programmable computer; when the storage medium or device is read by the computer, it can be used to configure and operate the computer to perform the processes described herein.
  • machine-readable code may be transmitted over a wired or wireless network.
  • Machine-readable code includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above.
  • the invention described herein includes these and other different types of non-transitory computer-readable storage media.
  • the invention also includes the computer itself.
  • a computer program can be applied to the input data to perform the functions described herein, thereby transforming the input data to generate output data stored in a non-volatile memory.
  • the output information can also be applied to one or more output devices such as a display.
  • the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects generated on a display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional image display device based on eye tracking and an implementation method thereof. The device includes: a display module for displaying one or more three-dimensional images, the three-dimensional images including a three-dimensional model of at least one place and/or a three-dimensional model of at least one item; a shooting module for taking at least two photos of a user within a first time period; and a processor for obtaining the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period and changing the state of the three-dimensional image according to the eyeball trajectory. By taking multiple photos of the user, the invention recognizes the user's eyeball trajectory and changes the state of the three-dimensional image accordingly, realizing contactless interaction and allowing the three-dimensional image display module to be used in a wider range of settings. The invention can be widely applied in the field of three-dimensional image display equipment.

Description

Three-dimensional image display device based on eye tracking and implementation method thereof
Technical Field
The present invention relates to the field of three-dimensional image display equipment, and in particular to a three-dimensional image display device based on eye tracking and a method for implementing the same.
Background Art
With the development of three-dimensional imaging technology, more and more three-dimensional images appear in our daily lives. Product manufacturers can now scan their products into three-dimensional models with three-dimensional scanning equipment and show them to customers; for example, a car manufacturer can scan a car into a three-dimensional model, and customers can browse the product from different angles on various electronic devices. Merchants therefore need not worry about their shops lacking the floor space to display their products: they can show customers three-dimensional models of their products by placing three-dimensional image devices in the shop or at an exhibition site.
At present, such products mainly interact through a touch screen. If the three-dimensional image device needs to be placed somewhere difficult for people to touch, for example at a height or behind a fence, it cannot meet the need for customer interaction.
Summary of the Invention
To solve the above technical problem, the object of the present invention is to provide a three-dimensional image display device based on eye tracking that enables contactless operation, and a method for implementing the same.
The first technical solution adopted by the present invention is:
A three-dimensional image display device based on eye tracking, including:
a display module for displaying one or more three-dimensional images, the three-dimensional images including a three-dimensional model of at least one place and/or a three-dimensional model of at least one item;
a shooting module for taking at least two photos of a user within a first time period;
a processor for obtaining the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period, and for changing the state of the three-dimensional image according to the eyeball trajectory;
changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
Further, the processor is specifically configured to obtain the user's eyeball trajectory in the first time period from two of the at least two photos taken by the shooting module in the first time period.
Further, when obtaining the eyeball trajectory from two of the at least two photos, the processor first selects two of the photos, then identifies a first position coordinate of the user's eyeball in the earlier-taken of the two selected photos and a second position coordinate of the user's eyeball in the later-taken photo, and finally subtracts the first position coordinate from the second position coordinate to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
Further, the shooting module is also used to take multiple photos of the user within a second time period;
the processor is also used to obtain the number of times the user blinks in the second time period from the multiple photos taken by the shooting module in the second time period, and to change the state of the three-dimensional image according to the number of blinks.
Further, the device includes a memory for storing the one or more three-dimensional images; the memory may be a local memory or a memory deployed on a remote server.
The second technical solution adopted by the present invention is:
A method for implementing a three-dimensional image display device based on eye tracking, including the following steps:
displaying, through a display module, one or more three-dimensional images, the three-dimensional images including a three-dimensional model of at least one place and/or a three-dimensional model of at least one item;
taking, through a shooting module, at least two photos of a user within a first time period;
obtaining, through a processor, the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period;
changing, through the processor, the state of the three-dimensional image according to the eyeball trajectory;
changing the state of the three-dimensional image includes changing the overall display angle, shape, size, and/or color of the three-dimensional image, and changing the display angle, shape, size, and/or color of a part of the three-dimensional image.
Further, the step of obtaining, through the processor, the user's eyeball trajectory in the first time period from the at least two photos taken by the shooting module in the first time period is specifically:
obtaining, through the processor, the user's eyeball trajectory in the first time period from two of the at least two photos taken by the shooting module in the first time period.
Further, the step of obtaining the eyeball trajectory from two of the at least two photos includes:
selecting two of the at least two photos;
identifying a first position coordinate of the user's eyeball in the earlier-taken of the two selected photos, and a second position coordinate of the user's eyeball in the later-taken photo;
subtracting the first position coordinate from the second position coordinate of the user's eyeball to obtain a vector, which serves as the user's eyeball trajectory in the first time period.
Further, the method includes the following steps:
taking, through the shooting module, multiple photos of the user within a second time period;
obtaining, through the processor, the number of times the user blinks in the second time period from the multiple photos taken by the shooting module in the second time period, and changing the state of the three-dimensional image according to the number of blinks.
Further, the method includes the following step:
storing the one or more three-dimensional images through a memory, which may be a local memory or a memory deployed on a remote server.
The beneficial effect of the present invention is that it recognizes the user's eyeball trajectory by taking multiple photos of the user and changes the state of the three-dimensional image according to that trajectory, realizing contactless interaction and allowing the three-dimensional image display module to be used in a wider range of settings.
Brief Description of the Drawings
FIG. 1 is a block diagram of a three-dimensional image display device based on eye tracking according to a specific embodiment of the present invention;
FIG. 2 is a flowchart of a method for implementing a three-dimensional image display device based on eye tracking according to a specific embodiment of the present invention.
Detailed Description
The concept, specific structure, and technical effects of the present invention are described clearly and completely below with reference to the embodiments and drawings, so that the purpose, solutions, and effects of the present invention can be fully understood.
It should be noted that, unless otherwise specified, when a feature is said to be "fixed" or "connected" to another feature, it may be directly fixed or connected to the other feature, or indirectly fixed or connected through an intermediate feature. In addition, descriptions such as up, down, left, and right used in this disclosure refer only to the mutual positional relationships of the components of this disclosure in the drawings. The singular forms "a", "the", and "said" used in this disclosure are also intended to include the plural forms, unless the context clearly indicates otherwise. Furthermore, unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the art. The terms used in this specification are intended only to describe specific embodiments, not to limit the present invention. The term "and/or" used herein includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish elements of the same type from each other. For example, without departing from the scope of this disclosure, a first element may also be referred to as a second element, and similarly, a second element may also be referred to as a first element. The use of any and all examples or exemplary language ("e.g.", "such as", etc.) provided herein is intended merely to better illuminate embodiments of the invention and does not impose a limitation on the scope of the invention unless otherwise claimed.
参考图1,本实施例公开了一种基于眼球追踪的三维图像显示装置,该三维图像显示装置包括一显示模块101、一拍摄模块102和一处理器103。
其中,显示模块101用于显示一个或多个三维图像,所述三维图像中包括至少一个场所的三维模型和/或至少一个物品的三维模型。显示模块101可以是显示屏、投影仪或者空气成像设备等的显示模块。在一些实施例中,三维图像可以是一个展厅的三维模型,所述展厅中可以包含一个或者多个物品的三维模型,例如,用户可以通过显示模块101看到一展厅内放置了若干个物品。在另一些实施例张,三维图像可以是单个物品的三维模型,例如是一个汽车的三维模型。
拍摄模块102用于在一第一时间周期内拍摄用户的至少两张照片。第一时间周期可以根据实际需要进行设置,例如可以设置为0.1秒、0.5秒、1秒等,在第一时间周期内,拍摄模块102应当拍摄至少两张照片,例如其在0.1秒内拍摄两张照片,或者在0.5秒内拍摄15张照片。所述拍摄模块可以是相机或者录像机。此外,在本发明中照片应当包括录像中的图像帧,因此拍摄一段视频也应该认为是拍摄了若干张照片。本领域技术人员可以根据拍摄模块102的选型以及人眼变化的速度来设置第一时间周期的长度以及拍摄模块102在第一时间周期内拍摄的照片数量。
处理器103,用于根据拍摄模块102在所述第一时间周期内所拍摄的至少两张照片得到用户在所述第一时间周期内的眼球轨迹。在一些实施例中,处理器103可以根据在所述第一时间周期内拍摄的所有照片来描绘用户在第一时间周期内的眼球轨迹,即依次识别所有照片中的眼球位置,并按照时间先后顺序在同一坐标中描绘出一曲线,并以该曲线作为眼球轨迹。在另一些实施例中,处理器103可以根据在所述第一时间周期内拍摄的部分照片来描绘用户在第一时间周期内的眼球轨迹,其中,部分照片可以从所有照片中随机抽取或者有规律地抽取,例如,在第一时间周期内拍摄模块102拍摄了编号为1至10的照片,本领域技术人员可以设定一随机函数,在十张照片中抽取至少两张照片。本领域技术人员也可以固定抽取编号为1和10的两张照片。或者本领域技术人员也可以抽取所有编号为偶数的照片。
处理器103还用于根据所述眼球轨迹改变三维图像的状态。处理器103可以将第一时间周期内的眼球轨迹作为用户在第一时间周期内的输入信号,处理器可以根据该输入信号来改变三维图像的状态。所述改变三维图像的状态包括:改变三维图像的整体的显示视角、形状、尺寸和/或颜色,和改变三维图像的局部的显示视角、形状、尺寸和/或颜色。在一些实施例中,三维图像是一个展厅,当用户的眼球往右移动时,处理器103可以将展厅左边(或者右边)的视角移动到显示模块101的中心,同理,用户可以通过眼球轨迹来改变一个商品的三维模型的视角。在一些实施例中,所述商品是可以改变形状的商品,例如一个可以打开车门的汽车。用户可以通过特定的眼球轨迹来实现打开车门或者关闭车门,例如,本领域技术人员可以将“o”型眼球轨迹作为打开车门或者关闭车门的输入信号。在一些实施例中,用户也可以通过特定的眼球轨迹来将物品进行放大或者缩小,在本发明中,尺寸是指三维图像的相对大小。在另一些实施例中,用户可以通过特定的眼球轨迹来改变物品的颜色,例如眼球向左或者向右运动可以切换物品的颜色。此外,在一些同时包含场所或者物品的三维图像中,处理器103可以通过识别用户的特定眼球轨迹来确定用户所选择操作的对象,例如,用户可以先将需要操作的对象移动到显示模块的中心,然后通过一个特定眼球轨迹来确定三维图像中的某个物品作为操作的对象,当选取某个物品作为操作的对象后,处理器103只会改变该物品的状态,即本发明可以针对三维图像中的局部进行操作。
As a preferred embodiment, to reduce the amount of data to be processed and increase the processing speed of the processor 103, those skilled in the art may select only some of the photos taken within the first time period as the objects of data processing, provided at least two photos are selected. If the eyeball cannot be identified in some of the selected photos, a mechanism for replacing those photos should be provided to ensure the validity of the selected photos.
As a preferred embodiment, to further reduce the amount of data to be processed and increase the processing speed of the processor 103, those skilled in the art may select only two of the photos taken within the first time period. In the same coordinate system, the eyeball position coordinates in the two selected photos are identified, and the eyeball position coordinates in the earlier photo are subtracted from those in the later photo to obtain a vector, which may serve as the user's eyeball trajectory within the first time period.
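The two-photo subtraction above reduces the trajectory to a single displacement vector. A minimal sketch, assuming the eyeball coordinates have already been identified in a shared image coordinate system:

```python
def trajectory_vector(earlier_xy, later_xy):
    """Two-photo eyeball trajectory: later position minus earlier position."""
    (x1, y1), (x2, y2) = earlier_xy, later_xy
    return (x2 - x1, y2 - y1)      # displacement vector used as the trajectory

# Eyeball at (120, 80) in the earlier photo and (150, 80) in the later one:
print(trajectory_vector((120, 80), (150, 80)))  # (30, 0) -> eyeball moved right
```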
As a preferred embodiment, the processor 103 is further configured to obtain the number of times the user blinks within a second time period from the photos taken by the capture module 102 within that period, and to change the state of the three-dimensional image according to the blink count. The processor 103 may identify the eyeball in each photo taken within the second time period in turn and mark the photos in which no eyeball is identified. An isolated unmarked photo, or a consecutive run of unmarked photos, is taken as one counting point, and the number of counting points is the number of blinks. An isolated unmarked photo is one whose preceding and following photos are both marked. For example, if 10 photos are taken within the second time period, with 1 denoting marked and 0 denoting unmarked, and the 10 photos are labeled 1010001011, the user can be considered to have blinked three times within the second time period. The processor 103 can then change the state of the three-dimensional image according to the blink count within the second time period. Adding the blink-count decision provides more interaction modes in combination with the eyeball trajectory; for example, the user may blink twice to select an object in the three-dimensional image and use the eyeball trajectory to change the viewing angle of the three-dimensional image.
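The counting rule above — each isolated unmarked photo or run of consecutive unmarked photos is one counting point — amounts to counting the runs of 0s in the mark string. A minimal sketch (runs at the edges of the sequence are also counted here):

```python
from itertools import groupby

def count_blinks(marks):
    """Count blinks from a mark string.

    '1' = photo marked (no eyeball identified), '0' = unmarked.
    Each maximal run of '0's is one counting point, i.e. one blink.
    """
    return sum(1 for bit, _run in groupby(marks) if bit == "0")

print(count_blinks("1010001011"))  # 3, matching the example above
```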
As a preferred embodiment, the apparatus may further include a memory 104 configured to store the one or more three-dimensional images. The memory 104 may be local storage such as a hard disk or a USB flash drive, or storage deployed on a remote server. Local storage may be connected to the processor 103 and/or the display module 101 through an interface such as USB or SATA, while storage deployed on a remote server may be connected to the processor 103 or the display module 101 over the Internet. Those skilled in the art can flexibly store the one or more three-dimensional images in different locations according to actual needs.
Referring to Fig. 2, this embodiment provides an implementation method for the eyeball-tracking-based three-dimensional image display apparatus shown in Fig. 1. The apparatus includes a display module 101, a capture module 102, and a processor 103. The method of this embodiment includes the following steps:
S201: display one or more three-dimensional images through the display module 101. Each three-dimensional image includes a three-dimensional model of at least one venue and/or a three-dimensional model of at least one object. In some embodiments, the three-dimensional image may be a three-dimensional model of an exhibition hall, which may contain three-dimensional models of one or more objects; for example, through the display module 101 the user may see several objects placed in an exhibition hall. In other embodiments, the three-dimensional image may be a three-dimensional model of a single object, such as a three-dimensional model of a car.
S202: take at least two photos of the user within a first time period through the capture module 102. The first time period can be set according to actual needs, for example 0.1 s, 0.5 s, or 1 s. Within the first time period, the capture module 102 should take at least two photos, for example two photos within 0.1 s or 15 photos within 0.5 s. The capture module may be a camera or a video recorder. In the present invention, "photos" also include the image frames of a video, so recording a video segment is likewise considered taking a number of photos. Those skilled in the art can set the length of the first time period and the number of photos taken within it according to the chosen capture module 102 and the speed at which the human eye moves.
S203: obtain, through the processor 103, the user's eyeball trajectory within the first time period from the at least two photos taken by the capture module 102 within that period. In some embodiments, the processor 103 may plot the user's eyeball trajectory from all of the photos taken within the first time period, i.e., identify the eyeball position in every photo in turn, plot a curve in the same coordinate system in chronological order, and use that curve as the eyeball trajectory. In other embodiments, the processor 103 may plot the trajectory from only some of the photos taken within the first time period, where the subset may be drawn from all photos either randomly or according to a rule. For example, if the capture module 102 took photos numbered 1 to 10 within the first time period, those skilled in the art may define a random function that selects at least two of the ten photos, may always select the two photos numbered 1 and 10, or may select all even-numbered photos.
S204: the processor 103 changes the state of the three-dimensional image according to the eyeball trajectory. Changing the state of the three-dimensional image includes changing the display viewing angle, shape, size and/or color of the three-dimensional image as a whole, and changing the display viewing angle, shape, size and/or color of a part of it. The processor 103 may treat the eyeball trajectory within the first time period as the user's input signal for that period and change the state of the three-dimensional image accordingly. In some embodiments, the three-dimensional image is an exhibition hall: when the user's eyeball moves to the right, the processor 103 may move the view of the left (or right) side of the hall to the center of the display module 101; in the same way, the user can change the viewing angle of the three-dimensional model of a product through the eyeball trajectory. In some embodiments, the product is one whose shape can change, such as a car whose doors can open. The user can open or close the car door through a specific eyeball trajectory; for example, those skilled in the art may use an "o"-shaped eyeball trajectory as the input signal for opening or closing the door. In some embodiments, the user can also enlarge or shrink an object through a specific eyeball trajectory; in the present invention, size refers to the relative size of the three-dimensional image. In other embodiments, the user can change the color of an object through a specific eyeball trajectory; for example, moving the eyeball left or right may cycle through the object's colors. Furthermore, in three-dimensional images that contain both a venue and objects, the processor 103 can identify a specific eyeball trajectory to determine which object the user has selected for operation. For example, the user may first move the object of interest to the center of the display module and then confirm it as the operation target through a specific eyeball trajectory; once an object has been selected, the processor 103 changes only the state of that object, i.e., the invention can operate on a local part of the three-dimensional image.
As a preferred embodiment, to reduce the amount of data to be processed and increase the processing speed of the processor 103, those skilled in the art may, in step S203, select only some of the photos taken within the first time period as the objects of data processing, provided at least two photos are selected. If the eyeball cannot be identified in some of the selected photos, a mechanism for replacing those photos should be provided to ensure the validity of the selected photos.
As a preferred embodiment, to further reduce the amount of data to be processed and increase the processing speed of the processor 103, those skilled in the art may, in step S203, select only two of the photos taken within the first time period. In the same coordinate system, the eyeball position coordinates in the two selected photos are identified, and the eyeball position coordinates in the earlier photo are subtracted from those in the later photo to obtain a vector, which may serve as the user's eyeball trajectory within the first time period.
As a preferred embodiment, the method further includes step S205: obtaining, through the processor 103, the number of times the user blinks within a second time period from the photos taken by the capture module 102 within that period, and changing the state of the three-dimensional image according to the blink count. The processor 103 may identify the eyeball in each photo taken within the second time period in turn and mark the photos in which no eyeball is identified. An isolated unmarked photo, or a consecutive run of unmarked photos, is taken as one counting point, and the number of counting points is the number of blinks. An isolated unmarked photo is one whose preceding and following photos are both marked. For example, if 10 photos are taken within the second time period, with 1 denoting marked and 0 denoting unmarked, and the 10 photos are labeled 1010001011, the user can be considered to have blinked three times within the second time period. The processor 103 can then change the state of the three-dimensional image according to the blink count within the second time period. Adding the blink-count decision provides more interaction modes in combination with the eyeball trajectory; for example, the user may blink twice to select an object in the three-dimensional image and use the eyeball trajectory to change the viewing angle of the three-dimensional image.
As a preferred embodiment, the method further includes, before step S201, step S200: storing the one or more three-dimensional images in a memory 104, which may be local storage or storage deployed on a remote server. Those skilled in the art can store the one or more three-dimensional images in different locations according to actual needs.
It should be appreciated that embodiments of the invention may be realized or implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in non-transitory computer-readable memory. The methods may be implemented in computer programs using standard programming techniques, including a non-transitory computer-readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the specific embodiments. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. If desired, however, the program may be implemented in assembly or machine language. In any case, the language may be a compiled or interpreted language. Furthermore, for this purpose the program can run on a programmed application-specific integrated circuit.
Moreover, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or a combination thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the methods may be implemented in any type of suitable, operatively connected computing platform, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or a platform in communication with a charged-particle tool or other imaging device, and so on. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into the computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM, ROM, etc., such that it may be read by a programmable computer and, when the storage medium or device is read by the computer, can be used to configure and operate the computer to perform the processes described herein. Moreover, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other various types of non-transitory computer-readable storage media when such media contain instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein, thereby transforming the input data to generate output data stored in non-volatile memory. The output information may also be applied to one or more output devices such as a display. In preferred embodiments of the invention, the transformed data represents physical and tangible objects, including a particular visual depiction of physical and tangible objects produced on the display.
The above are merely preferred embodiments of the invention, and the invention is not limited to the foregoing implementations. As long as the same means achieve the technical effects of the invention, any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention. Within that scope of protection, various modifications and variations of its technical solutions and/or implementations are possible.

Claims (10)

  1. A three-dimensional image display apparatus based on eyeball tracking, characterized by comprising:
    a display module configured to display one or more three-dimensional images, the three-dimensional images including a three-dimensional model of at least one venue and/or a three-dimensional model of at least one object;
    a capture module configured to take at least two photos of a user within a first time period; and
    a processor configured to obtain the user's eyeball trajectory within the first time period from the at least two photos taken by the capture module within the first time period, and to change the state of the three-dimensional image according to the eyeball trajectory;
    wherein changing the state of the three-dimensional image includes changing the display viewing angle, shape, size and/or color of the three-dimensional image as a whole, and changing the display viewing angle, shape, size and/or color of a part of the three-dimensional image.
  2. The eyeball-tracking-based three-dimensional image display apparatus according to claim 1, characterized in that the processor is specifically configured to obtain the user's eyeball trajectory within the first time period from two of the at least two photos taken by the capture module within the first time period.
  3. The eyeball-tracking-based three-dimensional image display apparatus according to claim 2, characterized in that, when obtaining the user's eyeball trajectory within the first time period from two of the at least two photos taken by the capture module within the first time period, the processor first selects two photos from the at least two photos, then identifies first position coordinates of the user's eyeball in the earlier-taken of the two selected photos and second position coordinates of the user's eyeball in the later-taken of the two selected photos, and finally subtracts the first position coordinates from the second position coordinates of the user's eyeball to obtain a vector, the vector serving as the user's eyeball trajectory within the first time period.
  4. The eyeball-tracking-based three-dimensional image display apparatus according to any one of claims 1 to 3, characterized in that the capture module is further configured to take a plurality of photos of the user within a second time period; and
    the processor is further configured to obtain the number of times the user blinks within the second time period from the plurality of photos taken by the capture module within the second time period, and to change the state of the three-dimensional image according to the blink count.
  5. The eyeball-tracking-based three-dimensional image display apparatus according to any one of claims 1 to 3, characterized by comprising a memory configured to store the one or more three-dimensional images, the memory being local storage or storage deployed on a remote server.
  6. An implementation method for a three-dimensional image display apparatus based on eyeball tracking, characterized by comprising the following steps:
    displaying one or more three-dimensional images through a display module, the three-dimensional images including a three-dimensional model of at least one venue and/or a three-dimensional model of at least one object;
    taking at least two photos of a user within a first time period through a capture module;
    obtaining, through a processor, the user's eyeball trajectory within the first time period from the at least two photos taken by the capture module within the first time period; and
    changing, through the processor, the state of the three-dimensional image according to the eyeball trajectory;
    wherein changing the state of the three-dimensional image includes changing the display viewing angle, shape, size and/or color of the three-dimensional image as a whole, and changing the display viewing angle, shape, size and/or color of a part of the three-dimensional image.
  7. The implementation method for the eyeball-tracking-based three-dimensional image display apparatus according to claim 6, characterized in that the step of obtaining, through the processor, the user's eyeball trajectory within the first time period from the at least two photos taken by the capture module within the first time period is specifically:
    obtaining, through the processor, the user's eyeball trajectory within the first time period from two of the at least two photos taken by the capture module within the first time period.
  8. The implementation method for the eyeball-tracking-based three-dimensional image display apparatus according to claim 7, characterized in that the step of obtaining, through the processor, the user's eyeball trajectory within the first time period from two of the at least two photos taken by the capture module within the first time period comprises:
    selecting two photos from the at least two photos;
    identifying first position coordinates of the user's eyeball in the earlier-taken of the two selected photos, and identifying second position coordinates of the user's eyeball in the later-taken of the two selected photos; and
    subtracting the first position coordinates from the second position coordinates of the user's eyeball to obtain a vector, the vector serving as the user's eyeball trajectory within the first time period.
  9. The implementation method for the eyeball-tracking-based three-dimensional image display apparatus according to any one of claims 6 to 8, characterized by further comprising the following steps:
    taking a plurality of photos of the user within a second time period through the capture module; and
    obtaining, through the processor, the number of times the user blinks within the second time period from the plurality of photos taken by the capture module within the second time period, and changing the state of the three-dimensional image according to the blink count.
  10. The implementation method for the eyeball-tracking-based three-dimensional image display apparatus according to any one of claims 6 to 8, characterized by further comprising the following step:
    storing the one or more three-dimensional images in a memory, the memory being local storage or storage deployed on a remote server.
PCT/CN2018/101061 2018-06-15 2018-08-17 Three-dimensional image display apparatus based on eyeball tracking and implementation method therefor WO2019237499A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810623038.2A CN108962182A (zh) 2018-06-15 2018-06-15 Three-dimensional image display apparatus based on eyeball tracking and implementation method therefor
CN201810623038.2 2018-06-15

Publications (1)

Publication Number Publication Date
WO2019237499A1 true WO2019237499A1 (zh) 2019-12-19

Family

ID=64489186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/101061 WO2019237499A1 (zh) 2018-06-15 2018-08-17 Three-dimensional image display apparatus based on eyeball tracking and implementation method therefor

Country Status (2)

Country Link
CN (1) CN108962182A (zh)
WO (1) WO2019237499A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104391567A * 2014-09-30 2015-03-04 深圳市亿思达科技集团有限公司 Human-eye-tracking-based display control method for a three-dimensional holographic virtual object
US20150070470A1 * 2013-09-10 2015-03-12 Board Of Regents, The University Of Texas System Apparatus, System, and Method for Mobile, Low-Cost Headset for 3D Point of Gaze Estimation
CN105955471A * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Virtual reality interaction method and apparatus
CN106407772A * 2016-08-25 2017-02-15 北京中科虹霸科技有限公司 Human-computer interaction and identity authentication apparatus suitable for virtual reality devices, and method therefor
CN106462743A * 2014-05-09 2017-02-22 谷歌公司 Systems and methods for using eye signals with secure mobile communications
CN106445173A * 2016-11-25 2017-02-22 四川赞星科技有限公司 Target body state transition method and apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140055987A (ko) * 2012-10-31 2014-05-09 김대영 Image display control method
CN103336581A (zh) * 2013-07-30 2013-10-02 黄通兵 Human-computer interaction method and system designed based on human eye-movement characteristics
GB2558193B (en) * 2016-09-23 2022-07-20 Displaylink Uk Ltd Compositing an image for display
CN107562208A (zh) * 2017-09-27 2018-01-09 上海展扬通信技术有限公司 Vision-based intelligent terminal control method and intelligent terminal control system
CN107885325B (zh) * 2017-10-23 2020-12-08 张家港康得新光电材料有限公司 Naked-eye 3D display method and control system based on human eye tracking
CN108090463B (zh) * 2017-12-29 2021-10-26 腾讯科技(深圳)有限公司 Object control method and apparatus, storage medium, and computer device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150070470A1 (en) * 2013-09-10 2015-03-12 Board Of Regents, The University Of Texas System Apparatus, System, and Method for Mobile, Low-Cost Headset for 3D Point of Gaze Estimation
CN106462743A * 2014-05-09 2017-02-22 谷歌公司 Systems and methods for using eye signals with secure mobile communications
CN104391567A * 2014-09-30 2015-03-04 深圳市亿思达科技集团有限公司 Human-eye-tracking-based display control method for a three-dimensional holographic virtual object
CN105955471A * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Virtual reality interaction method and apparatus
CN106407772A * 2016-08-25 2017-02-15 北京中科虹霸科技有限公司 Human-computer interaction and identity authentication apparatus suitable for virtual reality devices, and method therefor
CN106445173A * 2016-11-25 2017-02-22 四川赞星科技有限公司 Target body state transition method and apparatus

Also Published As

Publication number Publication date
CN108962182A (zh) 2018-12-07

Similar Documents

Publication Publication Date Title
CN114727022B (zh) Tracking objects of interest in omnidirectional video
JP7488435B2 (ja) AR-enabled labeling using aligned CAD models
JP5988225B2 (ja) Monitoring device and monitoring method
WO2017204596A1 (ko) Facial contour correction method and device
WO2015143777A1 (zh) Face-recognition-based advertisement classification, matching, and push method and system
WO2019050360A1 (en) Electronic device and method for automatically segmenting to be human in an image
CN103688273B (zh) Assisting low-vision users with image capture and image review
WO2019179411A1 (zh) Three-dimensional advertisement display system and method
WO2018074821A1 (ko) User terminal device and computer-implemented method for synchronizing a camera's movement path and movement time using a touch user interface
JP2007122400A (ja) Authentication device, program, and recording medium
TWI420440B (zh) Object display system and method
KR101643917B1 (ko) Smart fitting device based on real-life images
KR20130112578A (ko) Apparatus and method for providing user-based augmented reality information
KR20220043004A (ko) Occluded image detection method, device, and medium
WO2017038035A1 (ja) Activity history information generation device, system, and method
JP2014149716A (ja) Object tracking device and method
WO2017164584A1 (ko) HMD device capable of gesture-based user authentication, and gesture-based user authentication method for the HMD device
US10540816B2 (en) Information display system
WO2019237499A1 (zh) Three-dimensional image display apparatus based on eyeball tracking and implementation method therefor
WO2015005102A1 (ja) Image processing device, image processing method, and image processing program
WO2023191371A1 (ko) Image-based barcode recognition method and system
WO2011145180A1 (ja) Pointer information processing device, pointer information processing program, and conference system
WO2018192093A1 (zh) Scene modeling method and apparatus
US20230237696A1 (en) Display control apparatus, display control method, and recording medium
KR102103614B1 (ko) Method for removing shadows in a front-projection environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18922162

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/03/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18922162

Country of ref document: EP

Kind code of ref document: A1