WO2014029229A1 - Display control method, apparatus and terminal - Google Patents

Display control method, apparatus and terminal

Info

Publication number
WO2014029229A1
WO2014029229A1 PCT/CN2013/077509
Authority
WO
WIPO (PCT)
Prior art keywords
reference object
coordinates
display control
user
facial image
Prior art date
Application number
PCT/CN2013/077509
Other languages
English (en)
French (fr)
Inventor
强威
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Priority to EP13831638.5A (EP2879020B1)
Priority to US14/421,067 (US20150192990A1)
Publication of WO2014029229A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 - Head tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 - Detection arrangements using opto-electronic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/162 - Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • The present invention relates to the field of terminal display applications, and in particular to a display control method, apparatus and terminal. Background art
  • At present, the main means of controlling the content display of a terminal device is the keyboard, mouse or touch screen, for example turning pages forward/backward or up/down and zooming fonts or pictures.
  • In order to solve the problem that current display control technology must depend on manual operation, the present invention provides a display control method, apparatus and terminal.
  • The present invention provides a display control method, apparatus and terminal that control the terminal display based on facial actions.
  • In one embodiment, the display control method provided by the present invention includes the following steps: periodically acquiring a facial image of the user;
  • calculating reference object coordinates from the facial image, and performing an operation according to the reference object coordinates and preset coordinates.
  • The reference object in the above embodiment is any point in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
  • The step of performing an operation according to the reference object coordinates and the preset coordinates includes two implementations:
  • the operation is performed according to the change of the reference object's motion vector, or of its spatial position, within a preset time period.
  • The specific steps of calculating the reference object coordinates from the facial image include: acquiring the RGB component images of the facial image, selecting the red component image, and subtracting the component value of each point from 255 to obtain an inverted image;
  • performing summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions;
  • determining the coordinates of the reference object according to the peak coordinates.
  • The present invention also provides a display control device based on facial images.
  • The display control device includes an acquisition module, a processing module and an execution module:
  • a processing module configured to calculate reference object coordinates from the facial image, compare them with preset coordinates, and output the processing result to the execution module;
  • an execution module configured to perform an operation according to the processing result.
  • The present invention also provides a display control terminal; in one embodiment, the display control terminal includes a sensing device, a display device and the display control device provided by the present invention; the display control device periodically acquires the user's facial image through the sensing device, calculates reference object coordinates from the facial image, and controls the display of the display device according to the reference object coordinates and a preset value.
  • A technology for controlling terminal display based on the user's facial actions is provided.
  • The technology performs display control based on the user's facial image, completely freeing the user's hands.
  • It is controlled by the relative coordinates of a reference object on the user's face, and the user can set the reference object according to actual needs, giving the user diverse personalized choices.
  • The working principle is simple: control of the terminal display is realized merely from changes of the reference object's spatial position or motion vector, so the hardware requirements on the terminal device are low. The technology can also realize control from changes of the user's pupil position during reading, which is convenient and fast.
  • In summary, through the implementation of the present invention, the terminal user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, improving the user experience.
  • FIG. 1 is a schematic structural diagram of a display control terminal 1 according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a display control device 12 according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of the processing module 122 in the display control device 12 in a preferred embodiment of the present invention.
  • FIG. 4 is a flowchart of a display control method in an embodiment of the present invention.
  • FIG. 5a is a flowchart of a method for positioning a reference object according to an embodiment of the present invention.
  • FIG. 5b is a schematic diagram of a facial image according to an embodiment of the present invention.
  • FIG. 6a is a flowchart of a display control method in an embodiment of the present invention.
  • FIG. 6b is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention.
  • FIG. 6c is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention.
  • FIG. 6d is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention.
  • FIG. 7a is a flowchart of a display control method according to an embodiment of the present invention.
  • FIG. 7b is a schematic diagram of a motion vector change of a reference object according to an embodiment of the present invention.
  • FIG. 7c is a schematic diagram of a motion vector change of a reference object according to an embodiment of the present invention.
  • FIG. 7d is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention.
  • FIG. 7e is a schematic diagram of a motion vector change of a reference object according to an embodiment of the present invention.
  • FIG. 7f is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention.
  • FIG. 7g is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention. Detailed description
  • The present invention provides a new display control technology, which monitors the user's face (or head) in real time through the terminal device.
  • The display of the terminal device is controlled by the change in position or motion vector between the reference object's current coordinates and the set coordinates.
  • FIG. 1 is a schematic structural diagram of a display control terminal 1 according to an embodiment of the present invention.
  • The display control terminal 1 provided by the present invention includes a sensing device 11 configured to sense the user's facial (or head) motion, a display device 13 configured to display content, and a display control device 12 configured to control the displayed content.
  • The sensing device 11 includes, but is not limited to, a camera, an infrared sensing device and other sensing devices.
  • The display device 13 includes, but is not limited to, a mobile phone screen, a computer monitor, a projector screen, an indoor/outdoor large LED screen, and the like.
  • FIG. 2 is a block diagram showing the configuration of the display control device 12 in the display control terminal 1 shown in FIG. 1.
  • The display control device 12 included in the display control terminal 1 in the above embodiment includes an acquisition module 121, a processing module 122 and an execution module 123, wherein
  • the acquisition module 121 is configured to periodically acquire a facial image of the user and transmit the acquired facial image to the processing module 122;
  • the processing module 122 is configured to calculate reference object coordinates from the facial image, compare them with preset coordinates, and output the processing result to the execution module 123;
  • the execution module 123 is configured to perform an operation according to the processing result transmitted by the processing module 122.
  • The acquisition module 121 acquires the user's facial image through the sensing device 11 in the display control terminal 1; the execution module 123 controls the display of the display device 13 in the display control terminal 1.
  • The reference object involved in the foregoing embodiments is any point in the facial image, such as either pupil, the nose tip, or even a marker point on the user's face; the preset coordinates are the spatial coordinates of the selected reference object when the user normally reads the currently displayed content.
  • It should be noted that when the terminal device outputs through a small screen, the spatial coordinates of the reference object may be just one coordinate point, whereas when the terminal device outputs through a medium-sized or large screen, the spatial coordinates of the reference object are a coordinate range; of course, the type of the reference object's spatial coordinates does not affect the implementation of the present invention.
  • FIG. 3 is a block diagram showing the structure of the processing module 122 in the display control device 12 shown in FIG. 2.
  • The processing module 122 of the display control device 12 shown in FIG. 2 may include a first processing unit 1221, a second processing unit 1222, a calculation unit 1223 and a storage unit 1224, wherein
  • the first processing unit 1221 is configured to calculate the motion vector of the reference object from the reference object coordinates and the preset coordinates, and output the processing result according to the change of the reference object's motion vector within a preset time period;
  • the second processing unit 1222 is configured to calculate the spatial position of the reference object from the reference object coordinates and the preset coordinates, and output the processing result according to the change of the reference object's spatial position within the preset time period;
  • the calculation unit 1223 is configured to: when the reference object is one or both of the user's pupil center points in the facial image, acquire the RGB component images of the facial image and select the red component image; subtract the component value of each point in the red component image from 255 to obtain an inverted image of the red component image; perform summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions; and determine the reference object coordinates according to the peak coordinates. Of course, the calculation unit 1223 mainly realizes the positioning function of the reference object, and this positioning can also be achieved by other positioning methods;
  • the storage unit 1224 is configured to store the spatial coordinate point or spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device; of course, it may also be configured to store the working records of the display control device 12, which facilitates operations such as user calibration. Meanwhile, when the calculation unit 1223 can provide a data flash-storage function, the functions of the storage unit 1224 can also be implemented by the calculation unit 1223.
  • The first processing unit 1221 and the second processing unit 1222 do not have to exist at the same time; either processing unit can process the reference object coordinates and the preset coordinates, the two units being merely two different data processing mechanisms.
  • FIG. 4 is a flowchart of one embodiment of the display control method, provided by the present invention, applied to the display control terminal 1 shown in FIG. 1.
  • the display control method provided by the present invention includes the following steps:
  • S403: process the reference object coordinates and preset coordinates;
  • S404: perform an operation according to the processing result.
  • The reference object involved in the display control method shown in FIG. 4 is any one or more points in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
  • Step S403 in the display control method shown in FIG. 4 can be implemented in at least two ways:
  • calculating the motion vector of the reference object and performing the operation according to its change within a preset time period, or determining the spatial position of the reference object and performing the operation according to the change of that spatial position within the preset time period.
  • When the reference object is the user's pupil, step S402 in the display control method shown in FIG. 4 includes the following steps: acquiring the RGB component images of the facial image, selecting the red component image, and subtracting the component value of each point from 255 to obtain an inverted image;
  • performing summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions;
  • determining the coordinates of the reference object according to the peak coordinates.
  • The implementation of the present invention mainly includes two aspects: the selection and positioning method of the reference object in the facial image, and the analysis and control method of facial actions; these are described separately below.
  • FIG. 5a is a flowchart of a reference object positioning method in an embodiment of the present invention; FIG. 5b is a schematic diagram of a facial image in an embodiment of the present invention.
  • When selecting the reference object, the user can make a personalized choice as needed (such as the nose tip, a red dot between the eyebrows, or a similar marker);
  • the user can also use the default setting, in which the default reference object is the user's pupil center point.
  • The display control technology provided by the present invention is described below with reference to an embodiment, in which the following assumptions are made:
  • the user has yellow-toned skin and black eyes, the sensing device of the display control terminal is a camera, the display device of the display control terminal is the display screen of a mobile phone, and the user uses the terminal's default reference object.
  • the method for locating the reference object includes the following steps:
  • S501: acquire the user's facial image with the camera; S502: select the red component image (R_img) of the RGB color image of the current facial image as the data to be pre-processed;
  • since the RGB ratio of yellow-toned skin (flesh color) is roughly R=255, G=204, B=102,
  • while the RGB ratio of the pupil center of a black eye is R=0, G=0, B=0,
  • the red color difference is clearly the most prominent, and selecting the red component minimizes the calculation error; of course, other color components can also be selected for the calculation, which is not repeated here; S503: subtract the red component image R_img from 255 to obtain the inverted red-component image RR_img, in which most facial pixels become 0 while the pupil centers (and eyebrows) become 255;
  • S504: perform summation on the inverted red-component image along the X-axis and Y-axis directions respectively; as shown in FIG. 5b, the accumulation along the X-axis direction has two peaks, P_XL and P_XR, which are the center points of the left and right pupils, and the accumulation along the Y-axis direction also has two peaks, P_YU and P_YD, which are the eyebrows and the pupil center points respectively; S505: determine the reference object coordinates by discarding the eyebrow peak P_YU (the eyebrows are always above the eyes) and keeping the lower Y-axis peak P_YD;
  • for users with other combinations of skin and eye colors, the component image of the RGB color image to be processed can be determined from the previously acquired user facial image and an RGB color lookup table (http://www.1141a.com/other/rgb.htm, a web page address); this selection process can easily be implemented by those skilled in the art and is not described here.
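  • To make steps S501 to S505 concrete, a minimal sketch in Python follows, assuming a face image loaded with OpenCV in BGR order; the midpoint split used to separate the two X-axis peaks, the fixed peak-suppression window, and the file name are simplifying assumptions for illustration, not patent text:

        # Hedged sketch of the pupil-localization steps S501-S505.
        import cv2
        import numpy as np

        def locate_pupils(face_bgr):
            r_img = face_bgr[:, :, 2].astype(np.int64)   # S502: red component image
            rr_img = 255 - r_img                         # S503: inverted red image
            col_sums = rr_img.sum(axis=0)                # S504: X-axis accumulation
            row_sums = rr_img.sum(axis=1)                # S504: Y-axis accumulation
            mid = col_sums.size // 2
            p_xl = int(np.argmax(col_sums[:mid]))        # left pupil column (P_XL)
            p_xr = mid + int(np.argmax(col_sums[mid:]))  # right pupil column (P_XR)
            # Two Y-axis peaks are expected: eyebrows (P_YU) and pupils (P_YD).
            first = int(np.argmax(row_sums))
            masked = row_sums.copy()
            masked[max(0, first - 10):first + 10] = 0    # suppress the first peak
            second = int(np.argmax(masked))
            # S505: eyebrows are always above the eyes, so keep the lower peak
            # (the larger row index, since image rows grow downward).
            p_yd = max(first, second)
            return (p_xl, p_yd), (p_xr, p_yd)

        if __name__ == "__main__":
            face = cv2.imread("face.jpg")                # illustrative file name
            print(locate_pupils(face))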
  • The positioning method shown in FIG. 5a can be replaced by other reference object positioning methods, such as spatial coordinate positioning, polar coordinate positioning, infrared position positioning, and the like.
  • Several everyday operation actions are agreed upon first: screen content is moved by turning the head slightly up, down, left or right; for example, to view the next page of the current screen, the user only needs to lower the head slightly, as if trying to see below the screen.
  • The display control terminal provided by the present invention also supports user-defined actions, such as a close operation.
  • The coordinates of the reference object are three-dimensional spatial coordinates, and the preset coordinates are the spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device;
  • the operations comprise eight operations: page turning (moving) up, down, left and right, zooming in and out, confirm and cancel.
  • FIG. 6a is a flowchart of an embodiment of a display control method provided by the present invention. As can be seen from FIG. 6a, in one embodiment, the display control method provided by the present invention includes the following steps:
  • The spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device is calculated;
  • S602: periodically acquire a facial image of the user;
  • since the imaging device has a sampling period, the period for acquiring the user's facial image here defaults to the sampling period of the imaging device;
  • The size of the preset time period is the execution time T of the nodding or head-shaking action set by the user; when the user has not set it, it is the system default time length (the statistically obtained average cycle time of human nodding or head shaking).
  • The starting moment is the moment at which the spatial position of the reference object exceeds the preset coordinates.
  • For example, if the sampling period of the terminal device is t and the nod/shake execution time set by the user (or by system default) is T, then, starting from the moment at which the spatial position of the reference object exceeds the preset coordinate range, the change of the reference object's spatial position is recorded over n sampling periods, where n = T/t.
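  • As a concrete illustration of this windowing arithmetic, the following sketch (Python) buffers n = T/t position samples starting when the reference object first leaves the preset range; the values of T and t and all names are illustrative assumptions, not taken from the patent:

        # Hedged sketch of the sampling window: n = T / t samples are buffered
        # once the reference object leaves the preset coordinate range.
        T = 1.2          # assumed nod/shake execution time in seconds
        t = 0.2          # assumed sampling period of the imaging device
        n = int(T / t)   # samples per window, here 6

        def collect_windows(position_stream, in_preset_range):
            """Yield n-sample windows, each starting when a position exits the range."""
            window = []
            for pos in position_stream:
                if not window and in_preset_range(pos):
                    continue          # still reading normally, no gesture started
                window.append(pos)
                if len(window) == n:
                    yield window      # hand the window to the classifier (S605/S606)
                    window = []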
  • S606: perform an operation according to the change obtained in step S605;
  • the operations include page turning or moving up, down, left and right, zooming, confirm and cancel.
  • The change of the reference object's spatial position obtained in step S605 may be a change curve. Now assume that the length of the preset time period is 6 sampling periods; within this time period, the change curves of the reference object's spatial position are as shown in FIG. 6b and FIG. 6c respectively, where the spatial position change shown in FIG. 6b represents shaking the head (i.e., a cancel operation) and the change shown in FIG. 6c represents a page-right operation.
  • In FIG. 6b and FIG. 6c, the six numbers 1, 2, 3, 4, 5 and 6 respectively represent the position of the reference object in the corresponding facial image of the user.
  • Per FIG. 6d, if the reference object stays within zone one, two, three or four, the page is turned up, down, left or right respectively; if it moves back and forth across zones one, zero and two, this represents a nod; if it moves back and forth across zones three, zero and four, this represents a head shake.
  • In this embodiment, only the case in which the reference object moves in a plane parallel to the display surface is given; in other embodiments, the reference object can also move in a plane perpendicular to the display surface.
  • In that case, three-dimensional coordinates can be used to calculate the spatial position of the reference object and its change; for example, the X axis represents the left-right direction, the Y axis the up-down direction and the Z axis the front-back direction.
  • A decreasing Z coordinate represents the reference object approaching the screen, so the displayed content is zoomed in; conversely, it is zoomed out.
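  • As a concrete illustration of steps S604 to S606, the sketch below maps each buffered position to a zone of FIG. 6d and classifies the window; the rectangular model of the preset range, the zone numbering, the image-coordinate convention (rows grow downward) and the operation names are illustrative assumptions:

        # Hedged sketch of zone-based classification per Fig. 6d.
        def zone(pos, preset):
            """Return the zone of a position: 0 center, 1 up, 2 down, 3 left, 4 right."""
            x0, x1, y0, y1 = preset   # preset range modeled as a rectangle
            x, y = pos
            if y < y0:
                return 1   # above the preset range
            if y > y1:
                return 2   # below the preset range
            if x < x0:
                return 3   # left of the preset range
            if x > x1:
                return 4   # right of the preset range
            return 0       # inside the preset range

        def classify_window(samples, preset):
            zones = set(zone(p, preset) for p in samples)
            if {1, 2} <= zones <= {0, 1, 2}:
                return "confirm"   # back and forth across zones 1, 0, 2: a nod
            if {3, 4} <= zones <= {0, 3, 4}:
                return "cancel"    # back and forth across zones 3, 0, 4: a head shake
            moved = zones - {0}
            single = {1: "page up", 2: "page down", 3: "page left", 4: "page right"}
            if len(moved) == 1:
                return single[moved.pop()]
            return "no-op"

  • A depth (Z) estimate could be thresholded in the same way to trigger the zoom operations described above.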
  • The user can customize the operations represented by each action as needed, and can also define custom operation actions, such as visual perception calibration (when first using the device, the user gazes at the four corners of the screen in turn so that the camera records the free movement distance of the reference object's spatial coordinates within the screen range, the maximum horizontal/vertical coordinates, and the screen position currently locked by the operator's gaze, which improves subsequent operation accuracy) and screen content locking (after the current user's reference object coordinates are analyzed, a content-lock cursor appears on the screen to inform the user of the screen position currently locked by the gaze; if the operator considers the analysis inaccurate, further visual perception calibration can be performed until the gaze-locked screen position is perceived accurately).
  • FIG. 7a is a flowchart of another embodiment of the display control method provided by the present invention.
  • the display control method provided by the present invention includes the following steps:
  • S701: calculate and record the preset coordinates; the same as step S601;
  • S702: periodically acquire a facial image of the user; the same as step S602;
  • S703: calculate the reference object coordinates; the same as step S603;
  • S704: determine the motion vector of the reference object by subtracting the preset coordinates from the calculated reference object coordinates; S705: calculate the change of the reference object's motion vector within the preset time period;
  • S706: perform an operation according to the change obtained in step S705.
  • The change of the reference object's motion vector obtained in step S705 is a change curve. Now assume that the length of the preset time period is 6 sampling periods; within this time period, assume the change curves of the reference object's motion vector are as shown in FIGs. 7b to 7g respectively (the size and direction of each arrow in the figures represent the motion vector of the reference object relative to the preset coordinates in the six facial images), where
  • the motion vector change shown in FIG. 7b represents looking up (i.e., moving up or page up); the change shown in FIG. 7c represents moving right (i.e., moving or paging right); the change shown in FIG. 7d represents looking down (i.e., moving down or page down); the change shown in FIG. 7e represents moving left (i.e., moving or paging left); the change shown in FIG. 7f represents shaking the head (i.e., a cancel or negative operation);
  • the motion vector change shown in FIG. 7g represents a nod (i.e., a confirm or affirmative operation).
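  • As a concrete illustration of steps S704 to S706, a minimal sketch follows: each sample's motion vector is its coordinate minus the preset coordinate; a window of vectors pointing one way maps to a move/page operation (FIGs. 7b to 7e), while vectors alternating along one axis map to a nod or head shake (FIGs. 7f, 7g). The threshold eps and the sign conventions are assumptions, not patent text:

        # Hedged sketch of motion-vector classification (S704-S706). Image
        # coordinates are assumed: x grows rightward, y grows downward.
        import numpy as np

        def classify_vectors(samples, preset, eps=5.0):
            v = np.asarray(samples, float) - np.asarray(preset, float)   # S704
            dx, dy = v[:, 0], v[:, 1]
            vertical = np.abs(dy).mean() >= np.abs(dx).mean()
            d = dy if vertical else dx        # dominant axis over the window
            if np.abs(d).mean() < eps:
                return "no-op"
            if (d > eps).any() and (d < -eps).any():
                # Alternating direction along one axis: oscillating gesture (S705).
                return "confirm (nod)" if vertical else "cancel (shake)"
            # Consistent direction: sustained move or page turn (S706).
            if vertical:
                return "page up" if d.mean() < 0 else "page down"
            return "page left" if d.mean() < 0 else "page right"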
  • In this embodiment, only the case in which the reference object moves in a plane parallel to the display surface is given; in other embodiments, the reference object can also move in a plane perpendicular to the display surface.
  • In that case, three-dimensional coordinates can be used to calculate the spatial position of the reference object and its change; for example, the X axis represents the left-right direction, the Y axis the up-down direction and the Z axis the front-back direction.
  • A decreasing Z coordinate represents the reference object approaching the screen, so the displayed content is zoomed in; conversely, it is zoomed out.
  • The above two embodiments are merely two preferred methods of obtaining the position change of the reference object.
  • The position change of the reference object can also be obtained by other methods, such as an image comparison method (i.e., overlaying and comparing captured images of the same size), and so on.
  • Compared with the prior art, the implementation of the present invention brings the following advances: first, the technology performs display control based on the user's facial image,
  • which is more convenient for the user than the buttons, touch screens, mice and even gesture control of the prior art, completely freeing the user's hands;
  • second, the technology is controlled by the relative coordinates of a reference object on the user's face,
  • and the user can set the reference object according to actual needs, such as either pupil, the nose tip, or even a marker made on the face, giving the user diverse personalized choices;
  • third, the working principle of the technology is simple, realizing control of the terminal display merely from changes of the reference object's spatial position or motion vector, with low hardware requirements on the terminal device, so the technology can be applied more widely in daily life;
  • finally, the technology can realize control from the changes of the user's pupil position during reading, which is convenient and fast;
  • in summary, the terminal user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, improving the user experience.
  • The present invention provides a display control method, device and terminal, wherein the display control device periodically acquires the user's facial image through a sensing device, calculates reference object coordinates from the facial image, and controls the display of the display device according to the reference object coordinates and a preset value.
  • Through the implementation of the present invention, the user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, which frees the user's hands and improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

In order to solve the problem that current terminal display control technology depends on manual operation, the present invention provides a display control method, apparatus and terminal. The apparatus includes an acquisition module configured to periodically acquire a facial image of the user, a processing module configured to calculate reference object coordinates from the facial image and compare them with preset coordinates, and an execution module configured to perform an operation according to the processing result. The terminal includes a sensing device configured to sense the user's facial image, a display device configured to display content, and the display control apparatus provided by the present invention; the display control apparatus periodically acquires the user's facial image through the sensing device, calculates reference object coordinates from the facial image, and controls the display of the display device according to the reference object coordinates and a preset value. Through the implementation of the present invention, the user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, which frees the user's hands and improves the user experience.

Description

Display control method, apparatus and terminal
Technical Field
The present invention relates to the field of terminal display applications, and in particular to a display control method, apparatus and terminal.
Background Art
With the popularization of mobile terminals such as mobile phones and tablet computers, the additional functions of mobile terminals, such as cameras and touch screens, keep increasing. At present, the main means by which mobile terminal users control screen display (such as the display of e-books, web pages and pictures) is to control the content display of the terminal device through a keyboard, mouse or touch screen, for example turning pages forward/backward or up/down and zooming fonts or pictures.
All the ways of controlling terminal display through a keyboard, mouse or touch screen share an obvious characteristic: they require the user to perform click operations or gesture control by hand. However, in some special situations (such as during a meal), the user has no way to click because both hands are occupied; at the same time, because touch-screen operation is contact-based, many operations are easily misjudged, for example an accidental tap produced while turning pages may be taken as a confirm, cancel or back operation.
Therefore, how to provide, on the basis of the prior art, a brand-new display control method that frees the hands of the terminal user is a problem to be urgently solved by those skilled in the art.
Summary of the Invention
In order to solve the problem that current display control technology must depend on manual operation, the present invention provides a display control method, apparatus and terminal.
To achieve the purpose of the present invention, the present invention provides a display control method, apparatus and terminal that control terminal display based on facial actions. In one embodiment, the display control method provided by the present invention includes the following steps: periodically acquiring a facial image of the user;
calculating reference object coordinates from the facial image;
performing an operation according to the reference object coordinates and preset coordinates.
Preferably, the reference object in the above embodiment is any point in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
Preferably, in the above embodiment, the step of performing an operation according to the reference object coordinates and the preset coordinates includes two implementations:
calculating a motion vector of the reference object from the reference object coordinates and the preset coordinates, and performing the operation according to the change of the reference object's motion vector within a preset time period; or
determining the spatial position of the reference object from the reference object coordinates and a preset value;
performing the operation according to the change of the reference object's spatial position within the preset time period.
Preferably, in all the above embodiments, when the reference object is either of the user's pupil center points in the facial image, the specific steps of calculating the reference object coordinates from the facial image include:
acquiring the RGB component images of the facial image and selecting the red component image;
subtracting the component value of each point in the red component image from 255 to obtain an inverted image of the red component image;
performing summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions;
determining the coordinates of the reference object according to the peak coordinates.
The present invention also provides a display control device based on facial images. In one embodiment, the display control device includes an acquisition module, a processing module and an execution module, wherein
the acquisition module is configured to periodically acquire a facial image of the user;
the processing module is configured to calculate reference object coordinates from the facial image, compare them with preset coordinates, and output the processing result to the execution module;
the execution module is configured to perform an operation according to the processing result. Meanwhile, in order to apply the display control technology provided by the present invention, the present invention also provides a display control terminal. In one embodiment, the display control terminal includes a sensing device, a display device and the display control device provided by the present invention; the display control device periodically acquires the user's facial image through the sensing device, calculates reference object coordinates from the facial image, and controls the display of the display device according to the reference object coordinates and a preset value.
Through the implementation of the present invention, a technology for controlling terminal display based on the user's facial actions is provided. First, the technology performs display control based on the user's facial image, completely freeing the user's hands. Second, it is controlled by the relative coordinates of a reference object on the user's face, and the user can set the reference object according to actual needs, giving the user diverse personalized choices. Third, the working principle of the technology is simple: control of the terminal display is realized merely from changes of the reference object's spatial position or motion vector, so the hardware requirements on the terminal device are low. Finally, the technology can realize control from the changes of the user's pupil position during reading, which is convenient and fast. In summary, through the implementation of the present invention, the terminal user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, improving the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a display control terminal 1 in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a display control device 12 in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the processing module 122 in the display control device 12 in a preferred embodiment of the present invention;
FIG. 4 is a flowchart of a display control method in an embodiment of the present invention;
FIG. 5a is a flowchart of a reference object positioning method in an embodiment of the present invention;
FIG. 5b is a schematic diagram of a facial image in an embodiment of the present invention;
FIG. 6a is a flowchart of a display control method in an embodiment of the present invention;
FIG. 6b is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention;
FIG. 6c is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention; FIG. 6d is a schematic diagram of a change in the spatial position of a reference object in an embodiment of the present invention;
FIG. 7a is a flowchart of a display control method in an embodiment of the present invention;
FIG. 7b is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention;
FIG. 7c is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention;
FIG. 7d is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention;
FIG. 7e is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention;
FIG. 7f is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention;
FIG. 7g is a schematic diagram of a motion vector change of a reference object in an embodiment of the present invention.
Detailed Description
The present invention is further explained below through specific embodiments in combination with the accompanying drawings.
In order to remove the dependence of existing display control technology on the user's hands, the present invention provides a brand-new display control technology, which monitors the facial (or head) actions of the user in real time through the terminal device and controls the display of the terminal device from the change in position or motion vector between the current coordinates of a reference object on the user's face and set coordinates.
FIG. 1 is a schematic structural diagram of the display control terminal 1 in an embodiment of the present invention.
As can be seen from FIG. 1, in one embodiment, the display control terminal 1 provided by the present invention includes a sensing device 11 configured to sense the user's facial (or head) actions, a display device 13 configured to display content, and a display control device 12 configured to control the content displayed by the display device; the display control device 12 acquires the user's facial image through the sensing device 11 and, after a series of processing (described in detail below), controls the display of the display device 13.
In the above embodiment, the sensing device 11 includes but is not limited to a camera, an infrared sensing device and other sensing devices; the display device 13 includes but is not limited to a mobile phone screen, a computer monitor, a projector screen, an indoor/outdoor large LED screen, and the like.
FIG. 2 is a schematic structural diagram of the display control device 12 in the display control terminal 1 shown in FIG. 1. As can be seen from FIG. 2, in one embodiment, the display control device 12 included in the display control terminal 1 of the above embodiment includes an acquisition module 121, a processing module 122 and an execution module 123, wherein
the acquisition module 121 is configured to periodically acquire a facial image of the user and transmit the acquired facial image to the processing module 122;
the processing module 122 is configured to calculate reference object coordinates from the facial image, compare them with preset coordinates, and output the processing result to the execution module 123;
the execution module 123 is configured to perform an operation according to the processing result transmitted by the processing module 122. In the above embodiment, the acquisition module 121 acquires the user's facial image through the sensing device 11 in the display control terminal 1, while the execution module 123 controls the display of the display device 13 in the display control terminal 1.
In one embodiment, the reference object involved in the above embodiments is any point in the facial image, such as either pupil of the user, the nose tip, or even a marker point on the face; the preset coordinates are the spatial coordinates of the selected reference object when the user normally reads the currently displayed content. It should be noted that when the terminal device outputs through a small screen, the spatial coordinates of the reference object may be just one coordinate point, whereas when the terminal device outputs through a medium-sized or large screen, the spatial coordinates of the reference object are a coordinate range; of course, the type of the reference object's spatial coordinates does not affect the implementation of the present invention.
FIG. 3 is a schematic structural diagram of the processing module 122 in the display control device 12 shown in FIG. 2. As can be seen from FIG. 3, in a preferred embodiment of the present invention, the processing module 122 in the display control device 12 shown in FIG. 2 may include a first processing unit 1221, a second processing unit 1222, a calculation unit 1223 and a storage unit 1224, wherein
the first processing unit 1221 is configured to calculate the motion vector of the reference object from the reference object coordinates and the preset coordinates, and output the processing result according to the change of the reference object's motion vector within a preset time period; the second processing unit 1222 is configured to calculate the spatial position of the reference object from the reference object coordinates and the preset coordinates, and output the processing result according to the change of the reference object's spatial position within the preset time period;
the calculation unit 1223 is configured to: when the reference object is one or both of the user's pupil center points in the facial image, acquire the RGB component images of the facial image and select the red component image; subtract the component value of each point in the red component image from 255 to obtain an inverted image of the red component image; perform summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions; and determine the reference object coordinates according to the peak coordinates. Of course, the calculation unit 1223 mainly realizes the positioning function of the reference object, and this positioning can also be achieved by other positioning methods;
the storage unit 1224 is configured to store the spatial coordinate point or spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device; of course, it may also be configured to store the working records of the display control device 12, which facilitates operations such as user calibration. Meanwhile, when the calculation unit 1223 can provide a data flash-storage function, the functions of the storage unit 1224 can also be realized by the calculation unit 1223.
In the above embodiment, the first processing unit 1221 and the second processing unit 1222 do not have to exist at the same time; either processing unit can realize the processing of the reference object coordinates and the preset coordinates, the two units being merely two different data processing mechanisms.
FIG. 4 is a flowchart of one embodiment of the display control method, provided by the present invention, applied to the display control terminal 1 shown in FIG. 1.
As can be seen from FIG. 4, in one embodiment, the display control method provided by the present invention includes the following steps:
S401: periodically acquire a facial image of the user;
S402: calculate reference object coordinates from the facial image;
S403: process the reference object coordinates and preset coordinates; S404: perform an operation according to the processing result.
In one embodiment, the reference object involved in the display control method shown in FIG. 4 is any one or more points in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
In one embodiment, step S403 of the display control method shown in FIG. 4 can be implemented in at least two ways, whose steps are respectively:
calculating the motion vector of the reference object from the reference object coordinates and the preset coordinates;
performing the operation according to the change of the reference object's motion vector within a preset time period;
or
determining the spatial position of the reference object from the reference object coordinates and a preset value;
performing the operation according to the change of the reference object's spatial position within the preset time period.
In one embodiment, when the reference object set by the user is the user's pupil, the implementation of step S402 of the display control method shown in FIG. 4 includes the following steps:
acquiring the RGB component images of the facial image and selecting the red component image;
subtracting the component value of each point in the red component image from 255 to obtain an inverted image of the red component image;
performing summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions;
determining the coordinates of the reference object according to the peak coordinates.
In order to better explain the display control technology provided by the present invention, the following embodiments interpret the present invention further in the context of everyday life.
The implementation of the present invention mainly includes two aspects: the selection and positioning method of the reference object in the facial image, and the analysis and control method of facial actions; they are described separately below.
Selection and positioning method of the reference object in the facial image:
FIG. 5a is a flowchart of a reference object positioning method in an embodiment of the present invention; FIG. 5b is a schematic diagram of a facial image in an embodiment of the present invention.
When selecting the reference object, the user can make a personalized choice as needed (such as the nose tip, a red dot between the eyebrows, or a similar marker); of course, the user can also use the default setting, in which the default reference object is the user's pupil center point.
The display control technology provided by the present invention is described below with an embodiment, in which the following assumptions are made: the user has yellow-toned skin and black eyes, the sensing device of the display control terminal is a camera, the display device of the display control terminal is the display screen of a mobile phone, and the user uses the terminal's default reference object. As can be seen from FIG. 5a, in one embodiment, the reference object positioning method includes the following steps:
S501: acquire the user's facial image with the camera;
the position of the camera is adjusted appropriately to obtain the facial image shown in FIG. 5b;
S502: select the red component image (R_img) of the RGB color image of the current user's facial image as the data to be pre-processed;
since the RGB ratio of yellow-toned skin (flesh color) is roughly R=255, G=204, B=102, while the RGB ratio of the pupil center of a black eye is R=0, G=0, B=0, the red color difference is clearly the most prominent, and selecting the red component minimizes the calculation error; of course, other color components can also be selected for the calculation, which is not repeated here;
S503: subtract the red component image R_img from 255 to obtain the inverted red-component image RR_img;
in this way the pixel values of most of the facial image become 0, while the pupil centers (and eyebrows) become 255;
S504: perform summation on the inverted red-component image along the X-axis and Y-axis directions respectively; as shown in FIG. 5b, the accumulation along the X-axis direction has two peaks, P_XL and P_XR, which are the center points of the left and right pupils respectively, and the accumulation along the Y-axis direction also has two peaks, P_YU and P_YD, which are the eyebrows and the pupil center points respectively; S505: determine the reference object coordinates;
the interference of the eyebrows (P_YU) is eliminated (since the eyebrows are always above the eyes), so it suffices to keep the lower Y-axis peak position P_YD; at this point the coordinates of the center points of the user's left and right pupils can be determined, and, with one pupil chosen by the user or by the system module as the reference object, the coordinates of the reference object are recorded.
For users with other combinations of skin and eye colors, when the reference object positioning method shown in FIG. 5a is adopted, which component image of the RGB color image of the user's facial image to process can be determined from the previously acquired facial image and an RGB color lookup table (http://www.1141a.com/other/rgb.htm, a web page address); this selection process can easily be implemented by those skilled in the art and is not described here.
It should be noted that the positioning method shown in FIG. 5a can be replaced by other reference object positioning methods, such as spatial coordinate positioning, polar coordinate positioning, infrared position positioning, and the like.
Analysis and control method of facial actions:
First, several everyday user operation actions are agreed upon: moving the screen content is done by turning the head slightly up, down, left or right; for example, to view the next page of the current screen, the user only needs to lower the head slightly, making the motion of trying to see below the screen (outside the field of view). Zooming the screen content is done by increasing or decreasing the distance between the user's face and the screen; for example, to enlarge the current font, the user moves slightly closer to the screen. To confirm an operation, the user nods; to cancel an operation, the user shakes the head. Of course, the display control terminal provided by the present invention also supports user-defined actions, such as a close operation.
The display control technology provided by the present invention is described below with embodiments, in which the following assumptions are made: the coordinates of the reference object are three-dimensional spatial coordinates; the preset coordinates are the spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device; and the operations comprise eight operations: page turning (moving) up, down, left and right, zooming in and out, confirm and cancel.
FIG. 6a is a flowchart of one embodiment of the display control method provided by the present invention. As can be seen from FIG. 6a, in one embodiment, the display control method provided by the present invention includes the following steps:
S601: calculate and record the preset coordinates;
the spatial coordinate movement range of the reference object while the user normally reads the currently displayed content of the terminal device is calculated using the reference object positioning method shown in FIG. 5a;
S602: periodically acquire the user's facial image;
since every imaging device has a sampling period, the period for acquiring the user's facial image here defaults to the sampling period of the imaging device;
S603: calculate the reference object coordinates;
the spatial coordinates of the reference object in the current facial image are calculated using the reference object positioning method shown in FIG. 5a;
S604: determine the spatial position of the reference object;
the current spatial position of the reference object is determined from the reference object coordinates calculated in step S603; S605: calculate the change of the reference object's spatial position within the preset time period;
the size of the preset time period is the execution time T of the nodding or head-shaking action set by the user; when the user has not set it, the preset time period is the system default time length (the statistically obtained average cycle time of human nodding or head shaking), and the starting moment is the moment at which the spatial position of the reference object exceeds the preset coordinates;
for example, if the sampling period of the terminal device is t and the nod/shake execution time set by the user (or by system default) is T, then, starting from the moment at which the spatial position of the reference object exceeds the preset coordinate range, the change of the reference object's spatial position is recorded over n sampling periods and the display is controlled accordingly, where n = T/t; FIG. 6b and FIG. 6c are examples of this spatial position change;
S606: perform an operation according to the change obtained in step S605;
the operations include page turning or moving up, down, left and right, zooming, confirm and cancel.
In the above embodiment, the change of the reference object's spatial position obtained in step S605 may be a change curve. Now assume that the length of the preset time period is 6 sampling periods; within this time period, assume the change curves of the reference object's spatial position are as shown in FIG. 6b and FIG. 6c respectively, where the spatial position change shown in FIG. 6b represents shaking the head (i.e., a cancel operation) and the spatial position change shown in FIG. 6c represents a page-right operation; in FIG. 6b and FIG. 6c, the six numbers 1, 2, 3, 4, 5 and 6 respectively represent the position of the reference object in the corresponding facial image of the user.
To illustrate more intuitively the operations represented when the display control device detects the spatial position of the reference object in the present invention, FIG. 6d is used as an example: within the preset time period, if
the reference object keeps moving within zone one, the page is turned up;
the reference object keeps moving within zone two, the page is turned down;
the reference object keeps moving within zone three, the page is turned left;
the reference object keeps moving within zone four, the page is turned right;
the reference object moves back and forth across zones one, zero and two, this represents a nod operation;
the reference object moves back and forth across zones three, zero and four, this represents a head-shake operation.
In this embodiment, only the case in which the reference object moves in a plane parallel to the display surface is given; in other embodiments, the reference object can also move in a plane perpendicular to the display surface. In that case, three-dimensional coordinates can be used to calculate the spatial position of the reference object and its change; for example, with the X axis representing the left-right direction, the Y axis the up-down direction and the Z axis the front-back direction, when the reference object moves along the Z axis, a decreasing Z coordinate represents the reference object approaching the screen, so the displayed content is zoomed in; conversely, it is zoomed out.
Of course, the user can customize the operations represented by each action as needed, and can also define custom operation actions, such as visual perception calibration (when using the device for the first time, the user needs to gaze at the four corners of the screen in turn, letting the camera record the free movement distance of the spatial coordinates of the user's reference object within the screen range, the maximum horizontal/vertical coordinates, and the screen position currently locked by the operator's gaze, which improves the accuracy of subsequent operations) and screen content locking (after the current user's reference object coordinates are analyzed, a content-lock cursor needs to appear on the screen to inform the user of the screen position currently locked by the gaze; if the operator considers the analysis inaccurate, further visual perception calibration can be performed until the gaze-locked screen position is perceived accurately).
Of course, this display control method also has other implementations, for example, the method shown in FIG. 7a; FIG. 7a is a flowchart of another embodiment of the display control method provided by the present invention.
As can be seen from FIG. 7a, in one embodiment, the display control method provided by the present invention includes the following steps:
S701: calculate and record the preset coordinates; the same as step S601;
S702: periodically acquire the user's facial image; the same as step S602;
S703: calculate the reference object coordinates; the same as step S603;
S704: determine the motion vector of the reference object;
the motion vector of the reference object is obtained by subtracting the preset coordinates from the reference object coordinates calculated in step S603;
S705: calculate the change of the reference object's motion vector within the preset time period;
the resulting change diagrams of the reference object's motion vector are shown in FIG. 7b to FIG. 7g;
S706: perform an operation according to the change obtained in step S705.
In the above embodiment, the change of the reference object's motion vector obtained in step S705 is a change curve. Now assume that the length of the preset time period is 6 sampling periods; within this time period, assume the change curves of the reference object's motion vector are as shown in FIG. 7b to FIG. 7g respectively (the size and direction of each arrow in the figures represent the motion vector of the reference object relative to the preset coordinates in the six facial images), where
the motion vector change shown in FIG. 7b represents looking up (i.e., moving up or page up); the motion vector change shown in FIG. 7c represents moving right (i.e., moving or paging right); the motion vector change shown in FIG. 7d represents looking down (i.e., moving down or page down); the motion vector change shown in FIG. 7e represents moving left (i.e., moving or paging left); the motion vector change shown in FIG. 7f represents shaking the head (i.e., a cancel or negative operation); the motion vector change shown in FIG. 7g represents a nod (i.e., a confirm or affirmative operation).
In this embodiment, only the case in which the reference object moves in a plane parallel to the display surface is given; in other embodiments, the reference object can also move in a plane perpendicular to the display surface. In that case, three-dimensional coordinates can be used to calculate the spatial position of the reference object and its change; for example, with the X axis representing the left-right direction, the Y axis the up-down direction and the Z axis the front-back direction, when the reference object moves along the Z axis, a decreasing Z coordinate represents the reference object approaching the screen, so the displayed content is zoomed in; conversely, it is zoomed out.
The above two embodiments are merely two preferred methods of obtaining the position change of the reference object; of course, the position change of the reference object can also be obtained by other methods, such as an image comparison method (i.e., overlaying and comparing captured images of the same size), and so on.
Compared with the prior art, the implementation of the present invention brings the following advances:
First, the technology performs display control based on the user's facial image, which is more convenient for the user than the buttons, touch screens, mice and even gesture control of the prior art, completely freeing the user's hands;
Second, the technology is controlled by the relative coordinates of a reference object on the user's face, and the user can set the reference object according to actual needs, such as either pupil, the nose tip, or even a marker point made on the face, giving the user diverse personalized choices;
Third, the working principle of the technology is simple, realizing control of the terminal display merely from changes of the reference object's spatial position or motion vector, with low hardware requirements on the terminal device, so the technology can be applied more widely in daily life;
Finally, the technology can realize control from the changes of the user's pupil position during reading, which is convenient and fast.
In summary, through the implementation of the present invention, the terminal user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, improving the user experience. The above are merely specific embodiments of the present invention; any equivalent change or modification in form still falls within the protection scope of the technical solution of the present invention. Industrial Applicability
The present invention provides a display control method, device and terminal, wherein the display control device periodically acquires the user's facial image through a sensing device, calculates reference object coordinates from the facial image, and controls the display of the display device according to the reference object coordinates and a preset value. Through the implementation of the present invention, the user can control the displayed content of the terminal through facial actions alone, without a keyboard, mouse or touch screen, which frees the user's hands and improves the user experience.

Claims

1. A display control method, comprising:
periodically acquiring a facial image of a user;
calculating reference object coordinates from the facial image;
processing the reference object coordinates and preset coordinates, and performing an operation.
2. The display control method according to claim 1, wherein the reference object is any one or more points in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
3. The display control method according to claim 2, wherein processing the reference object coordinates and the preset coordinates and performing an operation comprises the following steps:
calculating a motion vector of the reference object from the reference object coordinates and the preset coordinates;
performing the operation according to the change of the reference object's motion vector within a preset time period.
4. The display control method according to claim 2, wherein processing the reference object coordinates and the preset coordinates and performing an operation comprises the following steps:
determining the spatial position of the reference object from the reference object coordinates and a preset value;
performing the operation according to the change of the reference object's spatial position within a preset time period.
5. The display control method according to any one of claims 1 to 4, wherein, when the reference object is one or both of the user's pupil center points in the facial image, the operation of calculating the reference object coordinates from the facial image comprises the following steps:
acquiring the RGB component images of the facial image and selecting the red component image;
subtracting the component value of each point in the red component image from 255 to obtain an inverted image of the red component image;
performing summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions;
determining the coordinates of the reference object according to the peak coordinates.
6. A display control device, comprising an acquisition module, a processing module and an execution module; the acquisition module is configured to periodically acquire a facial image of a user;
the processing module is configured to calculate reference object coordinates from the facial image, compare them with preset coordinates, and output a processing result to the execution module;
the execution module is configured to perform an operation according to the processing result.
7. The display control device according to claim 6, wherein the reference object is any one or more points in the facial image; the preset coordinates are the spatial coordinates of the reference object when the user normally reads the currently displayed content.
8. The display control device according to claim 7, wherein the processing module comprises a first processing unit;
the first processing unit is configured to calculate a motion vector of the reference object from the reference object coordinates and the preset coordinates, and output a processing result according to the change of the reference object's motion vector within a preset time period.
9. The display control device according to claim 7, wherein the processing module comprises a second processing unit;
the second processing unit is configured to calculate the spatial position of the reference object from the reference object coordinates and the preset coordinates, and output a processing result according to the change of the reference object's spatial position within a preset time period.
10. The display control device according to any one of claims 6 to 9, wherein the processing module comprises a calculation unit;
the calculation unit is configured to: when the reference object is one or both of the user's pupil center points in the facial image, acquire the RGB component images of the facial image, select the red component image, subtract the component value of each point in the red component image from 255 to obtain an inverted image of the red component image, perform summation along the X-axis and Y-axis directions on the inverted image respectively to obtain the peak coordinates in the X-axis and Y-axis directions, and determine the reference object coordinates according to the peak coordinates.
11. A display control terminal, comprising a sensing device and a display device, and further comprising the display control device according to any one of claims 6 to 10; the display control device periodically acquires the user's facial image through the sensing device, calculates reference object coordinates from the facial image, and controls the content display of the display device according to the reference object coordinates and a preset value.
PCT/CN2013/077509 2012-08-24 2013-06-19 Display control method, apparatus and terminal WO2014029229A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP13831638.5A EP2879020B1 (en) 2012-08-24 2013-06-19 Display control method, apparatus, and terminal
US14/421,067 US20150192990A1 (en) 2012-08-24 2013-06-19 Display control method, apparatus, and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210305149.1A 2012-08-24 Display control method, apparatus and terminal
CN201210305149.1 2012-08-24

Publications (1)

Publication Number Publication Date
WO2014029229A1 (zh)

Family

ID=47481652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/077509 WO2014029229A1 (zh) Display control method, apparatus and terminal

Country Status (4)

Country Link
US (1) US20150192990A1
EP (1) EP2879020B1
CN (1) CN102880290B
WO (1) WO2014029229A1

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046533A (zh) * 2018-01-15 2019-07-23 上海聚虹光电科技有限公司 Liveness detection method for biometric identification

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880290B (zh) 2012-08-24 2016-06-22 中兴通讯股份有限公司 Display control method, apparatus and terminal
CN103116403A (zh) 2013-02-16 2013-05-22 广东欧珀移动通信有限公司 Screen switching method and mobile intelligent terminal
CN103279253A (zh) 2013-05-23 2013-09-04 广东欧珀移动通信有限公司 Theme setting method and terminal device
CN103885579B (zh) 2013-09-27 2017-02-15 刘翔 Terminal display method
CN104866506B (zh) 2014-02-25 2019-07-09 腾讯科技(深圳)有限公司 Method and apparatus for playing an animation
US9529428B1 * 2014-03-28 2016-12-27 Amazon Technologies, Inc. Using head movement to adjust focus on content of a display
CN105573608A (zh) 2014-10-11 2016-05-11 乐视致新电子科技(天津)有限公司 Method and apparatus for displaying the operation state in human-computer interaction
CN105159451B (zh) 2015-08-26 2018-05-22 北京京东尚科信息技术有限公司 Page-turning method and apparatus for digital reading
CN107067424B (zh) 2017-04-18 2019-07-12 北京动视科技有限公司 Ball-hitting image generation method and system
CN108171155A (zh) 2017-12-26 2018-06-15 上海展扬通信技术有限公司 Image zooming method and terminal
CN108170282A (zh) 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 Method and apparatus for controlling a three-dimensional scene
CN112596605A (zh) 2020-12-14 2021-04-02 清华大学 AR glasses control method and apparatus, AR glasses, and storage medium
CN113515190A (zh) 2021-05-06 2021-10-19 广东魅视科技股份有限公司 Method for implementing mouse functions based on human gestures
CN115793845B (zh) 2022-10-10 2023-08-08 北京城建集团有限责任公司 Holographic-image-based smart exhibition hall system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102012742A (zh) * 2010-11-24 2011-04-13 广东威创视讯科技股份有限公司 Eye-mouse calibration method and device
CN102081503A (zh) * 2011-01-25 2011-06-01 汉王科技股份有限公司 Electronic reader that turns pages automatically based on gaze tracking, and method thereof
CN102880290A (zh) * 2012-08-24 2013-01-16 中兴通讯股份有限公司 Display control method, apparatus and terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6419638B1 (en) * 1993-07-20 2002-07-16 Sam H. Hay Optical recognition methods for locating eyes
US6134339A (en) * 1998-09-17 2000-10-17 Eastman Kodak Company Method and apparatus for determining the position of eyes and for correcting eye-defects in a captured frame
US6925122B2 (en) * 2002-07-25 2005-08-02 National Research Council Method for video-based nose location tracking and hands-free computer input devices based thereon
CN1293446C (zh) * 2005-06-02 2007-01-03 北京中星微电子有限公司 Non-contact eye-controlled operation system and method
CN101576800A (zh) * 2008-05-06 2009-11-11 纬创资通股份有限公司 Method and device for driving page scrolling on an electronic device display
CN102116606B (zh) * 2009-12-30 2012-04-25 重庆工商大学 Method and device for measuring axial displacement characterized by one-dimensional tri-color peaks and valleys
JP5387557B2 (ja) * 2010-12-27 2014-01-15 カシオ計算機株式会社 Information processing apparatus and method, and program

Also Published As

Publication number Publication date
EP2879020B1 (en) 2018-11-14
CN102880290A (zh) 2013-01-16
US20150192990A1 (en) 2015-07-09
CN102880290B (zh) 2016-06-22
EP2879020A1 (en) 2015-06-03
EP2879020A4 (en) 2015-08-19

Similar Documents

Publication Publication Date Title
WO2014029229A1 (zh) Display control method, apparatus and terminal
Khamis et al. The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned
US20150020032A1 (en) Three-Dimensional Display-Based Cursor Operation Method and Mobile Terminal
US20150362998A1 (en) Motion control for managing content
US20150220158A1 (en) Methods and Apparatus for Mapping of Arbitrary Human Motion Within an Arbitrary Space Bounded by a User's Range of Motion
KR101196291B1 (ko) Terminal that provides a 3D interface by recognizing finger movements, and method therefor
EP3349095B1 (en) Method, device, and terminal for displaying panoramic visual content
CN107172347B (zh) Photographing method and terminal
WO2022174594A1 (zh) Multi-camera-based bare-hand tracking and display method, apparatus and system
CN111527468A (zh) Mid-air interaction method, apparatus and device
KR102294599B1 (ko) Display device and control method therefor
US9377866B1 (en) Depth-based position mapping
US10444831B2 (en) User-input apparatus, method and program for user-input
WO2015067023A1 (zh) Motion-sensing control method, terminal and system for video conferencing
Kim et al. Oddeyecam: A sensing technique for body-centric peephole interaction using wfov rgb and nfov depth cameras
KR20160055407A (ko) Holography touch method and projector touch method
Colaço Sensor design and interaction techniques for gestural input to smart glasses and mobile devices
CN108369477B (zh) 信息处理装置、信息处理方法和程序
JP2020149336A (ja) Information processing apparatus, display control method, and program
Morita et al. Head orientation control of projection area for projected virtual hand interface on wheelchair
KR20180044535A (ko) Holography smart home system and control method
Islam et al. Developing a novel hands-free interaction technique based on nose and teeth movements for using mobile devices
CN110543274B (zh) Image display method, mobile terminal, and device with storage function
KR20150137908A (ko) Holography touch method and projector touch method
JP2014149439A (ja) Display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13831638

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14421067

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013831638

Country of ref document: EP