WO2016065706A1 - Control method and device for intelligent terminal, and computer storage medium - Google Patents

Control method and device for intelligent terminal, and computer storage medium

Info

Publication number
WO2016065706A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
user
line
positioning
screen
Prior art date
Application number
PCT/CN2014/094137
Other languages
English (en)
French (fr)
Inventor
王新
罗伟
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2016065706A1 publication Critical patent/WO2016065706A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to the field of communications, and in particular, to a method, an apparatus, and a computer storage medium for controlling a smart terminal.
  • the embodiments of the present invention are intended to provide a method, a device, and a computer storage medium for controlling an intelligent terminal, which can solve the problems that existing hand operation of a smart terminal is inaccurate, inconvenient, and cumbersome.
  • a first aspect of the embodiments of the present invention provides a method for controlling an intelligent terminal, including: acquiring position information of a user's eyes; determining, according to the position information, a position on the screen corresponding to the user's eyes; determining an object at the position as a control object; receiving a control instruction for controlling the control object; and performing a corresponding control operation on the control object according to the control instruction.
  • the acquiring the position information of the user's eyes includes acquiring the position information of both of the user's eyes; and determining, according to the position information, the position on the screen corresponding to the user's eyes includes:
  • determining, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen;
  • determining a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
  • the preset position on the line connecting the two projection points includes: the midpoint of the line connecting the two projection points, or the position one third of the way along that line from one of the projection points.
  • the acquiring the position information of the user's eyes includes: taking one of the user's two eyes as a positioning eye, and acquiring the position information of the positioning eye;
  • determining, according to the position information, the position on the screen corresponding to the user's eyes includes:
  • acquiring a line K between a camera on the front of the smart terminal screen and the positioning eye; determining, according to the line K, a positioning line projected by the positioning eye onto the smart terminal screen; and taking the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
  • the determining, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen comprises:
  • setting the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determining a perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen;
  • in the plane defined by the line K and the perpendicular M, drawing, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
  • calculating the intersection point P of the positioning line and the smart terminal screen includes:
  • calculating the distance L of the line K from the position information of the positioning eye and the origin; calculating the length of the perpendicular M from the position information of the positioning eye, and obtaining the angle γ between the positioning line and the perpendicular M; calculating the length S of the positioning line from the length of the perpendicular M and the angle γ; calculating the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtaining the angle β between the straight line connecting P to the origin and the y-axis of the origin;
  • the position of the intersection point P is then calculated from the distance a and the angle β (a small sketch of this final step follows).
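  • As a minimal illustrative sketch of that final step, assuming the camera center is the origin of the screen plane and the angle convention of the embodiments below (the function name is ours, not the patent's):

```python
import math

def intersection_from_polar(a: float, beta: float) -> tuple[float, float]:
    """Final step of the calculation above: convert the distance a from the
    origin (camera center) and the angle beta between the origin-to-P line
    and the y-axis into the screen coordinates of P."""
    return a * math.sin(beta), a * math.cos(beta)

# e.g. a = 0.17 m at beta = 40 degrees:
x, y = intersection_from_polar(0.17, math.radians(40))  # ≈ (0.109, 0.130)
```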
  • the receiving the control instruction for controlling the control object comprises: acquiring the user's eye action, comparing the eye action with a preset correspondence between eye actions and control commands, and determining the control command corresponding to the eye action; or acquiring the user's voice information, comparing the voice information with a preset correspondence between voice information and control commands, and determining the control command corresponding to the voice information.
  • the eye action includes at least one of: the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball;
  • the preset correspondence between eye actions and control commands includes at least one of the following (a lookup-table sketch follows the list):
  • when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control command;
  • when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control command in a page with paging, and to a "slide down" control command in a page with a slider;
  • when the user's eyeball rotates counterclockwise, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider;
  • when the user's left eye stays closed for more than a second preset time, the eye action corresponds to saving the picture at that position or bookmarking the link;
  • when the user's right eye stays closed for more than a third preset time, the eye action corresponds to deleting the file at that position or another file;
  • when both eyes stay closed for more than a fourth preset time, the eye action corresponds to automatically opening the search function, or starting the voice search function;
  • when the user's left eye blinks consecutively at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function;
  • when the user's right eye blinks consecutively at more than a second preset frequency, the eye action corresponds to automatically answering the call and enabling the hands-free function;
  • when the user's eyeball pans left, the eye action corresponds to a "next page" control command in a page with paging, and to a "slide down" control command in a page with a slider;
  • when the user's eyeball pans right, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider.
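  • To make the dispatch concrete, here is a minimal lookup-table sketch of the correspondence above; the event names are illustrative assumptions, not identifiers from the patent, and the times/frequencies stand for the user-configurable presets:

```python
from typing import Optional

# Illustrative event names; the patent fixes only the action -> command pairs.
EYE_ACTION_COMMANDS = {
    "gaze_over_first_preset_time": "confirm",
    "eyeball_clockwise": "next_page / slide_down",
    "eyeball_counterclockwise": "previous_page / slide_up",
    "left_eye_closed_over_second_preset_time": "save_picture_or_bookmark_link",
    "right_eye_closed_over_third_preset_time": "delete_file",
    "both_eyes_closed_over_fourth_preset_time": "open_search_or_voice_search",
    "left_eye_blinks_over_first_preset_frequency": "enter_call_interface_voice_call",
    "right_eye_blinks_over_second_preset_frequency": "answer_call_hands_free",
    "eyeball_pan_left": "next_page / slide_down",
    "eyeball_pan_right": "previous_page / slide_up",
}

def command_for(action: str) -> Optional[str]:
    # Unrecognized actions (e.g. a normal blink) produce no command.
    return EYE_ACTION_COMMANDS.get(action)
```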
  • the second aspect of the embodiments of the present invention further provides a control device for an intelligent terminal, including an acquiring module, a determining module, a control object module, a receiving module, and an executing module (an illustrative wiring sketch follows the module list):
  • the acquiring module is configured to acquire location information of a user's eyes
  • the determining module is configured to determine, according to the location information, a location of the user's eyes corresponding to the screen;
  • the control object module is configured to determine an object at the location as a control object
  • the receiving module is configured to receive a control instruction for controlling the control object
  • the execution module is configured to perform a corresponding control operation on the control object according to the control instruction.
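  • As an illustrative sketch of how the five modules could be wired together — all names and callback signatures here are our assumptions; the patent does not prescribe an API:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Position = Tuple[float, float]

@dataclass
class SmartTerminalController:
    """One pass through the five modules of the second aspect; every callback
    is a placeholder for a device-specific implementation."""
    acquire: Callable[[], dict]                          # acquiring module
    locate: Callable[[dict], Position]                   # determining module
    pick_object: Callable[[Position], Optional[object]]  # control object module
    receive: Callable[[], Optional[str]]                 # receiving module
    execute: Callable[[object, str], None]               # executing module

    def tick(self) -> None:
        eye_info = self.acquire()             # eye position information
        position = self.locate(eye_info)      # position on the screen
        target = self.pick_object(position)   # object at that position
        command = self.receive()              # control instruction
        if target is not None and command is not None:
            self.execute(target, command)     # corresponding operation
```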
  • the acquiring module is further configured to acquire the position information of the user's eyes, including acquiring the position information of both of the user's eyes; the determining module is further configured to: determine, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen;
  • the preset position on the line connecting the two projection points is determined as the position on the screen corresponding to the user's eyes.
  • the acquiring module is further configured to take one of the user's two eyes as a positioning eye and acquire the position information of the positioning eye; the determining module is further configured to: acquire a line K between the camera on the front of the smart terminal screen and the positioning eye, and determine, according to the line K, a positioning line projected by the positioning eye onto the smart terminal screen;
  • the position of the intersection point P of the positioning line and the smart terminal screen is taken as the position on the screen corresponding to the positioning eye.
  • the determining module is further configured to:
  • set the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determine a perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen;
  • in the plane defined by the line K and the perpendicular M, draw, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
  • the determining module is further configured to:
  • calculate the distance L of the line K from the position information of the positioning eye and the origin; calculate the length of the perpendicular M from the position information of the positioning eye and obtain the angle γ between the positioning line and the perpendicular M; calculate the length S of the positioning line from the length of the perpendicular M and the angle γ; calculate the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtain the angle β between the straight line connecting P to the origin and the y-axis of the origin; and calculate the position of the intersection point P from the distance a and the angle β.
  • the receiving module includes an eye receiving submodule and a voice receiving submodule:
  • the eye receiving submodule is configured to acquire the user's eye action, compare the eye action with a preset correspondence between eye actions and control commands, and determine the control command corresponding to the eye action;
  • the voice receiving submodule is configured to acquire the user's voice information, compare the voice information with a preset correspondence between voice information and control commands, and determine the control command corresponding to the voice information.
  • the eye action includes at least one of: the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball;
  • the preset correspondence between eye actions and control commands includes at least one of the following:
  • when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control command;
  • when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control command in a page with paging, and to a "slide down" control command in a page with a slider;
  • when the user's eyeball rotates counterclockwise, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider;
  • when the user's left eye stays closed for more than a second preset time, the eye action corresponds to saving the picture at that position or bookmarking the link;
  • when the user's right eye stays closed for more than a third preset time, the eye action corresponds to deleting the file at that position or another file;
  • when both eyes stay closed for more than a fourth preset time, the eye action corresponds to automatically opening the search function, or starting the voice search function;
  • when the user's left eye blinks consecutively at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function;
  • when the user's right eye blinks consecutively at more than a second preset frequency, the eye action corresponds to automatically answering the call and enabling the hands-free function;
  • when the user's eyeball pans left, the eye action corresponds to a "next page" control command in a page with paging, and to a "slide down" control command in a page with a slider;
  • when the user's eyeball pans right, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider.
  • a third aspect of the embodiments of the present invention provides a computer storage medium, where the computer storage medium stores computer-executable instructions for performing at least one of the methods of the first aspect of the embodiments of the present invention.
  • the method and device for controlling an intelligent terminal and the computer storage medium provided by the embodiments of the present invention first acquire the position information of the user's eyes, determine from that information the position on the screen corresponding to the user's eyes, and determine the object at that position as the control object; a control instruction for controlling the control object is then received, and a corresponding control operation is performed on the control object according to the instruction.
  • the control object can thus be located and selected from the user's eye position information, freeing the hands; this avoids the inaccuracy of selecting a control object by touch, the inconvenience of manual operation in some scenarios (for example, when there is water or dirt on the hands), and the cumbersomeness of operating the screen with both hands.
  • optionally, eye activity can also relieve eye fatigue and enhance the user experience.
  • FIG. 1 is a schematic flowchart of a method for controlling an intelligent terminal according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic flowchart of a method for controlling an intelligent terminal according to Embodiment 2 of the present invention
  • FIG. 3 is a schematic structural diagram of an entire intelligent terminal involved in a method for controlling an intelligent terminal according to Embodiment 2 of the present invention
  • FIG. 4 is a schematic perspective view of a binocular positioning principle according to Embodiment 2 of the present invention.
  • FIG. 5 is a first schematic structural diagram of the control device for an intelligent terminal according to Embodiment 3 of the present invention;
  • FIG. 6 is a second schematic structural diagram of the control device for an intelligent terminal according to Embodiment 3 of the present invention.
  • the present application is based on a simulated binocular positioning algorithm. A specific image acquisition device, such as a camera, collects the user's eye actions and the way the user's eyes focus on the smart terminal, and a simulation algorithm can calculate the position on the screen corresponding to the user's eyes.
  • if the image acquisition device is integrated into the terminal, then while the user is using the terminal the device collects the position information of the user's eyes and calculates the on-screen position at which the eyes are aimed, achieving the effect of simulated finger positioning; the terminal then receives the user's control instruction and carries out the operation. The control instruction can be determined from eye actions, thereby replacing finger operation and freeing the hands.
  • Embodiment 1:
  • the control method of the intelligent terminal in this embodiment includes the following steps:
  • Step S101: acquire the user's eye position information;
  • in this step, the smart terminal may acquire image data of the user's eyes through an image acquisition device and parse the position information of the user's eyes from that image data.
  • the position information of the user's eyes can also be obtained directly through a sensing device.
  • specifically, the position information of the user's eyes includes the distance from the user's eyes to the smart terminal screen, the coordinates of the eyes, the angle between the eyes and the smart terminal, and the like.
  • the smart terminal here may include a mobile phone, a tablet, a computer, and so on.
  • the eye position information acquired here may be the position information of both of the user's eyes, or of one of the user's two eyes, depending on the specific situation.
  • Step S102: determine, according to the position information, the position on the screen corresponding to the user's eyes;
  • in this step, a person's two eyes can serve to locate the position of an object: if the line of sight of each eye is regarded as a straight line, the two lines of sight must meet at some position on the screen.
  • once the position information of the user's eyes is known, the position at which the user's eyes are aimed on the smart terminal screen can be determined from the relationship between the eye positions and the smart terminal, that is, the position the user is staring at; this position is taken as the position on the screen corresponding to the user's eyes.
  • for example, when reading a novel, the position at which the eyes are aimed on the screen generally changes with the position of the text.
  • in this step, the line of sight of the user's eyes can be obtained directly through the image acquisition device of the smart terminal or at least one camera of the smart device, and the position on the smart terminal screen can be located from that line of sight; alternatively, the eye position information can be obtained first and the on-screen gaze position derived from it. Other ways of obtaining the position at which the user's gaze lands on the screen can of course also be adopted. It is worth noting that the line of sight here refers to a virtual straight line from the user's eye to the smart terminal screen; a ray-plane sketch of this idea follows.
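  • The "line of sight as a straight line meeting the screen" idea can be sketched as a ray-plane intersection, assuming (as a simplification of the embodiments below) that the screen lies in the plane z = 0 with the camera at the origin:

```python
import numpy as np

def gaze_point_on_screen(eye: np.ndarray, direction: np.ndarray):
    """Intersect a gaze ray (eye position plus gaze direction) with the
    screen plane z = 0; returns None when the gaze never reaches the screen."""
    if abs(direction[2]) < 1e-9:      # gaze parallel to the screen plane
        return None
    t = -eye[2] / direction[2]        # solve eye_z + t * dir_z = 0
    if t <= 0:                        # intersection would lie behind the eye
        return None
    return eye + t * direction        # point on the screen (z == 0)

# e.g. an eye 30 cm above the screen, looking down and slightly right:
p = gaze_point_on_screen(np.array([0.0, 0.1, 0.3]),
                         np.array([0.2, 0.1, -1.0]))  # ≈ (0.06, 0.13, 0.0)
```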
  • Step S103: determine the object at the position as the control object;
  • in this step, after the position on the smart terminal screen corresponding to the user's eyes is determined, it is judged whether an object exists at that position.
  • the object may be a function icon, a picture, a file, and so on; anything that can be operated on should be understood as qualifying. If a function icon is present, it is judged whether the icon is to be operated. For example, if the eyes are aimed at a position on the screen holding a "next page" function icon, "next page" is determined as the control object; entering the next page then links to the next page, which should be understood as equivalent to touching the function icon or link once by hand. For a picture, the operation may be enlarging or shrinking it, as well as deleting or saving it; for a file, it may be opening, copying, deleting, and so on.
  • Step S104: receive a control instruction from the user for controlling the control object;
  • in this step, the manner of triggering the control command may include a voice control mode, a manual control mode, and an eye control mode. It should be understood that any existing way of triggering control commands can be used to trigger the control commands of the present application.
  • Step S105 Perform a corresponding control operation on the control object according to the control instruction.
  • in this step, a corresponding control operation is performed on the control object of step S103 according to the control command triggered in step S104.
  • for example, "next page" is selected in step S103, and a control command confirming execution of "next page" is triggered in step S104; the user can then say "next page" to link to and open the next page. Some mobile phones also provide up and down hardware buttons, which can likewise be used to open the next page.
  • optionally, in step S102 above, acquiring the position information of the user's eyes includes acquiring the position information of both eyes; determining the on-screen position from that information may then be: determining, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen, and determining a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
  • the preset position on the line connecting the two projection points includes: the midpoint of the line between the two projection points, or the position one third of the way along the line from one of the projection points.
  • other preset positions are also possible and can be set according to the specific situation; a small sketch follows.
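  • A tiny sketch of the preset-position rule, assuming 2-D projection points in screen coordinates; the ratio parameter is our generalization of the midpoint and one-third cases:

```python
def preset_position(p_left: tuple, p_right: tuple, ratio: float = 0.5) -> tuple:
    """Point on the segment between the two projection points: ratio = 0.5
    gives the midpoint, ratio = 1/3 the one-third position measured from
    p_left; other ratios can be configured as the text notes."""
    (x1, y1), (x2, y2) = p_left, p_right
    return (x1 + ratio * (x2 - x1), y1 + ratio * (y2 - y1))

assert preset_position((0.0, 0.0), (6.0, 0.0)) == (3.0, 0.0)         # midpoint
assert preset_position((0.0, 0.0), (6.0, 0.0), 1 / 3) == (2.0, 0.0)  # one third
```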
  • optionally, in step S102 above, acquiring the position information of the user's eyes further includes acquiring the position information of one of the user's two eyes; determining the on-screen position from that information may then be: taking one of the user's two eyes as the positioning eye; acquiring the line K between the camera on the front of the smart terminal screen and the positioning eye; determining, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen; and taking the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
  • determining the positioning line from the line K may specifically be: setting the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determining the perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen; and, in the plane defined by the line K and the perpendicular M, drawing, starting from the center point of the positioning eye, a straight line N between K and M as the positioning line, the angle between this line and the line K being a preset angle α. The value of α is smaller than the angle between the line K and the perpendicular M.
  • calculating the intersection point P of the positioning line and the smart terminal screen may specifically be: calculating the distance L from the positioning eye to the origin from the position information of the positioning eye; calculating the length of the perpendicular M from the position information of the positioning eye, and obtaining the angle γ between the positioning line and the perpendicular M; calculating the length S of the positioning line from the length of the perpendicular M and the angle γ; calculating the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtaining the angle β between the straight line connecting P to the origin and the y-axis of the origin; and calculating the position of the intersection point P from the distance a and the angle β.
  • the specific value of the angle α can be set from empirical values, and the method is not limited to determining the positioning line and calculating the position of the intersection point P in the manner above; other manners can also be implemented. A sketch of the calculation follows.
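  • The chain of steps above — S from M and γ, a from L, S, and α by the law of cosines, then P from a and β — can be sketched as follows; argument names mirror the symbols in the text, and the law-of-cosines square root is made explicit:

```python
import math

def locate_gaze_point(L: float, z2: float, gamma: float,
                      alpha: float, beta: float) -> tuple[float, float]:
    """Single-eye positioning as described above (angles in radians).

    L     -- distance from the positioning eye to the origin (camera center)
    z2    -- length of the perpendicular M (eye height above the screen plane)
    gamma -- angle between the positioning line and the perpendicular M
    alpha -- preset angle between the positioning line and the line K
    beta  -- angle between the origin-to-P line and the y-axis
    """
    S = z2 / math.cos(gamma)                        # length of positioning line
    # distance a from P to the origin, by the law of cosines on (eye, origin, P)
    a = math.sqrt(L * L + S * S - 2.0 * L * S * math.cos(alpha))
    return a * math.sin(beta), a * math.cos(beta)   # coordinates of P
```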
  • optionally, the preferred way of triggering a control command may include acquiring the user's eye action, comparing the eye action with the preset correspondence between eye actions and control commands, determining the control command corresponding to the eye action, and triggering that command.
  • specifically, the eye action can be derived from changes of eye position while the position information is being acquired, so the action is obtained quickly; it is compared with the preset correspondence between eye actions and control commands, the corresponding control command is determined, and the command is triggered. The eye action can also be monitored through a sensing device and compared with the preset correspondence in the same way.
  • alternatively, the user's voice information can be acquired and compared with the preset correspondence between voice information and control commands to determine and trigger the corresponding control command.
  • the specific implementation here can adopt existing voice control operation modes.
  • the eye action in this embodiment may include at least one of: the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball. It should be understood that the matching of eye actions to control commands is set according to the user's own preferences.
  • optionally, when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control command; when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control command in a page with paging and to a "slide down" control command in a page with a slider; when the user's eyeball rotates counterclockwise, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider; when the user's left eye stays closed for more than a second preset time, it corresponds to saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than a third preset time, it corresponds to deleting the file at that position or another file; when both eyes stay closed for more than a fourth preset time, it corresponds to automatically opening the search function or starting the voice search function.
  • when the user's left eye blinks consecutively at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function; when the user's right eye blinks consecutively at more than a second preset frequency, the eye action corresponds to automatically answering the call and enabling the hands-free function.
  • when the user's eyeball pans left, the eye action corresponds to a "next page" control command in a page with paging and to a "slide down" control command in a page with a slider; when the user's eyeball pans right, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider.
  • it is worth noting that the first preset time, second preset time, third preset time, fourth preset time, first preset frequency, and second preset frequency can all be set according to the user's preference and are not limited to the specific values in the embodiments of the present application; the correspondence between eye actions and control commands can likewise be set according to the user's preference.
  • for example, when the user gazes at a link or icon on the screen for more than 2 seconds, the graphics processing unit can parse the action into a "confirm" command and send the command, together with the on-screen coordinates of the eye focus, to the terminal's processor;
  • after accepting the command, the processor operates the icon or link at those coordinates, which is equivalent to touching it once by hand.
  • many such eye actions can be predefined; they are collected by the graphics processing unit, and the corresponding control command is then input, completing the parsing of the command.
  • preferably, the eye actions and their parsed commands cover the following range: when the user gazes at a link or icon on the screen for more than 2 seconds, the action is parsed as a "confirm" command; when the user's eyes rotate clockwise, it is parsed as "next page" in a page with paging and as sliding down in a page with a slider; when the user's eyes rotate counterclockwise, it is parsed as "previous page" in a page with paging and as "slide up" in a page with a slider; when the user's left eye stays closed for more than 2 seconds, it is parsed as saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than 2 seconds, the file at that position or another file is deleted; when both eyes stay closed for more than 2 seconds, the search function is automatically opened or the voice search function is started; when the user's left eye blinks more than 2 times consecutively, the call interface is automatically entered and the voice call function is started; when the user's right eye blinks more than 2 times consecutively, the call is automatically answered and the hands-free function is enabled; when the eyes pan left, it can be parsed as "next page"; when the eyes pan right, it can be parsed as "previous page". A small detector sketch follows.
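  • A minimal sketch of how the duration-based gestures above could be detected from per-frame eye-open/closed observations; the class, event names, and the 2 s hold time are our illustrative assumptions, and only the closed-eye gestures are shown:

```python
import time
from typing import Optional

class ClosedEyeGestures:
    """Detect the three closed-eye gestures from the correspondence above."""

    COMMANDS = {"left": "save_or_bookmark", "right": "delete_file",
                "both": "open_search"}

    def __init__(self, hold_seconds: float = 2.0):
        self.hold = hold_seconds
        self._since = {"left": None, "right": None, "both": None}

    def update(self, left_closed: bool, right_closed: bool,
               now: Optional[float] = None) -> Optional[str]:
        now = time.monotonic() if now is None else now
        states = {"left": left_closed and not right_closed,
                  "right": right_closed and not left_closed,
                  "both": left_closed and right_closed}
        for key, active in states.items():
            if not active:
                self._since[key] = None          # gesture interrupted
            elif self._since[key] is None:
                self._since[key] = now           # gesture started
            elif now - self._since[key] >= self.hold:
                self._since[key] = None          # fire once, then reset
                return self.COMMANDS[key]
        return None
```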
  • Embodiment 2:
  • the control method for an intelligent terminal provided in this embodiment obtains the eye position information through an image acquisition device, which may preferably be a camera, and determines the position on the screen corresponding to the user's eyes through a simulation calculation module. As shown in FIG. 2, the method includes the following steps:
  • Step S201: the process starts;
  • Step S202: start the image acquisition device, monitoring the position at which the user's two eyes are aimed as well as the eye actions; the simulation calculation module calculates the on-screen coordinates corresponding to the position the user is fixating and parses the commands defined by the eye actions;
  • in this step, as shown in FIG. 3, the plane of the terminal is set up as a coordinate map, with the center point of the image acquisition device 302 as the origin G(0, 0) of the plane (other points may of course be chosen as the origin), so that every pixel of the terminal's screen 303 has its own coordinates, all contained within the overall coordinate system.
  • 301 in FIG. 3 is the receiver (earpiece) of the mobile phone, and 304 is a touch key.
  • FIG. 4 shows the basic principle of the scheme.
  • while the user's two eyes read the content on the screen, the image acquisition device monitors the changes of the user's eyes in real time and transmits the images to the simulation calculation unit, which calculates in real time the position P at which the user's two eyes focus on the screen and transmits the coordinates of P to the CPU.
  • at the same time, the image acquisition device 302 collects the user's eye actions in real time, for example gazing at a position P(x, y, 0) on the screen for more than 1 s; the simulation calculation unit parses such an eye action into a preset command and feeds it back to the CPU, which executes the corresponding command.
  • the coordinates of the point P at which the user's two eyes focus on the screen are calculated as follows, taking the position of the user's right eye 402 as the reference, that is, using the right eye 402 as the positioning eye.
  • through the infrared ranging module in the image acquisition device, the distance L from the right eye to the image acquisition device and the distance S from the right eye to the focus point P can be calculated.
  • here a point on the edge of the eye is taken as the starting point of the line; the center point of the eye could of course also be used as the starting point.
  • first, before ranging begins, the image acquisition device determines the positions of the user's eyes, as follows: if the right eye is used as the ranging reference, the terminal prompts the user to blink the right eye twice to confirm that it is the right eye; at the same time, a virtual line between the image acquisition device and the right eye is constructed. This line is virtual, introduced for describing the algorithm; when the user's eyes move, it is equivalent to a straight line that keeps changing.
  • the angle β between this virtual line and the y-axis selected for the image acquisition device, and the angle α between L and S, can be calculated by the image acquisition device. All of the algorithms rest on the premise described above: through the image acquisition device, taking the terminal's screen as the horizontal plane, the tendency of the two eyes' focus can be simulated; that is, the device can simulate the direction of the eyes' focal point, and if the line of sight of each eye is regarded as a straight line, the lines of the two eyes must meet at some position on the screen.
  • the following algorithm describes the point at which the two eyes converge on the terminal screen.
  • to calculate the binocular focus position, one of the eyes must be used as the reference, namely the right eye shown above as the reference point.
  • as shown in FIG. 4, the virtual straight line between the position of the right eye 402 and the point P on the terminal screen can be regarded as the person's right line of sight; the angle between it and the virtual line L between the acquisition device and the human eye is α, which can be collected by the image acquisition device, and the angle β between 402 and the y-axis can also be calculated by the image acquisition device. The distance L between 402 and the origin G can be calculated by the infrared ranging function of the image acquisition device, and by the law of cosines the distance a from point P to the origin is a = √(L² + S² − 2LS·cos α). Since S is unknown, it is obtained first: the right eye E2 has spatial coordinates E2(x2, y2, z2), its projection on the terminal screen is E2′, and the angle between the Z-axis and the right line of sight (the positioning line), simulated by the image acquisition device, is γ; hence S = z2/cos γ, so that a = √(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α). The plane coordinates of P are x = a·sin β and y = a·cos β.
  • the coordinates of P are therefore (√(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α)·sin β, √(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α)·cos β, 0); the coordinates of the user's binocular focus can be calculated in real time from this formula and returned to the CPU for processing. A worked numeric check follows.
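  • A worked numeric check of the formula, using made-up measurements rather than values from the patent:

```python
import math

# Made-up measurements (not values from the patent):
L     = 0.30                # eye-to-origin distance, metres
z2    = 0.25                # perpendicular height of the eye (length of M)
gamma = math.radians(20)    # line of sight vs. the perpendicular M
alpha = math.radians(35)    # line of sight vs. the line K
beta  = math.radians(40)    # origin-to-P line vs. the y-axis

S = z2 / math.cos(gamma)                                   # ≈ 0.266 m
a = math.sqrt(L**2 + S**2 - 2 * L * S * math.cos(alpha))   # ≈ 0.173 m
P = (a * math.sin(beta), a * math.cos(beta), 0.0)
print(P)   # ≈ (0.111, 0.133, 0.0) — the on-screen focus point
```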
  • Step S203: determine whether the user's eye action is valid. If the user is merely blinking normally, this step filters it out and the flow automatically returns to step S202; when the user's eye action matches a preset action, step S204 is instructed to proceed with the next operation;
  • Step S204 is the process in which the simulation calculation module parses the eye action: if the user's eye action matches a preset action, the simulation calculation module also calculates the on-screen coordinates of the position on which the user is fixating and sends them together to the CPU for processing;
  • Step S205: the CPU receives the command and the coordinate values sent by the image processing unit and, after parsing is complete, performs a confirmation operation on the icon at those coordinates, which is equivalent to the user completing the operation with a finger.
  • for example, when the user gazes at a point P on the screen that is a link, the image acquisition device transmits the action to the simulation calculation unit for processing; it is parsed as a confirm command and sent to the CPU, and the CPU then links to that address.
  • eye actions thus replace a series of hand touches, freeing the user's hands and greatly improving the user experience. Many similar eye actions can be preset, for example: closing the right eye parses as sliding right; closing the left eye parses as sliding left; rotating both eyes clockwise parses as page turning; and so on.
  • with the above functions, while reading the user can replace two-handed operation with the actions of the two eyes, and can also continually relieve eye fatigue through eye activity, which additionally helps protect the user's eyes.
  • Embodiment 3:
  • the embodiment of the present invention further provides a control device for an intelligent terminal.
  • as shown in FIG. 5, the control device includes an acquiring module, a determining module, a control object module, a receiving module, and an executing module:
  • the acquiring module is configured to acquire the position information of the user's eyes;
  • the determining module is configured to determine, according to the position information, the position on the screen corresponding to the user's eyes;
  • the control object module is configured to determine the object at the position as the control object;
  • the receiving module is configured to receive a control instruction for controlling the control object;
  • the executing module is configured to perform a corresponding control operation on the control object according to the control instruction.
  • the specific implementation of the acquiring module may include a structure such as a camera for determining the user's position information; the acquiring module may also be an existing structure such as a pupil positioning structure, an eyeball tracking structure, or an eye tracker.
  • the specific structures of the determining module, the control object module, and the executing module may include a processor with information processing capability; the processor performs the specified functions of these modules by executing preset instructions.
  • the processor may be an electronic component with information processing functions, or a combination of such components, for example an application processor (AP), a central processing unit (CPU), a microprocessor (MCU), a digital signal processor (DSP), or a programmable array (PLC).
  • the receiving module may be a communication interface inside the device, such as various types of buses; the bus connects the executing module and the control object module and is used to transmit control instructions.
  • optionally, the acquiring module is further configured to acquire the position information of the user's eyes, including acquiring the position information of both of the user's eyes; the determining module is further configured to: determine, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen, and determine a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
  • the preset position on the line connecting the two projection points includes: the midpoint of the line between the two projection points, or the position one third of the way along the line from one of the projection points.
  • optionally, the acquiring module is further configured to acquire the position information of one of the user's two eyes; the determining module is further configured to: take one of the user's two eyes as the positioning eye; acquire the line K between the camera on the front of the smart terminal screen and the positioning eye; determine, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen; and take the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
  • optionally, the determining module is further configured to: set the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determine the perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen; and, in the plane defined by the line K and the perpendicular M, draw, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
  • optionally, the determining module is further configured to: calculate the distance L from the positioning eye to the origin from the position information of the positioning eye; calculate the length of the perpendicular M from the position information of the positioning eye, and obtain the angle γ between the positioning line and the perpendicular M; calculate the length S of the positioning line from the length of the perpendicular M and the angle γ; calculate the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtain the angle β between the straight line connecting P to the origin and the y-axis of the origin; and calculate the position of the intersection point P on the smart terminal screen from the distance a and the angle β.
  • optionally, the receiving module includes an eye receiving submodule and a voice receiving submodule:
  • the eye receiving submodule is configured to acquire the user's eye action, compare the eye action with the preset correspondence between eye actions and control commands, and determine the control command corresponding to the eye action;
  • the voice receiving submodule is configured to acquire the user's voice information, compare the voice information with the preset correspondence between voice information and control commands, and determine the control command corresponding to the voice information.
  • optionally, the eye action includes at least one of: the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball.
  • the preset correspondence between eye actions and control commands includes at least one of the following: when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control command; when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control command in a page with paging and to a "slide down" control command in a page with a slider; when the user's eyeball rotates counterclockwise, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider; when the user's left eye stays closed for more than a second preset time, it corresponds to saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than a third preset time, it corresponds to deleting the file at that position or another file; when both eyes stay closed for more than a fourth preset time, it corresponds to automatically opening the search function or starting the voice search function; when the user's left eye blinks consecutively at more than a first preset frequency, it corresponds to automatically entering the call interface and starting the voice call function; when the user's right eye blinks consecutively at more than a second preset frequency, it corresponds to automatically answering the call and enabling the hands-free function; when the user's eyeball pans left, it corresponds to a "next page" control command in a page with paging and to a "slide down" control command in a page with a slider; when the user's eyeball pans right, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer-executable instructions for performing at least one of the methods described in Embodiments 1 and 2.
  • the storage medium may be a magnetic disk, a DVD, an optical disc, a removable hard disk, a USB flash drive, or the like, and is specifically a non-transitory storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a control method and device for an intelligent terminal, belonging to the field of communications. The control method first acquires the position information of the user's eyes, determines from it the position on the screen corresponding to the user's eyes, and determines the object at that position as the control object; it then receives a control instruction for controlling the control object and performs a corresponding control operation on the control object according to the instruction. With the method of the present application, the control object can be located and selected from the user's eye position information, freeing the hands and avoiding the inaccuracy of selecting a control object by touch, the inconvenience of manual operation in some scenarios (for example, when there is water or stains on the hands), and the cumbersomeness of operating the screen with both hands. Optionally, eye activity can also relieve eye fatigue and enhance the user experience. The present invention also discloses a mobile terminal and a computer storage medium.

Description

Control method and device for intelligent terminal, and computer storage medium
Technical Field
The present invention relates to the field of communications, and in particular to a control method and device for an intelligent terminal and a computer storage medium.
Background
As the screens of smart terminals such as mobile phones grow ever larger, one-handed operation of the screen becomes ever more difficult. On current touch screens, one-handed operation easily causes erroneous operation, for example inaccurate touches, and sometimes water or stains on the hands make touch-screen operation inconvenient; two-handed operation, in turn, is increasingly cumbersome and worsens the user experience. How to free the hands and facilitate the user's operation of a smart terminal by other means has become a problem in urgent need of a solution.
Summary of the Invention
Embodiments of the present invention are intended to provide a control method and device for an intelligent terminal, and a computer storage medium, which can solve the problems that existing hand operation of a smart terminal is inaccurate, inconvenient, and cumbersome.
A first aspect of the embodiments of the present invention provides a control method for an intelligent terminal, including:
acquiring position information of a user's eyes;
determining, according to the position information, a position on the screen corresponding to the user's eyes;
determining an object at the position as a control object;
receiving a control instruction for controlling the control object;
performing a corresponding control operation on the control object according to the control instruction.
In an embodiment of the present invention, acquiring the position information of the user's eyes includes acquiring the position information of both of the user's eyes; determining, according to the position information, the position on the screen corresponding to the user's eyes includes:
determining, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen;
determining a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
In an embodiment of the present invention, the preset position on the line connecting the two projection points includes: the midpoint of the line connecting the two projection points, or the position one third of the way along that line from one of the projection points.
In an embodiment of the present invention, acquiring the position information of the user's eyes includes: taking one of the user's two eyes as a positioning eye, and acquiring the position information of the positioning eye;
determining, according to the position information, the position on the screen corresponding to the user's eyes includes:
acquiring a line K between a camera on the front of the smart terminal screen and the positioning eye;
determining, according to the line K, a positioning line projected by the positioning eye onto the smart terminal screen;
taking the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
In an embodiment of the present invention, determining, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen includes:
setting the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye;
determining a perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen;
in the plane defined by the line K and the perpendicular M, drawing, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
In an embodiment of the present invention, calculating the intersection point P of the positioning line and the smart terminal screen includes:
calculating the distance L of the line K from the position information of the positioning eye and the origin;
calculating the length of the perpendicular M from the position information of the positioning eye, and obtaining the angle γ between the positioning line and the perpendicular M; calculating the length S of the positioning line from the length of the perpendicular M and the angle γ;
calculating the distance a from the intersection point P to the origin from the distance L of the line K, the angle α, and the length S of the positioning line;
obtaining the angle β between the straight line connecting the intersection point P to the origin and the y-axis of the origin;
calculating the position of the intersection point P from the distance a and the angle β.
In an embodiment of the present invention, receiving the control instruction for controlling the control object includes:
acquiring the user's eye action, comparing the eye action with a preset correspondence between eye actions and control instructions, and determining the control instruction corresponding to the eye action;
acquiring the user's voice information, comparing the voice information with a preset correspondence between voice information and control instructions, and determining the control instruction corresponding to the voice information.
In an embodiment of the present invention, the eye action includes at least one of the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball; the preset correspondence between eye actions and control instructions includes at least one of the following:
when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control instruction;
when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control instruction in a page with paging, and to a "slide down" control instruction in a page with a slider;
when the user's eyeball rotates counterclockwise, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider;
when the user's left eye stays closed for more than a second preset time, the eye action corresponds to saving the picture at that position or bookmarking the link;
when the user's right eye stays closed for more than a third preset time, the eye action corresponds to deleting the file at that position or another file;
when both eyes stay closed for more than a fourth preset time, the eye action corresponds to automatically opening the search function, or starting the voice search function;
when the user's left eye blinks consecutively at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function;
when the user's right eye blinks consecutively at more than a second preset frequency, the eye action corresponds to automatically answering the call and enabling the hands-free function;
when the user's eyeball pans left, the eye action corresponds to a "next page" control instruction in a page with paging, and to a "slide down" control instruction in a page with a slider;
when the user's eyeball pans right, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider.
A second aspect of the embodiments of the present invention further provides a control device for an intelligent terminal, including an acquiring module, a determining module, a control object module, a receiving module, and an executing module:
the acquiring module is configured to acquire position information of a user's eyes;
the determining module is configured to determine, according to the position information, a position on the screen corresponding to the user's eyes;
the control object module is configured to determine an object at the position as a control object;
the receiving module is configured to receive a control instruction for controlling the control object;
the executing module is configured to perform a corresponding control operation on the control object according to the control instruction.
In an embodiment of the present invention, the acquiring module is further configured to acquire the position information of the user's eyes, including acquiring the position information of both of the user's eyes; the determining module is further configured to:
determine, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen;
determine a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
In an embodiment of the present invention, the acquiring module is further configured to take one of the user's two eyes as a positioning eye and acquire the position information of the positioning eye; the determining module is further configured to:
acquire a line K between the camera on the front of the smart terminal screen and the positioning eye;
determine, according to the line K, a positioning line projected by the positioning eye onto the smart terminal screen;
take the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
In an embodiment of the present invention, the determining module is further configured to:
set the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye;
determine a perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen;
in the plane defined by the line K and the perpendicular M, draw, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
In an embodiment of the present invention, the determining module is further configured to:
calculate the distance L of the line K from the position information of the positioning eye and the origin;
calculate the length of the perpendicular M from the position information of the positioning eye, and obtain the angle γ between the positioning line and the perpendicular M; calculate the length S of the positioning line from the length of the perpendicular M and the angle γ;
calculate the distance a from the intersection point P to the origin from the distance L of the line K, the angle α, and the length S of the positioning line;
obtain the angle β between the straight line connecting the intersection point P to the origin and the y-axis of the origin;
calculate the position of the intersection point P from the distance a and the angle β.
In an embodiment of the present invention, the receiving module includes an eye receiving submodule and a voice receiving submodule:
the eye receiving submodule is configured to acquire the user's eye action, compare the eye action with a preset correspondence between eye actions and control instructions, and determine the control instruction corresponding to the eye action;
the voice receiving submodule is configured to acquire the user's voice information, compare the voice information with a preset correspondence between voice information and control instructions, and determine the control instruction corresponding to the voice information.
In an embodiment of the present invention, the eye action includes at least one of the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball; the preset correspondence between eye actions and control instructions includes at least one of the following:
when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control instruction;
when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control instruction in a page with paging, and to a "slide down" control instruction in a page with a slider;
when the user's eyeball rotates counterclockwise, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider;
when the user's left eye stays closed for more than a second preset time, the eye action corresponds to saving the picture at that position or bookmarking the link;
when the user's right eye stays closed for more than a third preset time, the eye action corresponds to deleting the file at that position or another file;
when both eyes stay closed for more than a fourth preset time, the eye action corresponds to automatically opening the search function, or starting the voice search function;
when the user's left eye blinks consecutively at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function;
when the user's right eye blinks consecutively at more than a second preset frequency, the eye action corresponds to automatically answering the call and enabling the hands-free function;
when the user's eyeball pans left, the eye action corresponds to a "next page" control instruction in a page with paging, and to a "slide down" control instruction in a page with a slider;
when the user's eyeball pans right, the eye action corresponds to "previous page" in a page with paging, and to "slide up" in a page with a slider.
A third aspect of the embodiments of the present invention provides a computer storage medium, the computer storage medium storing computer-executable instructions for performing at least one of the methods described in the first aspect of the embodiments of the present invention.
The beneficial effects of the embodiments of the present invention are as follows:
The control method and device for an intelligent terminal and the computer storage medium provided by the embodiments of the present invention first acquire the position information of the user's eyes, determine from that information the position on the screen corresponding to the user's eyes, and determine the object at that position as the control object; a control instruction for controlling the control object is then received, and a corresponding control operation is performed on the control object according to the instruction. With the method of the present application, the control object can be located and selected from the user's eye position information, freeing the hands and avoiding the inaccuracy of selecting a control object by touch, the inconvenience of manual operation in some scenarios (for example, when there is water or stains on the hands), and the cumbersomeness of operating the screen with both hands. Optionally, eye activity can also relieve eye fatigue and enhance the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the control method for an intelligent terminal provided in Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of the control method for an intelligent terminal provided in Embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of the overall structure of the intelligent terminal involved in the control method provided in Embodiment 2 of the present invention;
FIG. 4 is a schematic perspective view of the binocular positioning principle provided in Embodiment 2 of the present invention;
FIG. 5 is a first schematic structural diagram of the control device for an intelligent terminal provided in Embodiment 3 of the present invention;
FIG. 6 is a second schematic structural diagram of the control device for an intelligent terminal provided in Embodiment 3 of the present invention.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are intended only to illustrate and explain the present invention and not to limit it.
This application is based on a simulated binocular positioning algorithm. A specific image acquisition device, for example a camera, collects the user's eye actions and the way the user's eyes focus on the smart terminal, and a simulation algorithm can calculate the position on the screen corresponding to the user's eyes. If the image acquisition device is integrated into the terminal, then while the user is using the terminal the device collects the position information of the user's eyes and calculates the on-screen position at which the user's eyes are aimed, achieving the effect of simulated finger positioning; a control instruction from the user is then received and the operation is carried out, the control instruction being determined from eye actions, thereby replacing finger operation and freeing the hands. To enable those skilled in the art to better understand the technical solution of the present invention, the present invention is optionally described in detail below with reference to the drawings and specific implementations.
Embodiment 1:
The control method for an intelligent terminal of this embodiment, as shown in FIG. 1, includes the following steps:
Step S101: acquire the user's eye position information;
In this step, the smart terminal may acquire the user's eye position information by acquiring image data of the user's eyes through an image acquisition device and parsing the position information from the image data, or may obtain the position information directly through a sensing device. Specifically, the position information of the user's eyes includes the distance from the user's eyes to the smart terminal screen, the coordinates of the eyes, the angle between the eyes and the smart terminal, and the like. The smart terminal here may include a mobile phone, a tablet, a computer, and so on. The eye position information acquired here may be the position information of both of the user's eyes or of one of the user's two eyes, as determined by the specific situation.
Step S102: determine, according to the position information, the position on the screen corresponding to the user's eyes;
In this step, a person's two eyes can serve to locate the position of an object: if the line of sight of each eye is regarded as a straight line, the two lines of sight must meet at some position on the screen. Once the position information of the user's eyes is known, the position at which the user's eyes are aimed on the smart terminal screen can be determined from the relationship between the eye positions and the smart terminal, that is, the position the user is staring at; this position is taken as the position on the screen corresponding to the user's eyes.
For example, when we read a novel, the position at which our eyes are aimed on the screen generally changes with the position of the text. In this step, specifically, the line of sight of the user's eyes can be obtained directly through the image acquisition device of the smart terminal or at least one camera of the smart device, and the position on the smart terminal screen can be located from that line of sight; alternatively, the eye position information can be obtained and the on-screen gaze position derived from it. Of course, other ways of obtaining the position at which the user's gaze lands on the smart terminal screen can also be adopted. It is worth noting that the line of sight here refers to a virtual straight line from the user's eye to the smart terminal screen.
Step S103: determine the object at the position as the control object;
In this step, after the position on the smart terminal screen corresponding to the user's eyes is determined, it is judged whether an object exists at that position. The object may be a function icon, a picture, a file, and so on; anything that can be operated on should be understood as qualifying. If a function icon is present, it is judged whether the icon is to be operated. For example, if the position at which the two eyes are aimed holds a "next page" function icon, "next page" is determined as the control object; to enter the next page, the next page is linked. This should be understood as equivalent to touching the function icon or link once by hand. For a picture, the operation may be enlarging or shrinking it, as well as deleting or saving it; for a file, it may be opening, copying, deleting, and so on.
Step S104: receive a control instruction from the user for controlling the control object;
In this step, the manner of triggering the control instruction may include a voice control mode, a manual control mode, and an eye control mode. It should be understood that any existing way of triggering control instructions can be used to trigger the control instructions of the present application.
Step S105: perform a corresponding control operation on the control object according to the control instruction.
In this step, a corresponding control operation is performed on the control object of step S103 specifically according to the control instruction triggered in step S104. For example, "next page" was selected in step S103, and a control instruction confirming execution of "next page" was triggered in step S104; the user can then say "next page" to link to and open the next page. Some mobile phones are also provided with up and down hardware buttons, through which the next page can likewise be linked and opened.
Optionally, in step S102 above, acquiring the position information of the user's eyes includes acquiring the position information of both of the user's eyes; determining the on-screen position corresponding to the user's eyes from the binocular position information may then be: determining two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen, and determining a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes. The preset position on the line connecting the two projection points includes: the midpoint of the line between the two projection points, or the position one third of the way along the line from one of the projection points. Of course, other preset positions are also possible and can be set according to the specific situation.
Optionally, in step S102 above, acquiring the position information of the user's eyes further includes acquiring the position information of one of the user's two eyes; determining the on-screen position from the position information may then be: taking one of the user's two eyes as the positioning eye; acquiring the line K between the camera on the front of the smart terminal screen and the positioning eye; determining, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen; and taking the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye. Determining the positioning line according to the line K may specifically be: setting the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determining the perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen; and, in the plane defined by the line K and the perpendicular M, drawing, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α, where the value of α is smaller than the angle between the line K and the perpendicular M. Specifically, calculating the intersection point P of the positioning line and the smart terminal screen may be: calculating the distance L from the positioning eye to the origin from the position information of the positioning eye; calculating the length of the perpendicular M from the position information of the positioning eye, and obtaining the angle γ between the positioning line and the perpendicular M; calculating the length S of the positioning line from the length of the perpendicular M and the angle γ; calculating the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtaining the angle β between the straight line connecting P to the origin and the y-axis of the origin; and calculating the position of the intersection point P on the smart terminal screen from the distance a and the angle β. Of course, the specific value of the angle α can be set according to empirical values, and the method is not limited to determining the positioning line and calculating the position of the intersection point P in the manner above; other manners can also be implemented.
Optionally, in step S104 above, the preferred way of triggering a control instruction may include acquiring the user's eye action, comparing the eye action with the preset correspondence between eye actions and control instructions, determining the control instruction corresponding to the eye action, and triggering it. Specifically, this can be understood as obtaining the eye action from changes of eye position while the binocular position information is being acquired, so that the eye action is obtained quickly, compared with the preset correspondence between eye actions and control instructions, the corresponding control instruction determined, and the instruction triggered. The eye action can also be monitored through a sensing device, the specifically monitored eye action compared with the preset correspondence between eye actions and control instructions, the corresponding control instruction determined, and the instruction triggered. The user's voice information can also be acquired, compared with the preset correspondence between voice information and control instructions, the control instruction corresponding to the voice information determined, and that instruction triggered. The specific implementation here can adopt existing voice-controlled operation modes.
Optionally, the eye action in this embodiment may include at least one of the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball. It should be understood that the matching of eye actions to control instructions is set according to the user's own preferences. Optionally, when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control instruction; when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control instruction in a page with paging and to a "slide down" control instruction in a page with a slider; when the user's eyeball rotates counterclockwise, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider; when the user's left eye stays closed for more than a second preset time, it corresponds to saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than a third preset time, it corresponds to deleting the file at that position or another file; when both eyes stay closed for more than a fourth preset time, it corresponds to automatically opening the search function or starting the voice search function; when the user's left eye blinks consecutively at more than a first preset frequency, it corresponds to automatically entering the call interface and starting the voice call function; when the user's right eye blinks consecutively at more than a second preset frequency, it corresponds to automatically answering the call and enabling the hands-free function; when the user's eyeball pans left, it corresponds to a "next page" control instruction in a page with paging and to a "slide down" control instruction in a page with a slider; when the user's eyeball pans right, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider. It is worth noting that the first preset time, second preset time, third preset time, fourth preset time, first preset frequency, and second preset frequency can all be set according to the user's preference and are not limited to the specific values in the embodiments of this application; the correspondence between eye actions and control instructions can likewise be set according to the user's preference. For example, when the user gazes at a link or icon on the screen for more than 2 seconds, the graphics processing unit can parse this action into a "confirm" instruction and send the instruction, together with the on-screen coordinates of the eye focus, to the terminal's processor; after accepting the instruction, the processor operates the icon or link at those coordinates, equivalent to touching the icon or link once by hand. Many such eye actions can be predefined; they can be collected by the graphics processing unit, and the corresponding control instruction is then input to complete the parsing of the command. Preferably, the eye actions and the parsed commands cover the following range: when the user gazes at a link or icon on the screen for more than 2 seconds, the action is parsed as a "confirm" instruction; when the user's eyes rotate clockwise, it can be parsed as "next page" in a page with paging and as sliding down in a page with a slider; when the user's eyes rotate counterclockwise, it is parsed as "previous page" in a page with paging and as "slide up" in a page with a slider; when the user's left eye stays closed for more than 2 seconds, it is parsed as saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than 2 seconds, the file at that position or another file is deleted; when both eyes stay closed for more than 2 seconds, the search function is automatically opened or the voice search function is started; when the user's left eye blinks more than 2 times consecutively, the call interface is automatically entered and the voice call function is started; when the user's right eye blinks more than 2 times consecutively, the call is automatically answered and the hands-free function is enabled; when the eyes pan left, it can be parsed as "next page"; when the eyes pan right, it can be parsed as "previous page".
Embodiment 2:
The control method for an intelligent terminal provided in this embodiment obtains the eye position information through an image acquisition device, which may preferably be a camera, and determines the position on the screen corresponding to the user's eyes through a simulation calculation module. As shown in FIG. 2, it includes the following steps:
Step S201: the process starts;
Step S202: start the image acquisition device to monitor the position at which the user's two eyes are aimed as well as the eye actions; the simulation calculation module calculates the on-screen coordinates corresponding to the position the user is fixating and parses the instructions defined by the eye actions;
In this step, as shown in FIG. 3, the plane of the terminal is set up as a coordinate map, with the center point of the image acquisition device 302 as the origin G(0, 0) of the plane (other points may of course be chosen as the origin), so that every pixel of the terminal's screen 303 has its own coordinates, all contained within the overall coordinate system. In FIG. 3, 301 is the receiver (earpiece) of the mobile phone and 304 is a touch key.
FIG. 4 shows the basic principle of this scheme. While the user's two eyes read the content on the screen, the image acquisition device monitors the changes of the user's eyes in real time and transmits the images to the simulation calculation unit, which calculates in real time the position P at which the user's two eyes focus on the screen and transmits the coordinates of P to the CPU. At the same time, the image acquisition device 302 collects the user's eye actions in real time; for example, if the user gazes at a position P(x, y, 0) on the screen for more than 1 s, the simulation calculation unit parses this eye action into a preset instruction and feeds it back to the CPU, which executes the corresponding instruction.
The coordinates of the point P at which the user's two eyes focus on the screen are calculated as follows, taking the position of the user's right eye 402 as the reference, that is, using the right eye 402 as the positioning eye. Through the infrared ranging module in the image acquisition device, the distance L from the right eye to the image acquisition device and the distance S from the right eye to the focus point P can be calculated. Here a point on the edge of the eye is taken as the starting point of the line; the center point of the eye could of course also be used. First, before ranging begins, the image acquisition device determines the positions of the user's two eyes, as follows: if the right eye is used as the ranging reference, the terminal prompts the user to blink the right eye twice to confirm that it is the right eye, and at the same time a virtual line between the image acquisition device and the right eye is constructed; this line is virtual, introduced for describing the algorithm, and when the user's eyes move it is equivalent to a straight line that keeps changing. The angle β between this virtual line and the y-axis selected for the image acquisition device, and the angle α between L and S, can be computed through simulation by the image acquisition device. All the algorithms rest on the premise stated above: through the image acquisition device, on the basis of the terminal's screen as the horizontal plane, the tendency of the two eyes' focus can be simulated; that is, the image acquisition device can simulate the direction of the eyes' focal point, and if the line of sight of each eye is regarded as a straight line, the lines of the two eyes must meet at some position on the screen. The following algorithm describes the point at which the two eyes converge on the terminal screen. To calculate the binocular focus position, one of the eyes must be used as the reference, namely the right eye as stated above. As shown in FIG. 4, the virtual straight line between the position of the user's right eye 402 and the point P on the terminal screen can be regarded as the person's right line of sight; the angle between it and the virtual line L between the acquisition device and the human eye is α, which can be collected by the image acquisition device, and the angle β between 402 and the y-axis can also be calculated by the image acquisition device. Through the infrared ranging function of the image acquisition device, the distance L between 402 and the origin G can be calculated, and through the angle α the distance from point P to the origin can be calculated:
a = √(L² + S² − 2LS·cos α)
Since S is unknown, its value must be obtained first. The person's right eye E2 has spatial coordinates E2(x2, y2, z2); as shown in FIG. 4, let the projection of the right eye E2 on the terminal screen be E2′. Through the image acquisition device, the angle between the Z-axis and the right line of sight (that is, the positioning line) can be simulated as γ. The distance S of the right line of sight can therefore be calculated from z2 and γ:
S = z2/cos γ
Hence a = √(L² + S² − 2LS·cos α) = √(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α)
From this, the plane coordinates of point P are:
x = a·sin β
y = a·cos β
It follows that the coordinates of P are (√(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α)·sin β, √(L² + (z2/cos γ)² − 2L·(z2/cos γ)·cos α)·cos β, 0); the coordinates on which the user's two eyes focus in real time can be calculated from this formula and returned to the CPU for processing.
Step S203: determine whether the user's eye action is valid. If the user is merely blinking normally, this step filters it out and the flow automatically returns to step S202; when the user's eye action matches a preset action, step S204 is instructed to proceed with the next operation;
Step S204 is the process in which the simulation calculation module parses the eye action: if the user's eye action matches a preset action, the simulation calculation module also calculates the on-screen coordinates of the position on which the user is fixating and sends them together to the CPU for processing;
Step S205: the CPU receives the instruction and coordinate values sent by the image processing unit and, after parsing is complete, performs a confirmation operation on the icon at those coordinates, which is equivalent to the user completing the operation with a finger.
For example, when the user gazes at a certain point P on the screen and P is a link, the image acquisition device transmits this action to the simulation calculation unit for processing; it is parsed as a confirm instruction and sent to the CPU, and the CPU then links to that address. Eye actions thus replace the series of actions of touching by hand, freeing the user's hands and greatly improving the user experience. Many similar eye actions can be preset: for example, closing the right eye is parsed as sliding right; closing the left eye is parsed as sliding left; rotating both eyes clockwise is parsed as page turning; and so on. With the above functions, while reading the user can replace two-handed operation with the actions of the two eyes, and can also continually relieve eye fatigue through eye activity, which also serves to protect the user's eyes.
Embodiment 3:
An embodiment of the present invention further provides a control device for an intelligent terminal. As shown in FIG. 5, the control device includes an acquiring module, a determining module, a control object module, a receiving module, and an executing module: the acquiring module is configured to acquire the position information of the user's eyes; the determining module is configured to determine, according to the position information, the position on the screen corresponding to the user's eyes; the control object module is configured to determine the object at that position as the control object; the receiving module is configured to receive a control instruction for controlling the control object; and the executing module is configured to perform a corresponding control operation on the control object according to the control instruction.
The specific implementation of the acquiring module may include a structure such as a camera for determining the user's position information; the acquiring module may also be an existing structure such as a pupil positioning structure, an eyeball tracking structure, or an eye tracker.
The specific structures of the determining module, the control object module, and the executing module may include a processor with information processing capability; the processor performs the specified functions of the above modules by executing preset instructions. The processor may be an electronic component with information processing functions, or a combination of such components, such as an application processor (AP), a central processing unit (CPU), a microprocessor (MCU), a digital signal processor (DSP), or a programmable array (PLC).
The receiving module may be a communication interface inside the device, such as various types of buses; the bus connects the executing module and the control object module and is used to transmit control instructions.
Optionally, the acquiring module is further configured to acquire the position information of the user's eyes, including acquiring the position information of both of the user's eyes; the determining module is further configured to: determine, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen, and determine a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes. The preset position on the line connecting the two projection points includes: the midpoint of the line between the two projection points, or the position one third of the way along the line from one of the projection points.
Optionally, the acquiring module is further configured to acquire the position information of one of the user's two eyes; the determining module is further configured to: take one of the user's two eyes as the positioning eye; acquire the line K between the camera on the front of the smart terminal screen and the positioning eye; determine, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen; and take the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
Optionally, the determining module is further configured to: set the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye; determine the perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen; and, in the plane defined by the line K and the perpendicular M, draw, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
Optionally, the determining module is further configured to: calculate the distance L from the positioning eye to the origin from the position information of the positioning eye; calculate the length of the perpendicular M from the position information of the positioning eye, and obtain the angle γ between the positioning line and the perpendicular M; calculate the length S of the positioning line from the length of the perpendicular M and the angle γ; calculate the distance a from the intersection point P to the origin from the distance L, the angle α, and the length S; obtain the angle β between the straight line connecting P to the origin and the y-axis of the origin; and calculate the position of the intersection point P on the smart terminal screen from the distance a and the angle β.
An embodiment of the present invention further provides another control device for an intelligent terminal. As shown in FIG. 6, the receiving module includes an eye receiving submodule and a voice receiving submodule: the eye receiving submodule is configured to acquire the user's eye action, compare the eye action with the preset correspondence between eye actions and control instructions, and determine the control instruction corresponding to the eye action; the voice receiving submodule is configured to acquire the user's voice information, compare the voice information with the preset correspondence between voice information and control instructions, and determine the control instruction corresponding to the voice information.
Optionally, the eye action includes at least one of the duration of gazing at the screen, rotation of the eyeball, the duration of eye closure, the frequency of blinking, and translation of the eyeball.
Optionally, the preset correspondence between eye actions and control instructions includes at least one of the following: when the user gazes at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control instruction; when the user's eyeball rotates clockwise, the eye action corresponds to a "next page" control instruction in a page with paging and to a "slide down" control instruction in a page with a slider; when the user's eyeball rotates counterclockwise, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider; when the user's left eye stays closed for more than a second preset time, it corresponds to saving the picture at that position or bookmarking the link; when the user's right eye stays closed for more than a third preset time, it corresponds to deleting the file at that position or another file; when both eyes stay closed for more than a fourth preset time, it corresponds to automatically opening the search function or starting the voice search function; when the user's left eye blinks consecutively at more than a first preset frequency, it corresponds to automatically entering the call interface and starting the voice call function; when the user's right eye blinks consecutively at more than a second preset frequency, it corresponds to automatically answering the call and enabling the hands-free function; when the user's eyeball pans left, it corresponds to a "next page" control instruction in a page with paging and to a "slide down" control instruction in a page with a slider; when the user's eyeball pans right, it corresponds to "previous page" in a page with paging and to "slide up" in a page with a slider.
An embodiment of the present invention further provides a computer storage medium, the computer storage medium storing computer-executable instructions for performing at least one of the methods described in Embodiments 1 and 2.
The storage medium may be a magnetic disk, a DVD, an optical disc, a removable hard disk, a USB flash drive, or the like, and is specifically a non-transitory storage medium.
Those of ordinary skill in the art will understand that all or some of the steps of the above methods can be completed by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc. Optionally, all or some of the steps of the above embodiments can also be implemented with one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware or in the form of a software functional module. The present invention is not limited to any particular combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification made in accordance with the principles of the present invention should be understood as falling within the scope of protection of the present invention.

Claims (16)

  1. A control method for an intelligent terminal, the method comprising:
    acquiring position information of a user's eyes;
    determining, according to the position information, a position on the screen corresponding to the user's eyes;
    determining an object at the position as a control object;
    receiving a control instruction for controlling the control object;
    performing a corresponding control operation on the control object according to the control instruction.
  2. The control method for an intelligent terminal according to claim 1, wherein acquiring the position information of the user's eyes comprises acquiring the position information of both of the user's eyes;
    determining, according to the position information, the position on the screen corresponding to the user's eyes comprises:
    determining, according to the binocular position information, two projection points at which the two eyes are vertically projected onto the plane of the smart terminal screen;
    determining a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
  3. The control method for an intelligent terminal according to claim 2, wherein
    the preset position on the line connecting the two projection points comprises: the midpoint of the line connecting the two projection points, or the position one third of the way along that line from one of the projection points.
  4. The control method for an intelligent terminal according to claim 1, wherein
    acquiring the position information of the user's eyes comprises: taking one of the user's two eyes as a positioning eye, and acquiring the position information of the positioning eye;
    determining, according to the position information, the position on the screen corresponding to the user's eyes comprises:
    acquiring a line K between a camera on the front of the smart terminal screen and the positioning eye;
    determining, according to the line K, a positioning line projected by the positioning eye onto the smart terminal screen;
    taking the position of the intersection point P of the positioning line and the smart terminal screen as the position on the screen corresponding to the positioning eye.
  5. The control method for an intelligent terminal according to claim 4, wherein determining, according to the line K, the positioning line projected by the positioning eye onto the smart terminal screen comprises:
    setting the center point of the camera as the origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye;
    determining a perpendicular M from the positioning eye perpendicular to the plane of the smart terminal screen;
    in the plane defined by the line K and the perpendicular M, drawing, starting from the center point of the positioning eye, a straight line N between the line K and the perpendicular M as the positioning line, the angle between this line and the line K being a preset angle α.
  6. 如权利要求5所述智能终端的控制方法,其中,计算所述定位线与所述智能终端屏幕的交点P包括:
    根据所述定位眼睛的位置信息和所述原点计算得到所述连线K的距离L;
    根据所述定位眼睛的位置信息计算得到所述垂线M的长度,获取所述定位线与所述垂线M之间的夹角γ;根据所述垂线M的长度和所述定位线与所述夹角γ计算得到所述定位线的长度S;
    根据所述连线K的距离L、所述夹角α和所述定位线的长度S计算得到所述交点P到所述原点的距离a;
    获取所述交点P与所述原点连接的直线与所述原点y轴的夹角β;
    根据所述距离a和所述夹角β计算得到所述交点P的位置。
  7. 如权利要求1至6任一项所述智能终端的控制方法,其中,所述接收用于控制所述控制对象的控制指令包括:
    获取所述用户的眼睛动作,将所述眼睛动作与预先设置眼睛动作与控制指令对应关系进行比较,确定所述眼睛动作对应的控制指令;
    获取所述用户的语音信息,将所述语音信息与预先设置语音信息与控制指令对应关系进行比较,确定所述语音信息对应的控制指令。
  8. 如权利要求7所述智能终端的控制方法,其中,所述眼睛动作包括眼睛凝视屏幕的时间、眼珠的转动、眼睛闭眼的时间、眨眼的频率和眼珠的平移中的至少一种;所述预先设置眼睛动作与控制指令对应关系包括以下至少一种:
    当用户眼睛凝视屏幕的某个对象超过第一预设时间时,该眼睛动作对应为“确定”控制指令;
    当用户眼珠顺时针旋转时,该眼睛动作在有翻页的页面内对应为“下一页”控制指令,在有滑块的页面内对应为“向下滑动”控制指令;
    当用户眼珠逆时针转动时,该眼睛动作在有翻页的页面内对应为“上一页”,在有滑块的页面内对应为“向上滑动”;
    当用户左眼闭眼超过第二预设时间时,该眼睛动作对应为保存该位置的图片或者链接收藏;
    当用户右眼闭眼超过第三预设时间时,该眼睛动作对应为删除该位置的文件或者其他文件;
    当双眼闭眼超过第四预设时间时,该眼睛动作对应为自动打开搜索功能,或者启动语音搜索功能;
    当用户左眼连续眨眼超过第一预设频率时,该眼睛动作对应为自动进入呼叫界面,并启动语音呼叫功能;
    当用户右眼连续眨眼超过第二预设频率时,该眼睛动作对应为自动接听电话,并开启免提功能;
    当用户眼珠向左平移时,该眼睛动作在有翻页的页面内对应为“下一页”控制指令,在有滑块的页面内对应为“向下滑动”控制指令;
    当用户眼珠向右平移时,该眼睛动作在有翻页的页面内对应为“上一页”,在有滑块的页面内对应为“向上滑动”。
  9. A control device for an intelligent terminal, comprising an acquisition module, a determining module, a control object module, a receiving module and an execution module, wherein:
    the acquisition module is configured to acquire position information of a user's eyes;
    the determining module is configured to determine, according to the position information, a position on a screen corresponding to the user's eyes;
    the control object module is configured to determine an object at the position as a control object;
    the receiving module is configured to receive a control instruction for controlling the control object;
    the execution module is configured to perform a corresponding control operation on the control object according to the control instruction.
  10. The control device for an intelligent terminal according to claim 9, wherein
    the acquisition module is further configured such that acquiring the position information of the user's eyes comprises acquiring position information of both of the user's eyes; the determining module is further configured to:
    determine, according to the position information of both eyes, two projection points at which the two eyes are perpendicularly projected onto the plane of the intelligent terminal screen;
    determine a preset position on the line connecting the two projection points as the position on the screen corresponding to the user's eyes.
  11. The control device for an intelligent terminal according to claim 9, wherein
    the acquisition module is further configured to take one of the user's two eyes as a positioning eye and acquire position information of the positioning eye;
    the determining module is further configured to:
    acquire a line K between a camera on the front of the intelligent terminal screen and the positioning eye;
    determine, according to the line K, a positioning line projected from the positioning eye onto the intelligent terminal screen;
    take the position of the intersection point P of the positioning line and the intelligent terminal screen as the position on the screen corresponding to the positioning eye.
  12. The control device for an intelligent terminal according to claim 11, wherein the determining module is further configured to:
    set the center point of the camera as an origin, the line between the camera and the positioning eye being the line K between the center point of the camera and the center point of the positioning eye;
    determine a perpendicular line M from the positioning eye to the plane of the intelligent terminal screen;
    in the plane determined by the line K and the perpendicular line M, draw a straight line N between the line K and the perpendicular line M, starting from the center point of the positioning eye, as the positioning line, the angle between this line and the line K being a preset angle α.
  13. The control device for an intelligent terminal according to claim 12, wherein the determining module is further configured to:
    calculate a distance L of the line K according to the position information of the positioning eye and the origin;
    calculate the length of the perpendicular line M according to the position information of the positioning eye, and acquire an angle γ between the positioning line and the perpendicular line M; calculate a length S of the positioning line according to the length of the perpendicular line M and the angle γ;
    calculate a distance a from the intersection point P to the origin according to the distance L of the line K, the angle α and the length S of the positioning line;
    acquire an angle β between the straight line connecting the intersection point P with the origin and the y-axis at the origin;
    calculate the position of the intersection point P according to the distance a and the angle β.
  14. The control device for an intelligent terminal according to any one of claims 9 to 13, wherein the receiving module comprises an eye receiving submodule and a voice receiving submodule:
    the eye receiving submodule is configured to acquire an eye action of the user, compare the eye action with a preset correspondence between eye actions and control instructions, and determine the control instruction corresponding to the eye action;
    the voice receiving submodule is configured to acquire voice information of the user, compare the voice information with a preset correspondence between voice information and control instructions, and determine the control instruction corresponding to the voice information.
  15. The control device for an intelligent terminal according to claim 14, wherein the eye action comprises at least one of: the time for which the eyes gaze at the screen, rotation of the eyeballs, the time for which the eyes are closed, the frequency of blinking, and translation of the eyeballs; the preset correspondence between eye actions and control instructions comprises at least one of the following:
    when the user's eyes gaze at an object on the screen for more than a first preset time, the eye action corresponds to a "confirm" control instruction;
    when the user's eyeballs rotate clockwise, the eye action corresponds to a "next page" control instruction on a page with page turning, and to a "slide down" control instruction on a page with a slider;
    when the user's eyeballs rotate counterclockwise, the eye action corresponds to "previous page" on a page with page turning, and to "slide up" on a page with a slider;
    when the user's left eye is closed for more than a second preset time, the eye action corresponds to saving the picture at that position or bookmarking the link;
    when the user's right eye is closed for more than a third preset time, the eye action corresponds to deleting the file at that position or another file;
    when both eyes are closed for more than a fourth preset time, the eye action corresponds to automatically opening the search function, or starting the voice search function;
    when the user's left eye blinks continuously at more than a first preset frequency, the eye action corresponds to automatically entering the call interface and starting the voice call function;
    when the user's right eye blinks continuously at more than a second preset frequency, the eye action corresponds to automatically answering the call and turning on the hands-free function;
    when the user's eyeballs translate to the left, the eye action corresponds to a "next page" control instruction on a page with page turning, and to a "slide down" control instruction on a page with a slider;
    when the user's eyeballs translate to the right, the eye action corresponds to "previous page" on a page with page turning, and to "slide up" on a page with a slider.
  16. A computer storage medium having computer-executable instructions stored therein, the computer-executable instructions being used to perform at least one of the methods according to claims 1 to 8.
PCT/CN2014/094137 2014-10-27 2014-12-17 Control method and device for intelligent terminal, and computer storage medium WO2016065706A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410583379.3 2014-10-27
CN201410583379.3A CN105630135A (zh) 2014-10-27 2014-10-27 Control method and device for intelligent terminal

Publications (1)

Publication Number Publication Date
WO2016065706A1 true WO2016065706A1 (zh) 2016-05-06

Family

ID=55856472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/094137 WO2016065706A1 (zh) 2014-10-27 2014-12-17 Control method and device for intelligent terminal, and computer storage medium

Country Status (2)

Country Link
CN (1) CN105630135A (zh)
WO (1) WO2016065706A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106354777B (zh) * 2016-08-22 2019-09-17 Guangdong Genius Technology Co., Ltd. Question search method and device applied to an electronic terminal
CN106610719A (zh) * 2016-11-25 2017-05-03 Qiku Internet Network Technology (Shenzhen) Co., Ltd. Shortcut operation method, device and terminal equipment
CN106527737A (zh) * 2016-12-06 2017-03-22 Gree Electric Appliances, Inc. of Zhuhai Control method and control device for intelligent terminal, and intelligent terminal
CN106599858B (zh) * 2016-12-20 2020-06-02 Beijing Xiaomi Mobile Software Co., Ltd. Fingerprint identification method and device, and electronic equipment
CN107122102A (zh) * 2017-04-27 2017-09-01 Vivo Mobile Communication Co., Ltd. Page-turning control method and mobile terminal
CN107247571B (zh) 2017-06-26 2020-07-24 BOE Technology Group Co., Ltd. Display device and display method thereof
CN108289151A (zh) * 2018-01-29 2018-07-17 Vivo Mobile Communication Co., Ltd. Operation method for an application program and mobile terminal
CN109445593A (zh) * 2018-10-31 2019-03-08 Guizhou Mars Exploration Technology Co., Ltd. Electronic device control method and device
CN111857325A (zh) * 2019-04-25 2020-10-30 Beijing ByteDance Network Technology Co., Ltd. Human-computer interaction control method and device, and computer-readable storage medium
CN110120997A (zh) * 2019-06-12 2019-08-13 OPPO Guangdong Mobile Telecommunications Corp., Ltd. Call control method and related product
CN111459285B (zh) * 2020-04-10 2023-12-12 Konka Group Co., Ltd. Display device control method based on eye control technology, display device and storage medium
CN111625099B (zh) * 2020-06-02 2024-04-16 Shanghai SenseTime Intelligent Technology Co., Ltd. Animation display control method and device
CN114237119A (zh) * 2021-12-16 2022-03-25 Gree Electric Appliances, Inc. of Zhuhai Display screen control method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436304A (zh) * 2011-11-14 2012-05-02 Huawei Technologies Co., Ltd. Method and terminal for switching between landscape and portrait screen display modes
CN103197755A (zh) * 2012-01-04 2013-07-10 China Mobile Communications Group Co., Ltd. Page-turning method, device and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1293446C (zh) * 2005-06-02 2007-01-03 Beijing Vimicro Electronics Co., Ltd. Non-contact eye-controlled operation system and method
JPWO2011114564A1 (ja) * 2010-03-18 2013-06-27 Fujifilm Corporation Stereoscopic image display device and control method therefor
CN102830797B (zh) * 2012-07-26 2015-11-25 Shenzhen Institutes of Advanced Technology Human-computer interaction method and system based on gaze determination
CN102981616B (zh) * 2012-11-06 2017-09-22 ZTE Corporation Method and system for recognizing objects in augmented reality, and computer

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436304A (zh) * 2011-11-14 2012-05-02 Huawei Technologies Co., Ltd. Method and terminal for switching between landscape and portrait screen display modes
CN103197755A (zh) * 2012-01-04 2013-07-10 China Mobile Communications Group Co., Ltd. Page-turning method, device and terminal

Also Published As

Publication number Publication date
CN105630135A (zh) 2016-06-01

Similar Documents

Publication Publication Date Title
WO2016065706A1 (zh) 智能终端的控制方法、装置和计算机存储介质 (Control method and device for intelligent terminal, and computer storage medium)
JP6400197B2 (ja) Wearable device
US9983770B2 (en) Screen capture method, apparatus, and terminal device
US10514842B2 (en) Input techniques for virtual reality headset devices with front touch screens
US10082886B2 (en) Automatic configuration of an input device based on contextual usage
US9886086B2 (en) Gesture-based reorientation and navigation of a virtual reality (VR) interface
EP3293620A1 (en) Multi-screen control method and system for display screen based on eyeball tracing technology
WO2019214442A1 (zh) Device control method and apparatus, control device, and storage medium
KR20130081117A (ko) Mobile terminal and control method thereof
WO2015149498A1 (zh) Method, device, computer storage medium and apparatus for screen mode switching
CN107688385A (zh) Control method and device
CN107360375B (zh) Photographing method and mobile terminal
WO2014201831A1 (en) Wearable smart glasses as well as device and method for controlling the same
US20170131893A1 (en) Terminal control method and device
CN108369451B (zh) Information processing device, information processing method, and computer-readable storage medium
CN104754219A (zh) Terminal
WO2022111458A1 (zh) Image capturing method and device, electronic equipment and storage medium
CN103186240A (zh) Method for detecting eyeball movement based on a high-pixel camera
WO2015131590A1 (zh) Method and terminal for controlling black-screen gesture processing
RU2628484C2 (ru) Method and device for activating the operating state of a mobile terminal
US20170090744A1 (en) Virtual reality headset device with front touch screen
CN104063041A (zh) Information processing method and electronic device
EP3647925B1 (en) Portable terminal
JP5558899B2 (ja) Information processing apparatus, processing method therefor, and program
CN113625878B (zh) Gesture information processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14905166

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14905166

Country of ref document: EP

Kind code of ref document: A1