WO2018076609A1 - Terminal and method for operating terminal - Google Patents


Info

Publication number
WO2018076609A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
image
reference object
terminal
mapping
Prior art date
Application number
PCT/CN2017/078581
Other languages
French (fr)
Chinese (zh)
Inventor
魏占婷
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2018076609A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to an output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • the present disclosure relates to the field of human-computer interaction, and in particular, to a method and a terminal for operating a terminal.
  • terminals with touch screens are being applied in an increasingly wide range of scenarios; the touch screen on the terminal can serve as a device for realizing human-computer interaction.
  • the resistive touch screen is actually a sensor; its structure is basically a film layer superimposed on a glass layer.
  • the adjacent surfaces of the film and the glass are each coated with a nano indium tin oxide (ITO) coating.
  • ITO has good conductivity and transparency.
  • a touch response method and apparatus for a wearable device, and a wearable device, have been disclosed, so that the wearable device can feed back the effect of a touch operation to the user in real time and improve the touch accuracy of the wearable device;
  • the technical solution is: acquiring position information of a target fingertip collected by a binocular recognition apparatus within a set touch-action occurrence area; determining, according to the position information of the target fingertip, position information of the mapping point of the target fingertip on the screen of the wearable device; and displaying a cursor at the mapping point on the screen of the wearable device.
  • embodiments of the present disclosure provide a method and a terminal for operating a terminal, which make it possible to operate the terminal without touching the screen with a finger.
  • An embodiment of the present disclosure provides a method for operating a terminal, where the method includes:
  • the reference object includes a pupil of a person
  • the image of the reference object is analyzed to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object, including:
  • the intersection of the straight line passing through the fovea of the eye and the center point of the pupil with the terminal display screen is determined as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the acquiring an image of the reference object in real time includes: separately acquiring an image of the pupil with each of two cameras;
  • the obtaining a spatial position of the pupil center point based on the image of the pupil includes: obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images acquired by the two cameras.
  • the acquiring an image of the reference object in real time includes: acquiring an image of the reference object in real time by using at least one camera;
  • the image of the reference object is analyzed to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object, including:
  • a mapping point that forms a preset mapping relationship with the reference object is determined based on the projection point.
  • the determining, based on the projection point, a mapping point that forms a preset mapping relationship with the reference object includes: using the projection point as the mapping point that forms the preset mapping relationship with the reference object.
  • the number of the reference points is two;
  • the determining, based on the projection points, a mapping point that forms a preset mapping relationship with the reference object includes: determining the projection points of the two reference points on the display screen of the terminal, and using the midpoint of the line connecting the two determined projection points as the mapping point that forms the preset mapping relationship with the reference object.
  • the reference object includes two eyes of a person
  • the selecting at least one point in the image of the reference object as a reference point includes: using the pupil center points of the person's two eyes as reference points.
  • the selecting at least one point in the image of the reference object as a reference point includes: using the point in the reference object whose vertical distance from the terminal display screen is smallest as the reference point.
  • the acquiring an image of the reference object in real time includes: acquiring an image of the reference object in real time by using a camera;
  • the method further includes: acquiring a distance between the camera and the reference object in real time;
  • determining the spatial location of the reference point based on the image of the reference point and the spatial position of each camera comprises:
  • a spatial position of the reference point is determined based on an image of the reference point, a spatial position of the camera, and a distance between the camera and the reference object.
  • the reference object is an object having the smallest vertical distance from the display screen of the terminal, or the reference object is part of a human body.
  • the reference object includes one eye of a person; the acquiring an image of the reference object in real time includes: acquiring an image within the human eye in real time;
  • the analyzing the image of the reference object to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object includes: determining the area in the current display content of the terminal display screen that matches the collected image within the human eye as a screen matching area; and selecting a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  • before the analyzing the image of the reference object, the method further includes: determining a distance between the terminal display screen and the reference object;
  • the analyzing the image of the reference object to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object includes: when the determined distance is within a set interval, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the generating the function instruction at the mapping point comprises: determining the time during which the mapping point stays within a mapping point area on the terminal display screen; and generating the function instruction at the mapping point based on the length of the determined time; the mapping point area includes the initial position of the mapping point on the display screen of the terminal.
  • the generating, based on the length of the determined time, the function instruction at the mapping point includes:
  • the first set range is the range from a first time threshold to a second time threshold; the second set range is greater than the second time threshold; the third set range is less than the first time threshold; the first time threshold is less than the second time threshold.
  • the generating an instruction indicating that a sliding operation is performed includes: acquiring the moving direction and moving rate of the mapping point, and generating, based on the moving direction and moving rate of the mapping point, an instruction indicating that a sliding operation is performed.
  • the generating, based on the moving direction and moving rate of the mapping point, an instruction indicating that a screen-sliding operation is performed includes:
  • using the moving rate of the mapping point in the lateral direction of the terminal display screen as the lateral moving rate of the mapping point, and the moving rate of the mapping point in the longitudinal direction of the terminal display screen as the longitudinal moving rate of the mapping point;
  • when the lateral moving rate of the mapping point is greater than the longitudinal moving rate of the mapping point, generating an instruction indicating that a horizontal sliding operation is performed; or, when the lateral moving rate of the mapping point is greater than the longitudinal moving rate of the mapping point and the lateral moving rate of the mapping point satisfies a first set condition, generating an instruction indicating that a horizontal sliding operation is performed;
  • the first set condition is: the lateral moving rate of the mapping point is within a fourth set range; the second set condition is: the longitudinal moving rate of the mapping point is within a fifth set range.
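As an illustration, the decision logic above can be sketched in Python; the rate thresholds standing in for the fourth and fifth set ranges, and the instruction names, are hypothetical values chosen for the example, not values fixed by the disclosure:

```python
def slide_instruction(rate_x, rate_y, x_range=(1.0, 50.0), y_range=(1.0, 50.0)):
    """Choose a screen-sliding instruction from the mapping point's lateral
    (rate_x) and longitudinal (rate_y) moving rates.
    x_range / y_range stand in for the fourth and fifth set ranges."""
    ax, ay = abs(rate_x), abs(rate_y)
    if ax > ay and x_range[0] <= ax <= x_range[1]:
        # lateral rate dominates and satisfies the first set condition
        return "slide_right" if rate_x > 0 else "slide_left"
    if ay > ax and y_range[0] <= ay <= y_range[1]:
        # longitudinal rate dominates and satisfies the second set condition
        return "slide_down" if rate_y > 0 else "slide_up"
    return None  # no sliding instruction generated
```

A rate outside the set range (e.g. sensor jitter) produces no instruction, matching the gating role of the set conditions.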
  • the mapping point area includes an initial position of a mapping point on the display screen of the terminal;
  • the area of the mapping point area is less than or equal to the set threshold.
  • the mapping point area is a circular area centered on the initial position of the mapping point.
  • before the generating the function instruction at the mapping point, the method further includes: continuously collecting images of the actions of the user of the terminal to obtain action images of the user;
  • performing image recognition on the action images to obtain a recognition result;
  • the generating the function instruction at the mapping point comprises: generating a function instruction at the mapping point based on the recognition result.
  • the generating, based on the recognition result, the function instruction at the mapping point includes: when the recognition result is a blinking action, a mouth-opening action, or a mouth-closing action, generating an instruction indicating that the mapping point is clicked; when the recognition result is a nodding action, generating an instruction indicating that a downward sliding operation is performed; when the recognition result is a head-raising action, generating an instruction indicating that an upward sliding operation is performed; and when the recognition result is a left-right head-shaking action, generating an instruction indicating that a lateral sliding operation is performed.
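The action-to-instruction mapping above amounts to a lookup table; a minimal sketch, in which the string labels for recognizer outputs and instructions are illustrative assumptions (the disclosure does not fix them):

```python
# Hypothetical labels a recognizer might emit, mapped per the disclosure:
# blink / mouth open / mouth close -> click; nod -> slide down;
# head raise -> slide up; left-right head shake -> lateral slide.
ACTION_TO_INSTRUCTION = {
    "blink":       "click_mapping_point",
    "mouth_open":  "click_mapping_point",
    "mouth_close": "click_mapping_point",
    "nod":         "slide_down",
    "head_raise":  "slide_up",
    "head_shake":  "slide_lateral",
}

def instruction_for(recognition_result):
    """Return the function instruction at the mapping point for a recognized
    user action, or None when the action is not recognized."""
    return ACTION_TO_INSTRUCTION.get(recognition_result)
```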
  • the function instruction at the mapping point is: an instruction indicating that the current mapping point is clicked, an instruction indicating that the current mapping point is long-pressed, or an instruction indicating that a sliding operation is performed.
  • An embodiment of the present disclosure further provides a terminal, including an image collection device and a processor;
  • the image collection device is configured to collect an image of the reference object in real time, the distance between the reference object and the display screen of the terminal exceeding a set value;
  • the processor is configured to analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object, generate a function instruction at the mapping point, and implement the operation of the terminal display screen based on the function instruction.
  • the reference object includes a pupil of a person
  • the processor is further configured to acquire a spatial position of the fovea of the eye in real time; the pupil and the fovea of the eye are located in the same eye;
  • the processor is configured to obtain the spatial position of the pupil center point based on the image of the pupil; and, based on the spatial positions of the eye fovea and the pupil center point, determine the intersection of the straight line passing through the eye fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the image collection device includes at least one camera
  • the processor is configured to select at least one point in the image of the reference object as a reference point; determine the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determine the projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determine, based on the projection points, the mapping point that forms the preset mapping relationship with the reference object.
  • the reference object includes one eye of a person
  • the image capture device is configured to collect images in a human eye in real time
  • the processor is configured to determine the area in the current display content of the terminal display screen that matches the collected image within the human eye as a screen matching area, and select a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  • the processor is further configured to determine a distance between the terminal display screen and the reference object before analyzing the image of the reference object;
  • the processor is configured to analyze an image of the reference object when the determined distance is in a set interval, and obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object.
  • the processor is configured to determine a time when the mapping point is in a setting area; and generate a function instruction at the mapping point based on the determined size of the time.
  • the image collection device is further configured to continuously perform image collection on the action of the user of the terminal before generating the function instruction at the mapping point, to obtain an action image of the user;
  • the processor is further configured to perform image recognition on the motion image of the user to obtain a recognition result; and generate a function instruction at the mapping point based on the recognition result.
  • the method and terminal for operating a terminal provided by the embodiments of the present disclosure collect, in real time, an image of a reference object that is not in contact with the display screen of the terminal; analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; and generate a function instruction at the mapping point, implementing the operation of the terminal display screen based on the function instruction.
  • in this way, operations such as clicking, long-pressing and screen-sliding are completed based on the function instruction at the mapping point obtained by analyzing the image of the reference object, realizing non-touch operation of the terminal; for example, the mapping point of a fingertip can be determined and the phone operated according to the fingertip trajectory, or the mapping point can be used to operate directly on the phone screen; the user does not need to touch the screen with a finger, and the terminal can be operated solely from the image of the reference object; this effectively solves the technical problem that operating the terminal with a finger is inconvenient when the terminal display screen is large.
  • FIG. 1 is a flowchart of a method for operating a terminal according to an embodiment of the present disclosure
  • FIG. 2 is a first schematic diagram of a projection point of a reference point on a display screen of a terminal according to an embodiment of the present disclosure
  • FIG. 3 is a second schematic diagram of a projection point of a reference point on a display screen of a terminal according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of an implementation manner of a method for operating a terminal according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of the fovea and pupil of an eye according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a principle for determining a position of a space point by using a dual camera according to an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram showing a positional relationship between a human eye line of sight and a terminal display screen according to an embodiment of the present disclosure
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • the embodiment of the present disclosure describes a method for operating a terminal, which may be applied to a terminal, where the terminal may be a fixed terminal or a mobile terminal having a display screen; for example, the display screen may be one without a touch response function, or a touch screen with a touch response function;
  • the mobile terminal can be a smartphone, tablet, or wearable device (such as smart glasses, smart watches, etc.), and can also be a smart car or a smart home appliance (such as smart refrigerators, smart batteries, set-top boxes, etc.);
  • the operating system of the smartphone can be the Android operating system, the iOS operating system, or any other third-party operating system capable of running on a microcomputer (comprising at least a processor and a memory), such as mobile Linux, BlackBerry QNX, etc.
  • the terminal described above includes an image collection device for collecting an image of a reference object, where the reference object may be an object located in front of the terminal, for example, a human eye or a nose; the image collection device may include at least one camera.
  • the terminal described above further includes an image analyzing device for analyzing the image of the collected reference object.
  • the image analyzing device may be a processor on the terminal.
  • FIG. 1 is a flowchart of a method for operating a terminal according to an embodiment of the present disclosure. As shown in FIG. 1 , the process includes:
  • Step 101 Acquire an image of the reference object in real time, and the distance between the reference object and the display screen of the terminal exceeds a set value.
  • the set value is greater than 0, and the set value can be set according to the actual application scenario; that is, the reference object does not form a contact relationship with the terminal display screen.
  • the kind of the reference object is not limited.
  • the reference object may include: a person's eyes, a nose, or the reference object is an object having a minimum vertical distance from the display screen of the terminal, or the like.
  • the image of the pupil may be separately collected by at least one camera, for example, by 1 or 2 cameras; the camera may be disposed on the same side as the terminal display screen, that is, as a front camera on the terminal; the camera may also be disposed on the back of the terminal, that is, as a rear camera on the terminal.
  • Step 102 Perform an analysis on the image of the reference object to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object.
  • in some embodiments, the reference object includes a pupil of a person; the spatial position of the fovea of the eye is acquired in real time before the image of the reference object is analyzed; the pupil and the fovea are located in the same eye; in step 101, two cameras are used to separately collect images of the pupil.
  • the step includes: obtaining the spatial position of the pupil center point based on the images of the pupil; and, based on the spatial positions of the fovea and the pupil center point, determining the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the above two cameras can be used to separately collect images of the fovea of the eye, or to separately collect images of the pupil center point; here, an image of the eye can be collected, an image of the pupil obtained from it by image recognition or image matching technology, and the image of the pupil center point finally determined based on the image of the pupil.
  • the spatial position of the pupil center point is obtained based on the images of the pupil, including: obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images acquired by the two cameras.
  • the spatial position of each camera can be represented by three-dimensional coordinates; in an actual implementation, the coordinates of a point on the terminal can be set in advance, and the three-dimensional coordinates of each camera then determined from the positional relationship between that point and each camera.
  • the spatial position of the pupil center point can be determined based on binocular stereo vision technology.
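A minimal sketch of the binocular computation, assuming an idealized rectified pair: two identical pinhole cameras with focal length `focal` (in pixels) separated by a horizontal `baseline`, image coordinates measured from each camera's optical centre; the function and parameter names are illustrative, not from the disclosure:

```python
def triangulate(xl, yl, xr, baseline, focal):
    """Recover the 3D position of a point (e.g. the pupil centre) from its
    image coordinates in two horizontally aligned, identical cameras.
    xl, yl: pixel offsets in the left camera; xr: horizontal offset in the
    right camera. Returns (x, y, z) in the left camera's frame."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    z = focal * baseline / disparity   # depth from the camera plane
    x = xl * z / focal                 # lateral offset
    y = yl * z / focal                 # vertical offset
    return (x, y, z)
```

Real deployments would first calibrate and rectify the cameras; this shows only the depth-from-disparity core of binocular stereo vision.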
  • the determined mapping point is the intersection of a line of sight of the human eye and the display of the terminal.
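Given 3D positions of the fovea and the pupil centre, the mapping point is where the line of sight through them meets the screen; a sketch under the assumption that the screen is the plane z = 0 in the chosen coordinate frame:

```python
def gaze_mapping_point(fovea, pupil):
    """Intersect the line through the fovea and the pupil centre with the
    screen plane z = 0; returns the (x, y) mapping point on the screen."""
    fx, fy, fz = fovea
    px, py, pz = pupil
    if fz == pz:
        raise ValueError("line of sight parallel to the screen")
    # Parametrize point = fovea + t * (pupil - fovea); solve z = 0.
    t = fz / (fz - pz)
    return (fx + t * (px - fx), fy + t * (py - fy))
```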
  • the image of the reference object is collected in real time by using at least one camera.
  • the type of the reference object is not limited.
  • the reference object is the object having the smallest vertical distance from the display screen of the terminal, or the reference object is part of a human body.
  • the step may include: selecting at least one point in the image of the reference object as a reference point; determining a spatial position of each reference point based on the image of each reference point and the spatial position of each camera; Determining a projection point of each reference point on the terminal display screen according to a spatial position of the reference point and a spatial position of the terminal display screen; and determining a mapping point that forms a preset mapping relationship with the reference object based on the projection point.
  • the selecting a point in the image of the reference object as a reference point comprises: determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera; and, based on the spatial position of the reference object, using the point of the reference object having the smallest vertical distance from the terminal display screen as the reference point.
  • the reference point is the point having the smallest vertical distance from the display screen of the terminal; for example, when the reference object is a human nose, the reference point is the tip of the nose;
  • when the reference object is a finger, the reference point is the fingertip of the finger.
  • the distance between the camera and the reference object may be acquired in real time before the image of the reference object is analyzed; the spatial position of the reference point is then determined based on the image of the reference point, the spatial position of the camera, and the distance between the camera and the reference object.
  • a point may be selected in the image of the reference object as a reference point; the spatial position of the reference point is determined based on the image of the reference point and the spatial positions of the two cameras; the projection point of the reference point on the display screen of the terminal is determined based on the spatial position of the reference point and the spatial position of the display screen of the terminal; and the projection point is used as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the distance sensing device or the distance detecting device may be disposed at a position on the terminal that does not exceed the first distance threshold from the camera, and the distance sensing device or the distance detecting device is configured to detect the distance between itself and the reference object, and the distance sensing device Or the distance detecting device may be a displacement sensor or a proximity sensor provided on the terminal.
  • the detected distance can be used as the distance between the camera and the reference object.
  • the position of the terminal display screen may be determined from the spatial position of each camera on the terminal and the relative positional relationship between each camera and the display screen of the terminal; the projection point of each reference point on the terminal display screen is illustrated below with reference to the accompanying drawings.
  • FIG. 2 is a first schematic diagram of a projection point of a reference point on a display screen of a terminal according to an embodiment of the present disclosure.
  • in FIG. 2, the camera a and the camera b are two cameras disposed on the terminal, where the plane in which camera a and camera b are located coincides with the plane of the terminal display screen.
  • the coordinates of the reference point can be expressed as (X, Y, Z).
  • the projection point of the reference point on the terminal display screen is: the intersection of the line passing through the reference point and perpendicular to the plane of the terminal display screen with the terminal display screen.
  • in FIG. 3, the sensing point a and the sensing point b may each be a camera disposed on the terminal, lying in the plane of the terminal display screen;
  • the sensing layer represents the terminal display screen;
  • the coordinates of the reference point can be expressed as (X', Y', Z');
  • the projection point of the reference point on the sensing layer is: the intersection of the line passing through the reference point and perpendicular to the plane of the sensing layer with the sensing layer.
  • the projection point may be used as the mapping point that forms the preset mapping relationship with the reference object; or, when the number of reference points is 2, the projection points of the two reference points on the terminal display screen are determined, and the midpoint of the line connecting the two determined projection points is used as the mapping point that forms the preset mapping relationship with the reference object.
  • for example, the reference object includes two eyes of a person; the pupil center points of the two eyes are respectively used as reference points; the projection points of the two pupil center points on the terminal display screen are then determined, and the midpoint of the line connecting the two determined projection points is used as the mapping point that forms the preset mapping relationship with the reference object.
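The two-eye case above reduces to projecting each pupil centre perpendicularly onto the screen and taking the midpoint; a sketch, again assuming the screen is the plane z = 0 so the perpendicular projection simply keeps the X and Y coordinates (as in FIG. 2):

```python
def project_to_screen(point):
    """Foot of the perpendicular from a 3D point onto the screen plane z = 0."""
    x, y, _z = point
    return (x, y)

def mapping_point_two_eyes(left_pupil, right_pupil):
    """Midpoint of the two pupil centres' projections on the screen."""
    lx, ly = project_to_screen(left_pupil)
    rx, ry = project_to_screen(right_pupil)
    return ((lx + rx) / 2, (ly + ry) / 2)
```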
  • the reference object is one eye of a person; in step 101, an image in the human eye is collected in real time.
  • the step may include: determining the area in the current display content of the terminal display screen that matches the collected image within the human eye as a screen matching area; and selecting a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  • before step 102 is performed, the distance between the terminal display screen and the reference object may also be acquired; when the acquired distance is within the set interval, step 102 is performed; otherwise, when the acquired distance is not within the set interval, the process ends directly.
  • the setting section is a section for indicating a distance, for example, the setting section is [0 cm, 5 cm], or [30 cm, 50 cm] or the like.
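The gating step is a simple interval check; a sketch using one of the example intervals from the text ([30 cm, 50 cm]):

```python
def should_analyze(distance_cm, interval=(30.0, 50.0)):
    """Analyze the reference-object image only when the measured distance
    between screen and reference object falls inside the set interval."""
    low, high = interval
    return low <= distance_cm <= high
```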
  • the cursor may also be displayed at the mapping point of the terminal display screen for the user to observe.
  • Step 103 Generate a function instruction at the mapping point, and implement an operation on the display screen of the terminal based on the function instruction.
  • the function instruction at the mapping point generated in this step may be: a function instruction indicating that the current mapping point is clicked, a function instruction indicating that the current mapping point is long-pressed, an instruction indicating that a sliding operation is performed, and the like.
  • the instruction indicating that a sliding operation is performed may be an instruction indicating a vertical sliding operation or an instruction indicating a horizontal sliding operation; the instruction indicating a vertical sliding operation may be an instruction indicating an upward sliding operation or an instruction indicating a downward sliding operation; the instruction indicating a horizontal sliding operation may be an instruction indicating a left-sliding operation or an instruction indicating a right-sliding operation.
  • the function instructions at the mapping point may be generated in the following two manners.
  • in a first mode: determining the time during which the mapping point stays within a mapping point area on the terminal display screen; generating the function instruction at the mapping point based on the length of the determined time; the mapping point area includes the initial position of the mapping point.
  • the time during which the mapping point stays within the mapping point area may be determined by the processor of the terminal; the processor of the terminal may then generate the function instruction at the mapping point based on the length of the determined time.
  • the mapping point area may be an area on the display screen of the terminal including the initial position of the mapping point; the area of the mapping point area is less than or equal to the set threshold.
• the mapping point area may be an area including the point A whose area is smaller than the set threshold; the threshold may be set to 0.2 cm², 0.3 cm², etc., and the shape of the boundary of the area may be a circle, an ellipse, a polygon, or the like.
  • the mapping point area may be a circular area centered on the initial position of the mapping point.
• exemplarily, the mapping point area may be a circular area with the initial position of the mapping point as the center and a set length as the radius.
• generating the function instruction at the mapping point based on the determined duration includes: when the determined duration is in a first set range, generating an instruction indicating that the current mapping point is clicked; when the determined duration is in a second set range, generating an instruction indicating that the current mapping point is long pressed; when the determined duration is in a third set range, generating an instruction indicating that a sliding operation is performed; there is no overlap among the first set range, the second set range, and the third set range, that is, there is no intersection among the three set ranges.
• the first set range is denoted as interval 1, the second set range as interval 2, and the third set range as interval 3; interval 1, interval 2, and interval 3 are all ranges of time values. Each interval may be an open interval, a closed interval, or a half-open half-closed interval, but there is no intersection among interval 1, interval 2, and interval 3.
• in this way, the duration for which the mapping point stays in the mapping point area can fall into at most one of the above three set ranges, which ensures that at most one function instruction is generated.
• exemplarily, the first set range is [a1, a2], where a1 denotes a first time threshold, a2 denotes a second time threshold, and a1 is less than a2; the second set range is (a2, +∞), and the third set range is (0, a1).
• exemplarily, the first time threshold a1 is 5 seconds, the second time threshold a2 is 10 seconds, and the mapping point area is a circular area centered on the initial position of the mapping point and having a radius of 0.3 cm; since the image of the reference object may change, the position of the mapping point changes accordingly, so the time for which the mapping point stays in the mapping point area can be recorded by the processor of the terminal.
• when the duration for which the mapping point stays in the mapping point area is greater than or equal to 5 seconds and less than 10 seconds, a function instruction indicating that the current mapping point is clicked is generated; when the duration for which the mapping point stays in the mapping point area is greater than 10 seconds and less than 50 seconds, a function instruction indicating that the current mapping point is long pressed is generated.
• when the duration for which the mapping point stays in the mapping point area is in the third set range, for example less than 5 seconds, it may be determined that the mapping point has undergone a large movement; in this case, an instruction indicating a sliding operation is generated. In actual implementation, when the line of sight moves, the terminal continuously collects the coordinates of the mapping point, and analyzes the moving direction and moving rate of the mapping point according to the change of the coordinate positions and the time taken by the change.
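As an illustrative sketch only (not part of the claimed embodiment), the dwell-time classification of the first mode can be expressed in Python; the thresholds default to the 5-second and 10-second example values, and whether each interval boundary is open or closed is an assumption of this sketch:

```python
def classify_dwell(duration, a1=5.0, a2=10.0):
    """Map the time (seconds) a mapping point stays in the mapping point
    area to a function instruction, per the first mode.

    a1 and a2 default to the 5 s / 10 s example thresholds; the choice of
    open/closed boundaries here is illustrative.
    """
    if duration < a1:          # third set range (0, a1): the point moved away quickly
        return "slide"
    if duration <= a2:         # first set range [a1, a2]
        return "click"
    return "long_press"        # second set range (a2, +inf)
```

For a dwell of 7 seconds this yields a click instruction; a dwell under 5 seconds is treated as movement and produces a slide instruction.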
• in actual implementation, a two-dimensional Cartesian coordinate system can be established on the plane of the terminal display screen and recorded as the screen coordinate system; the X-axis of the screen coordinate system is set to the lateral direction of the terminal display screen, and the Y-axis of the screen coordinate system is set to the longitudinal direction of the terminal display screen. In addition, the position of the mapping point can be represented by coordinates in the screen coordinate system.
• exemplarily, the X-axis of the screen coordinate system is in the horizontal rightward direction, and the Y-axis of the screen coordinate system is in the vertical upward direction.
• exemplarily, the lateral movement distance of the mapping point can be calculated according to the change of the X value of the coordinates of the mapping point in the screen coordinate system, and the lateral movement rate of the mapping point can then be calculated according to the time taken by that change; here, the lateral movement direction is leftward or rightward.
• for example, the positive X-axis direction of the screen coordinate system is the horizontal rightward direction; if the X value of the coordinates of the mapping point in the screen coordinate system increases from 0 to 30 within 1.5 seconds, the lateral movement rate of the mapping point is 20 per second and the lateral movement direction is rightward; if the X value of the coordinates of the mapping point in the screen coordinate system decreases from 15 to 0 within 1.5 seconds, the lateral movement rate of the mapping point is 10 per second and the lateral movement direction is leftward.
• similarly, the longitudinal movement distance of the mapping point can be calculated according to the change of the Y value of the coordinates of the mapping point in the screen coordinate system, and the longitudinal movement rate of the mapping point can then be calculated according to the time taken by that change; here, the longitudinal movement direction is upward or downward. For example, the Y-axis of the screen coordinate system is in the vertical upward direction; if the Y value of the coordinates of the mapping point in the screen coordinate system increases from 0 to 18 within 1 second, the longitudinal movement rate of the mapping point is 18 per second and the longitudinal movement direction is upward; if the Y value decreases from 0 to -10 within 1 second, the longitudinal movement rate of the mapping point is 10 per second and the longitudinal movement direction is downward.
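The rate calculation described above can be sketched as follows (the coordinate pairs, sign conventions, and units-per-second rates follow the examples in the text; the function name and return shape are illustrative):

```python
def movement_rates(start, end, elapsed):
    """Compute the lateral (X) and longitudinal (Y) movement rates of the
    mapping point in the screen coordinate system from two samples.

    start, end: (x, y) coordinates; elapsed: seconds between the samples.
    Returns (lateral_rate, lateral_dir, longitudinal_rate, longitudinal_dir);
    X grows to the right and Y grows upward, as in the examples above.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    lateral_dir = "right" if dx >= 0 else "left"
    longitudinal_dir = "up" if dy >= 0 else "down"
    return abs(dx) / elapsed, lateral_dir, abs(dy) / elapsed, longitudinal_dir
```

With the first example above, a change of the X value from 0 to 30 in 1.5 seconds gives a lateral rate of 20 per second, moving rightward.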
• exemplarily, when the longitudinal movement rate of the mapping point is greater than the lateral movement rate of the mapping point, an instruction indicating that a vertical sliding operation is performed is generated, that is, an instruction indicating that an up-down sliding screen operation is performed; the direction of the vertical sliding screen operation is upward or downward. For example, the longitudinal movement rate of the mapping point is b1 and the lateral movement rate of the mapping point is b2; when b1 is greater than b2, an instruction indicating an upward sliding operation or an instruction indicating a downward sliding operation is generated.
• when the lateral movement rate of the mapping point is greater than the longitudinal movement rate of the mapping point, an instruction indicating that a horizontal sliding operation is performed is generated, that is, an instruction indicating that a left-right sliding operation is performed; the direction of the horizontal sliding operation is leftward or rightward. For example, the lateral movement rate of the mapping point is b3 and the longitudinal movement rate of the mapping point is b4; when b3 is greater than b4, an instruction indicating a left-slide operation or an instruction indicating a right-slide operation is generated.
• optionally, when the lateral and longitudinal movement rates are equal, no function instruction may be generated, or an instruction indicating a vertical sliding operation or an instruction indicating a horizontal sliding operation may be generated.
• in actual implementation, the processor of the terminal may be used to record the coordinates, in the screen coordinate system, of the starting point and the end point of the mapping point change; the longitudinal change direction from the starting point to the end point of the mapping point change may be set as the direction of the sliding operation, or the change direction from the end point back to the starting point may be set as the direction of the sliding operation.
• for example, the Y value of the coordinates of the starting point of the mapping point change (the initial position of the mapping point) in the screen coordinate system is recorded as c1, and the Y value of the coordinates of the end point of the mapping point change in the screen coordinate system is recorded as c2; when c2 is greater than c1, the longitudinal change direction from the starting point to the end point is the upward direction, and the change direction from the end point to the starting point is the downward direction; at this time, either the upward direction or the downward direction can be set as the direction of the sliding operation.
• similarly, the lateral change direction from the starting point to the end point of the mapping point change may be set as the direction of the sliding operation, or the lateral change direction from the end point to the starting point may be set as the direction of the sliding operation. For example, the X value of the coordinates of the starting point of the mapping point change in the screen coordinate system is recorded as d1, and the X value of the coordinates of the end point in the screen coordinate system is recorded as d2; when d2 is greater than d1, the lateral change direction from the starting point to the end point is the rightward direction, and the lateral change direction from the end point to the starting point is the leftward direction; at this time, either the rightward direction or the leftward direction can be set as the direction of the sliding operation.
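Combining the two decisions above — the dominant axis chooses vertical versus horizontal sliding, and the sign of the change along that axis chooses the direction — a hypothetical helper (names illustrative) might look like:

```python
def slide_instruction(start, end):
    """Pick the slide-screen instruction from the start point and end point
    of a mapping-point change: the axis with the larger change decides
    vertical vs. horizontal sliding, and the sign of that change decides
    the direction (d2 vs. d1 laterally, c2 vs. c1 longitudinally)."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dy) > abs(dx):                       # longitudinal change dominates
        return "slide_up" if dy > 0 else "slide_down"
    return "slide_right" if dx > 0 else "slide_left"
```

For instance, an end point 18 units above the starting point with no lateral change yields an upward slide instruction.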
• optionally, after it is determined that the lateral movement rate of the mapping point is the greater, the instruction indicating the horizontal sliding operation is not directly generated; instead, it is further determined whether the lateral movement rate of the mapping point satisfies a first set condition. If the lateral movement rate of the mapping point satisfies the first set condition, an instruction indicating that the horizontal sliding screen operation is performed is generated; if the lateral movement rate of the mapping point does not satisfy the first set condition, no instruction is generated.
• the first set condition may be that the lateral movement rate of the mapping point is within a fourth set range; the fourth set range may be greater than v1, less than v2, or between v3 and v4, where v3 is not equal to v4. v1, v2, v3, and v4 can all be set by the user of the terminal, that is, v1, v2, v3, and v4 can all be set rate values.
• similarly, after it is determined that the longitudinal movement rate of the mapping point is the greater, the instruction indicating the vertical sliding operation is not directly generated; instead, it is further determined whether the longitudinal movement rate of the mapping point satisfies a second set condition. If the longitudinal movement rate of the mapping point satisfies the second set condition, an instruction indicating that the vertical sliding screen operation is performed is generated; if the longitudinal movement rate of the mapping point does not satisfy the second set condition, no instruction is generated.
• the second set condition may be that the longitudinal movement rate of the mapping point is within a fifth set range; the fifth set range may be greater than v5, less than v6, or between v7 and v8, where v7 is not equal to v8. v5, v6, v7, and v8 can all be set by the user of the terminal, that is, v5, v6, v7, and v8 can all be set rate values.
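A minimal sketch of such a set-range check (the parameters `lower`/`upper` stand in for the set rate values v1–v8; an absent bound models the "greater than" and "less than" variants):

```python
def rate_satisfies(rate, lower=None, upper=None):
    """Check whether a movement rate lies in a set range, e.g. the fourth
    or fifth set range. Leaving one bound as None expresses the
    'greater than v1' or 'less than v2' forms; giving both expresses
    'between v3 and v4'."""
    if lower is not None and rate <= lower:
        return False
    if upper is not None and rate >= upper:
        return False
    return True
```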
• Second mode: before the function instruction at the mapping point is generated, image acquisition is continuously performed on the actions of the user of the terminal to obtain an action image of the user; image recognition is performed on the action image of the user to obtain a recognition result, and the function instruction at the mapping point is generated based on the recognition result.
• in actual implementation, the image capture device of the terminal may be used to collect the action image of the user, and the processor of the terminal may then be used to recognize the action image of the user and generate the function instruction at the mapping point based on the recognition result; for example, the front camera may be used to capture changes in the image of the user's head and thereby collect the user's action image.
• exemplarily, when the recognition result is a blinking action, a mouth-opening action, or a mouth-closing action, an instruction indicating that the mapping point is clicked is generated; when the recognition result is a nodding action, an instruction indicating that a sliding screen operation is performed is generated; when the recognition result is a head-raising action, an instruction indicating that an upward sliding operation is performed is generated; and when the recognition result is a left-right head-shaking action, an instruction indicating that a left-slide operation or a right-slide operation is performed is generated.
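The second mode's mapping from recognition result to function instruction can be sketched as a simple lookup; the action labels and instruction names here are illustrative, not the output of any particular recognizer:

```python
# Illustrative labels for recognition results and instructions.
ACTION_TO_INSTRUCTION = {
    "blink": "click",
    "open_mouth": "click",
    "close_mouth": "click",
    "nod": "slide",                      # a sliding-screen operation
    "head_up": "slide_up",               # upward sliding operation
    "shake_head": "slide_left_or_right", # left-slide or right-slide
}

def instruction_for(recognition_result):
    """Return the function instruction for a recognition result, or None
    when no instruction should be generated."""
    return ACTION_TO_INSTRUCTION.get(recognition_result)
```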
• in actual implementation, the processor of the terminal may be used to generate the function instruction at the mapping point, and the terminal then automatically implements the operation on the display screen of the terminal based on the function instruction; that is, the terminal can automatically operate its display screen based on the function instruction without requiring the user to touch the display screen.
• the operation on the terminal display screen corresponds to the function instruction; exemplarily, when the function instruction at the mapping point is an instruction indicating that the current mapping point is clicked, a click operation on the mapping point can be implemented based on the instruction; when the function instruction at the mapping point is an instruction indicating that the current mapping point is long pressed, a long-press operation on the mapping point can be implemented based on the instruction; when the function instruction at the mapping point is an instruction indicating a sliding operation, a sliding operation can be implemented based on the instruction.
• after the operation on the terminal display screen is implemented, the operation effect can be displayed on the display screen of the terminal. Exemplarily, when the click operation on the mapping point is implemented, the effect of opening or exiting a menu can be achieved; when the long-press operation on the mapping point is implemented, the effect of long-pressing a menu can be achieved; when the sliding operation is implemented, the effect of turning a page can be achieved.
• it can be seen that, in the embodiment of the present disclosure, the mapping point on the display screen of the terminal can be obtained by analyzing the image of the reference object, and operations such as clicking, long pressing, and screen sliding are then completed based on the function instruction at the mapping point, realizing non-touch operation of the terminal; the terminal can be operated based only on the image of the reference object without touching the screen with a finger, which avoids the technical problem that operating the terminal with a finger is inconvenient due to the size of the terminal display screen.
• thus, the disclosure can effectively improve the efficiency of human-computer interaction, improve the operability of the terminal, and improve the user experience.
  • FIG. 4 is a flowchart of an implementation manner of a method for operating a terminal according to an embodiment of the present disclosure. As shown in FIG. 4, the process includes:
• Step 401: Detect whether the terminal has turned on the positioning function. If the positioning function is not enabled, the process ends directly; at this time, the terminal does not respond and does not generate a function instruction. If the terminal has turned on the positioning function, go to step 402.
• Step 402: Detect whether there is an object above the display screen of the terminal. If there is no object above the display screen, the process ends directly; at this time, the terminal does not respond and does not generate a function instruction. If there is an object above the display screen of the terminal, go to step 403.
• in actual implementation, the distance detecting device or the distance sensing device may be used to determine whether there is an object above the display screen of the terminal within its sensing range.
• Step 403: Detect whether a sensing space range is set on the terminal. If not, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen, and completes the command operations of clicking, long pressing, and screen sliding according to the coordinate position of the mapping point and the moving direction and moving rate of the mapping point; if a sensing space range is set on the terminal, go to step 404.
• the sensing space range is the same as the set interval of the distance from the display screen in the above embodiment.
• Step 404: When the object is in the sensing space range, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen and goes to step 405; when the object is not in the sensing space range, the terminal does not respond.
  • Step 405 The terminal completes the command operation of clicking, long pressing, and sliding screen according to the coordinate position of the mapping point and the moving direction and moving speed of the mapping point.
• exemplarily, two cameras are disposed on the surface of the display screen of the terminal and are respectively labeled as the camera A3 and the camera B3; the spatial positions of the camera A3 and the camera B3 can be represented by three-dimensional space coordinates, where the three-dimensional space coordinates of the camera A3 are (x_a3, y_a3, z_a3) and the three-dimensional space coordinates of the camera B3 are (x_b3, y_b3, z_b3). In actual implementation, the coordinates of a point on the terminal may be preset, and the three-dimensional space coordinates of each camera are then determined according to the positional relationship between that point and each camera.
• FIG. 5 is a schematic view of the fovea and the pupil of the eye in the embodiment of the present disclosure. As shown in FIG. 5, the fovea of the eye is marked as E1 and the center point of the pupil is marked as E2; the fovea is the point where the human eye images most clearly. The line passing through the fovea E1 and the pupil center point E2 may be taken as the main line of sight of the human eye, and the main line of sight may be recorded as the line of sight L.
• in actual implementation, the spatial positions of the fovea of the eye and of the center point of the pupil can be obtained; the three-dimensional space coordinates of the acquired spatial position of the fovea E1 are (x1, y1, z1), and the three-dimensional space coordinates of the acquired spatial position of the pupil center point E2 are (x2, y2, z2).
• exemplarily, the above two cameras can be used to separately collect images of the fovea of the eye, or to separately collect images of the pupil center point; here, an image of the eye can be collected, an image of the pupil of the eye can then be obtained by image recognition or image matching technology, and an image of the pupil center point can finally be determined based on the image of the pupil of the eye.
• based on the images of the fovea collected by the two cameras and the spatial positions of the two cameras, the spatial position of the fovea of the eye can be determined; similarly, based on the images of the pupil center point collected by the two cameras and the spatial positions of the two cameras, the spatial position of the pupil center point can be determined.
• FIG. 6 is a schematic diagram of the principle of determining the position of a space point by using dual cameras according to an embodiment of the present disclosure. As shown in FIG. 6, the two cameras are respectively recorded as O_l and O_r, the focal lengths of the two cameras are both f, and the distance between the two cameras is T.
• an XYZ three-dimensional Cartesian coordinate system is established with the position of one camera as the origin, where the X-axis direction is the direction of the line connecting the two cameras, the Y-axis is perpendicular to the X-axis, and the Z-axis direction is parallel to the main optical axis (principal ray) direction of each camera.
• a left imaging plane is drawn perpendicular to the main optical axis of the camera O_l; the vertical distance from the optical center of the camera to the left imaging plane is the focal length f, and a left imaging plane coordinate system is established in the left imaging plane, whose two coordinate axes are the x_l axis and the y_l axis, with the x_l axis parallel to the X axis and the y_l axis parallel to the Y axis. Similarly, a right imaging plane is drawn perpendicular to the main optical axis of the camera O_r; the vertical distance from the optical center of the camera to the right imaging plane is the focal length f, and a right imaging plane coordinate system is established in the right imaging plane, whose two coordinate axes are the x_r axis and the y_r axis, with the x_r axis parallel to the X axis and the y_r axis parallel to the Y axis.
• the intersection of the main optical axis of the camera O_l with the left imaging plane is expressed in the left imaging plane coordinate system as (c_x1, c_y1), and the intersection of the main optical axis of the camera O_r with the right imaging plane is expressed in the right imaging plane coordinate system as (c_x2, c_y2). The intersection of the line from a point P in space to the optical center of the camera O_l with the left imaging plane is denoted as P_l, and the intersection of the line from the point P to the optical center of the camera O_r with the right imaging plane is denoted as P_r. In actual implementation, according to the focal length f of the two cameras, the distance T between the two cameras, the coordinates of the point P_l in the left imaging plane coordinate system, and the coordinates of the point P_r in the right imaging plane coordinate system, the coordinates of the point P in the above XYZ three-dimensional Cartesian coordinate system can be obtained based on the principle of binocular stereo vision.
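Under the usual rectified-camera assumptions of binocular stereo vision (identical focal length f, baseline T along the X axis, shared principal point), recovering P from P_l and P_r reduces to the standard depth-from-disparity relations. The following is a sketch of that principle, not the embodiment's exact computation:

```python
def triangulate(pl, pr, f, T, c=(0.0, 0.0)):
    """Recover the XYZ coordinates of a space point P from its projections
    P_l and P_r on the left and right imaging planes, assuming rectified
    cameras with equal focal length f, baseline T, and shared principal
    point c.

    pl, pr: image-plane coordinates of P_l and P_r.
    """
    xl, yl = pl[0] - c[0], pl[1] - c[1]
    xr = pr[0] - c[0]
    disparity = xl - xr        # horizontal shift of P between the two views
    Z = f * T / disparity      # depth along the main optical axis
    X = xl * Z / f             # lateral position in the camera frame
    Y = yl * Z / f
    return X, Y, Z
```

The larger the disparity between the two views, the closer the point P is to the cameras.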
• in the embodiment of the present disclosure, the line passing through the fovea of the eye and the center point of the pupil can be determined based on the spatial positions of the fovea and the pupil center point, and the intersection of this line with the terminal display screen can then be determined.
• FIG. 7 is a schematic diagram showing the positional relationship between the human eye line of sight and the terminal display screen according to the embodiment of the present disclosure. As shown in FIG. 7, the line of sight L passing through the fovea E1 and the pupil center point E2 is determined based on the positions of the fovea E1 and the pupil center point E2.
• in actual implementation, the position of the terminal display screen can be determined according to the spatial position of each camera on the terminal and the relative positional relationship between each camera and the display screen of the terminal.
• the intersection point O of the line of sight L and the terminal display screen is the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object; optionally, in the present disclosure, after the position of the intersection point O of the line of sight L and the terminal display screen is determined, an indication point can also be displayed at the position of the intersection point on the terminal display screen, which makes it convenient to visually display the mapping point that forms the preset mapping relationship with the reference object.
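If the coordinate system is chosen so that the terminal display screen lies in the plane z = 0 (an assumption made for this sketch only), the intersection point O of the line of sight L with the screen follows from a single line-plane intersection:

```python
def gaze_mapping_point(e1, e2):
    """Intersection O of the line of sight L through the fovea E1 and the
    pupil center E2 with the screen, assuming the screen is the plane z = 0.

    e1, e2: (x, y, z) spatial coordinates, z measured away from the screen.
    """
    x1, y1, z1 = e1
    x2, y2, z2 = e2
    t = z1 / (z1 - z2)         # parameter at which the line reaches z = 0
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```

The returned pair gives the screen-plane position of the intersection point O, which can then be expressed in the screen coordinate system.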
• the generated function instruction at the mapping point may be used to indicate clicking the mapping point, long pressing at the mapping point, or performing a sliding screen operation; the sliding screen operation can be upward, downward, leftward, or rightward.
• it should be noted that an implementation manner of generating the function instruction at the mapping point, and an implementation manner of implementing the operation on the terminal display screen based on the function instruction, have been described in the foregoing embodiment of the present disclosure and are not repeated here.
• exemplarily, the reference object may be one eye of a person; a camera (front camera) is mounted on the side of the terminal display screen, and the image in the human eye, that is, the imaging of objects in the human eye, is collected by using the front camera. Optionally, in the present disclosure, the sharpest point of the collected image in the human eye can also be determined; here, the center point of the image in the human eye can be determined as the sharpest point.
• an area in the current display content of the terminal display screen that matches the collected image in the human eye, or that matches a reference image, is recorded as a screen matching area; the reference image is an image of an area, in the collected image in the human eye, that includes the sharpest point. For example, the reference image may be an image of a circular area with the sharpest point as the center and a radius of 1 cm.
• afterwards, a point is determined in the screen matching area as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object; for example, a point is determined as the above mapping point in a circular area centered on the center point of the screen matching area and having a radius of 0.5 cm. Optionally, the coordinates of the above mapping point in the screen coordinate system may also be determined.
• exemplarily, the reference object may be the two eyes of a person; two cameras are installed on the surface of the terminal display screen and are respectively recorded as the camera A5 and the camera B5, with the coordinates of the camera A5 in the three-dimensional coordinate system being (x_a5, y_a5, z_a5) and the coordinates of the camera B5 in the three-dimensional coordinate system being (x_b5, y_b5, z_b5); optionally, in the present disclosure, the camera A5 and the camera B5 can be configured in the same plane, and the plane formed by the two cameras is parallel to or coincides with the terminal screen.
• each camera can separately capture images of the two eyes; images of the pupil of each eye can then be obtained by image recognition or image matching technology, and an image of the pupil center point of each eye can finally be determined according to the image of the pupil of each eye. Here, the pupil center point of each eye is the reference point.
• the spatial positions of the pupil center points of the two eyes can be determined from the images of the two pupil center points collected by each camera and the spatial positions of the camera A5 and the camera B5; here, the pupil center points of the two eyes can be recorded as C5 and D5, respectively. Obviously, after the spatial positions of the pupil center points of both eyes are determined, the distance from each camera to the point C5 and the distance from each camera to the point D5 can be derived.
• afterwards, the processor of the terminal can determine the projection points of the pupil center points of the two eyes on the terminal display screen according to the spatial positions of the pupil center points of the two eyes and the position of the terminal display screen; here, the position of the terminal display screen can be determined according to the spatial positions of the camera A5 and the camera B5 on the terminal and the relative positional relationship between the camera A5, the camera B5, and the terminal display screen.
• finally, the midpoint of the line connecting the projection points of the two eyes on the terminal display screen can be determined, and the determined midpoint of the connecting line is the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
• exemplarily, the projection points of the pupil center points of the two eyes on the terminal display screen are recorded as E5 and F5; the midpoint O5 of the line connecting E5 and F5 can be determined according to the coordinates of the E5 point and the F5 point in the screen coordinate system. The O5 point is the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object, and the coordinates of the O5 point in the screen coordinate system are (X_O5, Y_O5).
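The midpoint construction above is straightforward; as a sketch in screen coordinates (function name illustrative):

```python
def mapping_point_from_projections(e5, f5):
    """Midpoint O5 of the line connecting the projection points E5 and F5
    of the two pupil centers on the screen, in screen coordinates."""
    return ((e5[0] + f5[0]) / 2.0, (e5[1] + f5[1]) / 2.0)
```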
• exemplarily, a front camera and a rear camera are installed on the terminal; the front camera is used to collect the image of the reference object A in real time, and the rear camera is used to collect the image of the reference object B in real time. For the reference point selected from the reference object A, its spatial position may be determined based on the image of the reference point, the spatial position of the front camera, and the distance between the front camera and the reference object A; similarly, for the reference point selected from the reference object B, its spatial position can be determined based on the image of the reference point, the spatial position of the rear camera, and the distance between the rear camera and the reference object B.
• for example, the reference point selected from the reference object A is denoted as the point A6 and the reference point selected from the reference object B is denoted as the point B6; the coordinates of the spatial position of the point A6 can be recorded as (X_A6, Y_A6, Z_A6), and the coordinates of the spatial position of the point B6 can be recorded as (X_B6, Y_B6, Z_B6).
• in addition, the terminal can be provided with a switch for selecting the front camera or the rear camera; the user can determine the relative positional relationship between the reference object and the terminal, and thereby which camera to use, as needed either before or after starting the camera.
  • an embodiment of the present disclosure also proposes a terminal.
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure. As shown in FIG. 8, the terminal 800 includes: an image collection device 801 and a processor 802.
  • the image capturing device 801 is configured to collect an image of the reference object in real time, and the distance between the reference object and the display screen of the terminal exceeds a set value;
  • the processor 802 is configured to analyze an image of the reference object to obtain a mapping point on the display screen of the terminal that forms a preset mapping relationship with the reference object, and generate a function instruction at the mapping point, based on the The function instructions implement the operation of the terminal display.
• optionally, the reference object includes a pupil of a person;
• the processor 802 is further configured to acquire the spatial position of the fovea of the eye in real time, the pupil and the fovea being located in the same eye;
• the processor 802 is configured to obtain the spatial position of the pupil center point based on the image of the pupil, and, based on the spatial positions of the fovea and the pupil center point, determine the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  • the image capture device 801 includes at least one camera
  • the processor 802 is configured to select at least one point in the image of the reference object as a reference point; determine a spatial position of each reference point based on an image of each reference point and a spatial position of each camera; Determining a projection point of each reference point on the terminal display screen according to a spatial position of the reference point and a spatial position of the terminal display screen; and determining a mapping point that forms a preset mapping relationship with the reference object based on the projection point.
  • the reference object includes one eye of a person;
  • the image collection device 801 is configured to collect the image in the human eye in real time;
  • the processor 802 is configured to determine the area of the current display content of the terminal display screen that matches the collected image in the human eye as a screen matching area, and to select a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  • the processor 802 is further configured to determine the distance between the terminal display screen and the reference object before the image of the reference object is analyzed;
  • the processor 802 is configured to analyze the image of the reference object when the determined distance is within a set interval, to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object; when the determined distance is not within the set interval, the flow ends directly.
  • the processor 802 is configured to determine the time during which the mapping point stays within a set area, and to generate the function instruction at the mapping point based on the determined duration.
  • the image collection device 801 is further configured to continuously collect images of the actions of the user of the terminal before the function instruction at the mapping point is generated, to obtain action images of the user;
  • the processor 802 is further configured to perform image recognition on the action images of the user to obtain a recognition result, and to generate the function instruction at the mapping point based on the recognition result.
  • a non-transitory computer-readable storage medium comprising instructions, for example a memory comprising instructions, where the instructions may be executed by a processor of the terminal to perform the above method.
  • the non-transitory computer-readable storage medium described above may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
  • embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • the computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing.
  • the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the present disclosure is applicable to the field of human-computer interaction, and can operate the terminal based only on the image of a reference object, without touching the screen with a finger. For the current technical problem that operating the terminal with a finger is inconvenient because terminal display screens have become large, the present disclosure can effectively improve the efficiency of human-computer interaction, improve the operability of the terminal, and improve the user experience.

Abstract

Provided in the embodiments of the present disclosure is a method for operating a terminal, the method comprising: collecting an image of a reference object in real time, the distance between the reference object and a terminal display screen exceeding a set value; analyzing the image of the reference object, to obtain, on the terminal display screen, a mapping point forming a preset mapping relationship with the reference object; generating a function instruction at the mapping point, and operating the terminal display screen on the basis of the function instruction. Further provided in the embodiments of the present disclosure is a terminal.

Description

Method and terminal for operating a terminal

Technical field

The present disclosure relates to the field of human-computer interaction, and in particular, to a method for operating a terminal and a terminal.
Background

At present, terminals with touch screens are used more and more widely; the touch screen on a terminal can serve as a device for realizing human-computer interaction. Illustratively, a resistive touch screen is actually a sensor whose structure is basically a film superimposed on glass. The adjacent sides of the film and the glass are each coated with a nano indium tin oxide (ITO, Indium Tin Oxide) coating; ITO has good conductivity and transparency. When a touch operation occurs, the ITO on the lower layer of the film contacts the ITO on the upper layer of the glass; a corresponding electrical signal is output via the sensor, sent to the processor through a conversion circuit, and converted by computation into coordinate values (X and Y values) on the screen, thereby completing the selection action and presenting it on the screen.

In the prior art, a touch response method and device for a wearable device, and a wearable device, are disclosed, so that the wearable device can feed the effect of a touch operation back to the user in real time and improve the touch accuracy of the wearable device. The technical solution is: acquiring position information of a target fingertip, collected by a binocular recognition device, in a set touch-action occurrence area; determining, according to the position information of the target fingertip, position information of a mapping point of the target fingertip on the screen of the wearable device; and displaying a cursor at the mapping point on the screen of the wearable device.
Summary

In order to solve the existing technical problems, embodiments of the present disclosure provide a method for operating a terminal, and a terminal, which can operate the terminal without touching the screen with a finger.

To achieve the above object, the technical solutions of the embodiments of the present disclosure are implemented as follows:

An embodiment of the present disclosure provides a method for operating a terminal, the method including:

collecting an image of a reference object in real time, where the distance between the reference object and the terminal display screen exceeds a set value;

analyzing the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object;

generating a function instruction at the mapping point, and operating the terminal display screen based on the function instruction.
Optionally, in the present disclosure, the reference object includes a pupil of a person;

correspondingly, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object includes:

acquiring the spatial position of the fovea of the eye in real time, where the pupil and the fovea are located in the same eye; and obtaining the spatial position of the pupil center point based on the image of the pupil;

based on the spatial positions of the fovea and the pupil center point, determining the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
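As a minimal sketch of this intersection step (assuming the terminal display screen lies in the plane z = 0 of the terminal's coordinate system, and that the fovea and pupil-center positions are already available as 3D points; function and variable names are illustrative, not from the disclosure):

```python
def gaze_mapping_point(fovea, pupil_center, eps=1e-9):
    """Intersect the straight line through the fovea and the pupil
    center with the display plane, assumed here to be z = 0.
    Both inputs are (x, y, z) points in the terminal's coordinate
    system; returns the (x, y) mapping point, or None when the
    gaze line is parallel to the screen plane."""
    fx, fy, fz = fovea
    px, py, pz = pupil_center
    dx, dy, dz = px - fx, py - fy, pz - fz   # gaze direction: fovea -> pupil
    if abs(dz) < eps:                        # parallel to the screen plane
        return None
    t = -fz / dz                             # solve fz + t*dz = 0
    return (fx + t * dx, fy + t * dy)

# Eye 40 units in front of the screen, gazing straight ahead:
print(gaze_mapping_point((0.0, 0.0, 41.0), (0.0, 0.0, 40.0)))  # → (0.0, 0.0)
```

When the gaze is oblique, the same solve yields the off-center point where the line of sight meets the screen.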
Optionally, in the present disclosure, collecting the image of the reference object in real time includes: separately collecting images of the pupil with two cameras;

obtaining the spatial position of the pupil center point based on the image of the pupil includes: obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images collected by the two cameras.
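A sketch of the two-camera computation under the usual rectified-stereo assumptions (cameras separated by a known baseline along the x axis, identical focal length in pixels; all names and numbers are illustrative, not from the disclosure):

```python
def triangulate(left_px, right_px, baseline_mm, focal_px):
    """Recover the 3D position (in mm, in the left camera's frame) of a
    point, e.g. the pupil center, from its pixel offsets from the optical
    centers of two rectified cameras: left_px = (xl, yl), right_px = (xr, yr).
    Depth follows from the stereo disparity: z = f * B / (xl - xr)."""
    xl, yl = left_px
    xr, _ = right_px
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity; point cannot be triangulated")
    z = focal_px * baseline_mm / disparity
    return (xl * z / focal_px, yl * z / focal_px, z)

# 60 mm baseline, 500 px focal length, 50 px disparity → 600 mm depth:
print(triangulate((100.0, 0.0), (50.0, 0.0), 60.0, 500.0))  # → (120.0, 0.0, 600.0)
```

Running the same computation on the two pupil-center detections gives the spatial position that the intersection step above consumes.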
Optionally, in the present disclosure, collecting the image of the reference object in real time includes: collecting the image of the reference object in real time with at least one camera;

correspondingly, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object includes:

selecting at least one point in the image of the reference object as a reference point;

determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera;

determining the projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen;

determining the mapping point that forms the preset mapping relationship with the reference object based on the projection points.
Optionally, in the present disclosure, determining the mapping point that forms the preset mapping relationship with the reference object based on the projection points includes: using the projection point as the mapping point that forms the preset mapping relationship with the reference object.

Optionally, in the present disclosure, the number of reference points is two;

determining the mapping point that forms the preset mapping relationship with the reference object based on the projection points includes: determining the projection points of the two reference points on the terminal display screen, and using the midpoint of the line connecting the two determined projection points as the mapping point that forms the preset mapping relationship with the reference object.
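Assuming the display screen lies in the plane z = 0, so that the perpendicular projection of a 3D point simply drops its z coordinate, the two-reference-point case can be sketched as follows (names illustrative, not from the disclosure):

```python
def midpoint_mapping_point(ref1, ref2):
    """Project two 3D reference points (e.g. the two pupil centers)
    perpendicularly onto the screen plane z = 0 and return the midpoint
    of the line connecting the two projection points as the mapping point."""
    (x1, y1, _z1), (x2, y2, _z2) = ref1, ref2
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# Two pupil centers 60 mm apart, both 400 mm from the screen:
print(midpoint_mapping_point((30.0, 10.0, 400.0), (90.0, 10.0, 400.0)))  # → (60.0, 10.0)
```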
Optionally, in the present disclosure, the reference object includes the two eyes of a person;

selecting at least one point in the image of the reference object as a reference point includes: using the pupil center points of the person's two eyes as reference points.
Optionally, in the present disclosure, selecting at least one point in the image of the reference object as a reference point includes:

determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera;

based on the spatial position of the reference object, using the point of the reference object with the smallest vertical distance to the terminal display screen as the reference point.
Optionally, in the present disclosure, collecting the image of the reference object in real time includes: collecting the image of the reference object in real time with one camera;

before the image of the reference object is analyzed, the method further includes: acquiring the distance between the camera and the reference object in real time;

correspondingly, determining the spatial position of the reference point based on the image of the reference point and the spatial position of each camera includes:

determining the spatial position of the reference point based on the image of the reference point, the spatial position of the camera, and the distance between the camera and the reference object.
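With one camera plus a separately measured camera-to-object distance (for example from a proximity or depth sensor), the pinhole camera model recovers the reference point's position. The sketch below assumes the measured distance approximates the depth along the optical axis; all names are illustrative, not from the disclosure:

```python
def back_project(u_px, v_px, distance_mm, focal_px):
    """Recover the 3D position of the reference point from a single
    camera: (u_px, v_px) is the point's pixel offset from the optical
    center, and distance_mm is the measured camera-to-object distance,
    taken here as the depth z along the optical axis."""
    z = distance_mm
    return (u_px * z / focal_px, v_px * z / focal_px, z)

# Point 50 px right and 25 px below center, 300 mm away, 500 px focal length:
print(back_project(50.0, -25.0, 300.0, 500.0))  # → (30.0, -15.0, 300.0)
```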
Optionally, in the present disclosure, the reference object is the object with the smallest vertical distance to the terminal display screen, or the reference object is located on a human body.
Optionally, in the present disclosure, the reference object includes one eye of a person; collecting the image of the reference object in real time includes: collecting the image in the human eye in real time;

analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object includes: determining the area of the current display content of the terminal display screen that matches the collected image in the human eye as a screen matching area, and selecting a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
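One simple way to locate the screen matching area is exhaustive template matching of the (rectified) image captured from the eye against the current display content. The sum-of-absolute-differences scan below is an illustrative sketch using small grayscale images represented as nested lists, not an implementation from the disclosure:

```python
def screen_matching_point(screen, eye_img):
    """Slide eye_img over screen (both 2D grayscale arrays as lists of
    lists), score each placement by the sum of absolute differences,
    and return the center of the best-matching region as the mapping
    point, in (row, col) screen coordinates."""
    H, W = len(screen), len(screen[0])
    h, w = len(eye_img), len(eye_img[0])
    best_sad, best_pos = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            sad = sum(abs(screen[r + i][c + j] - eye_img[i][j])
                      for i in range(h) for j in range(w))
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (r, c)
    r, c = best_pos
    return (r + h // 2, c + w // 2)   # center of the screen matching area

screen = [[r * 5 + c for c in range(5)] for r in range(5)]
eye = [[7, 8], [12, 13]]              # matches screen rows 1-2, cols 2-3 exactly
print(screen_matching_point(screen, eye))  # → (2, 3)
```

A production implementation would use a robust matcher (e.g. normalized cross-correlation) to tolerate the distortion of the corneal reflection; the structure of the search is the same.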
Optionally, in the present disclosure, before the image of the reference object is analyzed, the method further includes: determining the distance between the terminal display screen and the reference object;

analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object includes: when the determined distance is within a set interval, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
Optionally, in the present disclosure, generating the function instruction at the mapping point includes: determining the time during which the mapping point stays within a mapping point area on the terminal display screen, and generating the function instruction at the mapping point based on the determined duration; the mapping point area includes the initial position of the mapping point on the terminal display screen.
Optionally, in the present disclosure, generating the function instruction at the mapping point based on the determined duration includes:

when the determined duration is within a first set range, generating an instruction indicating a click at the current mapping point; when the determined duration is within a second set range, generating an instruction indicating a long press at the current mapping point; when the determined duration is within a third set range, generating an instruction indicating a screen-sliding operation;

where no two of the first set range, the second set range, and the third set range overlap.

Optionally, in the present disclosure, the first set range is the range from a first time threshold to a second time threshold; the second set range is greater than the second time threshold; the third set range is less than the first time threshold; and the first time threshold is less than the second time threshold.
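The three non-overlapping dwell-time ranges can be sketched as follows; the threshold values (0.5 s and 2.0 s) and the boundary handling are assumptions for illustration, not values given in the disclosure:

```python
def instruction_for_dwell(dwell_s, t1=0.5, t2=2.0):
    """Map the time the mapping point has stayed in the mapping point
    area to a function instruction; t1 < t2 are the two time thresholds."""
    if dwell_s < t1:
        return "slide"       # third set range: below the first threshold
    if dwell_s <= t2:
        return "click"       # first set range: between the two thresholds
    return "long_press"      # second set range: above the second threshold

print([instruction_for_dwell(t) for t in (0.2, 1.0, 3.0)])  # → ['slide', 'click', 'long_press']
```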
Optionally, in the present disclosure, generating the instruction indicating a screen-sliding operation includes: acquiring the moving direction and moving rate of the mapping point, and generating the instruction indicating a screen-sliding operation based on the moving direction and moving rate of the mapping point.

Optionally, in the present disclosure, generating the instruction indicating a screen-sliding operation based on the moving direction and moving rate of the mapping point includes:

using the moving rate of the mapping point in the lateral direction of the mobile terminal display screen as the lateral moving rate of the mapping point, and the moving rate of the mapping point in the longitudinal direction of the mobile terminal display screen as the longitudinal moving rate of the mapping point;

when the lateral moving rate of the mapping point is greater than its longitudinal moving rate, generating an instruction indicating a lateral screen-sliding operation; or, when the lateral moving rate of the mapping point is greater than its longitudinal moving rate and the lateral moving rate satisfies a first set condition, generating an instruction indicating a lateral screen-sliding operation;

when the longitudinal moving rate of the mapping point is greater than its lateral moving rate, generating an instruction indicating a longitudinal screen-sliding operation; or, when the longitudinal moving rate of the mapping point is greater than its lateral moving rate and the longitudinal moving rate satisfies a second set condition, generating an instruction indicating a longitudinal screen-sliding operation.

Optionally, in the present disclosure, the first set condition is that the lateral moving rate of the mapping point is within a fourth set range, and the second set condition is that the longitudinal moving rate of the mapping point is within a fifth set range.
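The rate comparison can be sketched as below; the rate window [v_min, v_max] stands in for the fourth and fifth set ranges, and its bounds are illustrative assumptions, not values from the disclosure:

```python
def slide_instruction(vx, vy, v_min=20.0, v_max=2000.0):
    """Choose the screen-sliding axis from the mapping point's lateral
    (vx) and longitudinal (vy) movement rates, e.g. in px/s. A rate
    must win the comparison and fall inside the set range to count,
    which filters out jitter and tracking glitches."""
    ax, ay = abs(vx), abs(vy)
    if ax > ay and v_min <= ax <= v_max:
        return "slide_horizontal"
    if ay > ax and v_min <= ay <= v_max:
        return "slide_vertical"
    return None               # no sliding instruction generated

print(slide_instruction(300.0, 40.0))  # → slide_horizontal
```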
Optionally, in the present disclosure, the mapping point area includes the initial position of the mapping point on the terminal display screen, and the area of the mapping point area is less than or equal to a set threshold.

Optionally, in the present disclosure, the mapping point area is a circular area centered on the initial position of the mapping point.
Optionally, in the present disclosure, before the function instruction at the mapping point is generated, the method further includes: continuously collecting images of the actions of the user of the terminal to obtain action images of the user, and performing image recognition on the action images of the user to obtain a recognition result;

correspondingly, generating the function instruction at the mapping point includes: generating the function instruction at the mapping point based on the recognition result.

Optionally, in the present disclosure, generating the function instruction at the mapping point based on the recognition result includes: when the recognition result is a blink action, a mouth-opening action, or a mouth-closing action, generating an instruction indicating a click at the mapping point; when the recognition result is a nodding action, generating an instruction indicating a downward screen-sliding operation; when the recognition result is a head-raising action, generating an instruction indicating an upward screen-sliding operation; and when the recognition result is a left-right head-shaking action, generating an instruction indicating a lateral screen-sliding operation.
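The recognition-result-to-instruction mapping is a straightforward dispatch; the string labels below are illustrative names, not identifiers from the disclosure:

```python
ACTION_TO_INSTRUCTION = {
    "blink":       "click",            # blink, mouth open, or mouth close -> click
    "mouth_open":  "click",
    "mouth_close": "click",
    "nod":         "slide_down",       # nodding -> downward screen slide
    "head_up":     "slide_up",         # raising the head -> upward screen slide
    "head_shake":  "slide_horizontal", # shaking the head left/right -> lateral slide
}

def instruction_for_action(recognition_result):
    """Return the function instruction for a recognized user action,
    or None when the action is not one of the handled gestures."""
    return ACTION_TO_INSTRUCTION.get(recognition_result)

print(instruction_for_action("nod"))  # → slide_down
```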
Optionally, in the present disclosure, the function instruction at the mapping point is: a function instruction indicating a click at the current mapping point, a function instruction indicating a long press at the current mapping point, or an instruction indicating a screen-sliding operation.
An embodiment of the present disclosure further provides a terminal, including an image collection device and a processor, where:

the image collection device is configured to collect an image of a reference object in real time, where the distance between the reference object and the terminal display screen exceeds a set value;

the processor is configured to analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object, to generate a function instruction at the mapping point, and to operate the terminal display screen based on the function instruction.
Optionally, in the present disclosure, the reference object includes a pupil of a person;

the processor is further configured to acquire the spatial position of the fovea of the eye in real time, where the pupil and the fovea are located in the same eye;

correspondingly, the processor is configured to obtain the spatial position of the pupil center point based on the image of the pupil, and, based on the spatial positions of the fovea and the pupil center point, to determine the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
Optionally, in the present disclosure, the image collection device includes at least one camera;

the processor is configured to select at least one point in the image of the reference object as a reference point; determine the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determine the projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determine the mapping point that forms the preset mapping relationship with the reference object based on the projection points.
Optionally, in the present disclosure, the reference object includes one eye of a person;

the image collection device is configured to collect the image in the human eye in real time;

the processor is configured to determine the area of the current display content of the terminal display screen that matches the collected image in the human eye as a screen matching area, and to select a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
Optionally, in the present disclosure, the processor is further configured to determine the distance between the terminal display screen and the reference object before the image of the reference object is analyzed;

the processor is configured to analyze the image of the reference object when the determined distance is within a set interval, to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
Optionally, in the present disclosure, the processor is configured to determine the time during which the mapping point stays within a set area, and to generate the function instruction at the mapping point based on the determined duration.
Optionally, in the present disclosure, the image collection device is further configured to continuously collect images of the actions of the user of the terminal before the function instruction at the mapping point is generated, to obtain action images of the user;

the processor is further configured to perform image recognition on the action images of the user to obtain a recognition result, and to generate the function instruction at the mapping point based on the recognition result.
The method for operating a terminal and the terminal provided by the embodiments of the present disclosure collect, in real time, an image of a reference object that is not in contact with the terminal display screen; analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; and generate a function instruction at the mapping point, based on which the terminal display screen is operated. In this way, the mapping point on the terminal display screen can be obtained by analyzing the image of the reference object, and operations such as click, long press, and screen sliding can then be completed based on the function instruction at the mapping point, realizing non-touch operation of the terminal. The mapping point of a fingertip can also be determined, so that the mobile phone is operated according to the fingertip trajectory and the mobile phone screen is operated directly through the mapping point. The terminal can thus be operated based only on the image of the reference object, without touching the screen with a finger. For the current technical problem that operating the terminal with a finger is inconvenient because terminal display screens have become large, the present disclosure can effectively improve the efficiency of human-computer interaction, improve the operability of the terminal, and improve the user experience.
Brief description of the drawings

FIG. 1 is a flowchart of a method for operating a terminal according to an embodiment of the present disclosure;

FIG. 2 is a first schematic diagram of projection points of reference points on a terminal display screen according to an embodiment of the present disclosure;

FIG. 3 is a second schematic diagram of projection points of reference points on a terminal display screen according to an embodiment of the present disclosure;

FIG. 4 is a flowchart of an implementation of the method for operating a terminal according to an embodiment of the present disclosure;

FIG. 5 is a schematic diagram of the fovea and the pupil of an eye in an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of the principle of determining the position of a point in space with two cameras according to an embodiment of the present disclosure;

FIG. 7 is a schematic diagram of the positional relationship between the human line of sight and a terminal display screen according to an embodiment of the present disclosure;

FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
Detailed description

The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are merely intended to explain the present disclosure, not to limit it.

An embodiment of the present disclosure describes a method for operating a terminal, which can be applied to a terminal. The terminal may be a fixed terminal or a mobile terminal having a display screen; illustratively, the display screen may have no touch response function, or may be a touch screen with a touch response function. The mobile terminal may be a smartphone, a tablet computer, or a wearable device (such as smart glasses or a smart watch), and may also be a smart car or a smart home appliance (such as a smart refrigerator, a smart television, or a set-top box). The operating system of the smartphone may be the Android operating system, the iOS operating system, or any other third-party operating system that can run on a microcomputer structure including at least a processor and a memory (such as a mobile Linux system or the BlackBerry QNX operating system).

The terminal described above includes an image collection device for collecting the image of the reference object; the reference object may be, for example, a human eye, a nose, or another object. The image collection device may include at least one camera.

The terminal described above further includes an image analysis device for analyzing the collected image of the reference object; in actual implementation, the image analysis device may be a processor on the terminal.

Based on the display screen, the image collection device, and the image analysis device provided on the terminal described above, the following embodiments are proposed.
图1为本公开实施例操作终端的方法的流程图,如图1所示,该流程包括:FIG. 1 is a flowchart of a method for operating a terminal according to an embodiment of the present disclosure. As shown in FIG. 1 , the process includes:
Step 101: Acquire an image of a reference object in real time, where the distance between the reference object and the terminal display screen exceeds a set value.
Here, the set value is greater than 0 and may be chosen according to the actual application scenario; that is, the reference object does not make contact with the terminal display screen.
The kind of reference object is not restricted here. For example, the reference object may include a person's eye or nose, or the reference object may be the object with the smallest perpendicular distance from the terminal display screen, and so on.
Optionally, at least one camera may be used to acquire the image of the pupil; for example, the number of cameras is one or two. A camera may be arranged on the side of the terminal display screen, that is, a front camera on the terminal; a camera may also be arranged on the back of the terminal, that is, a rear camera on the terminal.
Step 102: Analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object.
Three cases of implementing this step are described below.
First case
The reference object includes a person's pupil. Before the image of the reference object is analyzed, the spatial position of the fovea of the eye is acquired in real time; the pupil and the fovea belong to the same eye. In step 101, two cameras each acquire an image of the pupil.
Accordingly, this step includes: obtaining the spatial position of the pupil center point based on the images of the pupil; and, based on the spatial positions of the fovea and the pupil center point, determining the intersection of the line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
In practice, the two cameras may each acquire an image of the fovea, or each acquire an image of the pupil center point. Here, an image of the eye can be acquired; image recognition or image matching is then used to obtain an image of the pupil, and finally the image of the pupil center point is determined from the image of the pupil.
Here, obtaining the spatial position of the pupil center point based on the images of the pupil includes: deriving the spatial position of the pupil center point from the spatial positions of the two cameras and the images acquired by the two cameras.
It can be understood that the spatial position of each camera can be represented by three-dimensional coordinates. In practice, the coordinates of a point on the terminal can be set in advance, and the three-dimensional coordinates of each camera are then determined from the positional relationship between that point and each camera.
After the spatial positions of the two cameras and the images acquired by the two cameras are obtained, the spatial position of the pupil center point can be determined using binocular stereo vision techniques.
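As a non-limiting illustration of the binocular stereo vision step (not part of the original disclosure), the sketch below recovers the 3D position of a point from its pixel coordinates in two cameras, assuming an idealized rectified stereo pair with known baseline and focal length; the function name, units, and the pinhole camera model are illustrative assumptions.

```python
def triangulate(x_left, x_right, y, baseline, focal_px):
    """Recover (X, Y, Z) of a point seen by two rectified cameras.

    x_left / x_right: horizontal pixel coordinates of the point in the
    left / right image, measured from each principal point; y: vertical
    pixel coordinate; baseline: camera separation in metres; focal_px:
    focal length in pixels. An idealized rectified stereo pair is assumed.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    z = focal_px * baseline / disparity   # depth from disparity
    x = x_left * z / focal_px             # lateral offset
    y3d = y * z / focal_px                # vertical offset
    return (x, y3d, z)
```

With a 6 cm baseline, an 800-pixel focal length, and a 20-pixel disparity, the point lies 2.4 m from the cameras.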
It can be seen that the line passing through the fovea and the pupil center point effectively represents the principal line of sight of the human eye; therefore, the determined mapping point is the intersection of a line of sight of the human eye with the terminal display screen.
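The gaze-line intersection can be sketched as a line–plane intersection (a non-limiting illustration, not part of the original disclosure); the assumption that the display lies in the plane z = 0 and the function name are illustrative.

```python
def line_plane_intersection(fovea, pupil, plane_z=0.0):
    """Intersect the gaze line through the fovea and the pupil centre
    with the display plane z = plane_z (the screen is assumed, for
    illustration, to lie in the z = 0 plane)."""
    fx, fy, fz = fovea
    px, py, pz = pupil
    if fz == pz:
        raise ValueError("gaze line is parallel to the screen plane")
    t = (plane_z - fz) / (pz - fz)        # parameter along the gaze line
    return (fx + t * (px - fx), fy + t * (py - fy), plane_z)
```

For a fovea at (0, 0, 4) and a pupil centre at (1, 2, 2), the gaze line meets the screen at (2, 4, 0).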
Second case
In step 101, at least one camera acquires the image of the reference object in real time. The kind of reference object is not restricted here; for example, the reference object may be the object with the smallest perpendicular distance from the terminal display screen, or the reference object may be part of the human body.
Accordingly, this step may include: selecting at least one point in the image of the reference object as a reference point; determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determining the projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determining, based on the projection points, the mapping point that forms the preset mapping relationship with the reference object.
For example, selecting a point in the image of the reference object as the reference point includes: determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera; and, based on the spatial position of the reference object, taking the point of the reference object with the smallest perpendicular distance from the terminal display screen as the reference point. Optionally, in the present disclosure, the reference point is the point of the reference object with the smallest perpendicular distance from the terminal display screen. For example, when the reference object is a person's nose, the reference point is the tip of the nose; when the reference object is the object with the smallest perpendicular distance from the terminal display screen, and that object is a person's finger, the reference point is the fingertip.
Here, when the number of cameras is one, the distance between the camera and the reference object may be acquired in real time before the image of the reference object is analyzed; the spatial position of the reference point is then determined from the image of the reference point, the spatial position of the camera, and the distance between the camera and the reference object.
When the number of cameras is two, a point may be selected in the image of the reference object as the reference point; the spatial position of the reference point is determined from the images of the reference point and the spatial positions of the two cameras; the projection point of the reference point on the terminal display screen is determined from the spatial position of the reference point and the spatial position of the terminal display screen; and the projection point is taken as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
In practice, a distance sensing device or distance detection device may be arranged on the terminal at a position whose distance from the camera does not exceed a first distance threshold. This device detects the distance between itself and the reference object, and may be a displacement sensor or a proximity sensor provided on the terminal. The detected distance can then be used as the distance between the camera and the reference object.
In practice, the position of the terminal display screen may be determined from the spatial position of each camera on the terminal and the relative positional relationship between each camera and the display screen. The projection point of each reference point on the terminal display screen is illustrated below with reference to the accompanying drawings.
FIG. 2 is a first schematic diagram of the projection point of a reference point on a terminal display screen according to an embodiment of the present disclosure. As shown in FIG. 2, camera a and camera b are two cameras arranged on the terminal, and the plane in which camera a and camera b lie coincides with the plane of the terminal display screen. The coordinates of the reference point can be expressed as (X, Y, Z), and the projection point of the reference point on the terminal display screen is the intersection of the display screen with the line that passes through the reference point perpendicular to the plane of the display screen.
FIG. 3 is a second schematic diagram of the projection point of a reference point on a terminal display screen according to an embodiment of the present disclosure. As shown in FIG. 3, sensing point a and sensing point b may each be a camera arranged on the terminal, and the plane of the terminal display screen passes through sensing point a and sensing point b. The sensing layer represents the terminal display screen. The coordinates of the reference point can be expressed as (X', Y', Z'), and the projection point of the reference point on the sensing layer is the intersection of the sensing layer with the line that passes through the reference point perpendicular to the plane of the sensing layer.
After the projection point is determined, it may be taken as the mapping point that forms the preset mapping relationship with the reference object. Alternatively, when the number of reference points is two, the projection points of the two reference points on the terminal display screen are determined, and the midpoint of the line segment connecting the two projection points is taken as the mapping point that forms the preset mapping relationship with the reference object.
In one implementation, the reference object includes a person's two eyes; the pupil center points of the two eyes are taken as the reference points. The projection points of the two pupil center points on the terminal display screen are then determined, and the midpoint of the line segment connecting the two projection points is taken as the mapping point that forms the preset mapping relationship with the reference object.
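A minimal sketch of this two-pupil implementation (a non-limiting illustration, not part of the original disclosure), assuming, as in FIG. 2, that the display lies in the z = 0 plane so the orthogonal projection of a point (X, Y, Z) is simply (X, Y); the function names are illustrative.

```python
def project_to_screen(point):
    """Orthogonal projection of a 3D point onto the screen plane,
    assumed here to be z = 0 (the camera plane coincides with the
    display plane, as in FIG. 2)."""
    x, y, _z = point
    return (x, y)

def midpoint_mapping(left_pupil, right_pupil):
    """Mapping point for the two-pupil case: the midpoint of the two
    projection points on the terminal display screen."""
    lx, ly = project_to_screen(left_pupil)
    rx, ry = project_to_screen(right_pupil)
    return ((lx + rx) / 2, (ly + ry) / 2)
```

For pupil centres at (0, 0, 30) and (6, 2, 30), the mapping point is (3, 1) on the screen.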
Third case
The reference object is one of a person's eyes; in step 101, the image in the person's eye is acquired in real time.
Accordingly, this step may include: determining the region of the content currently displayed on the terminal display screen that matches the acquired image in the person's eye as the screen matching region; and selecting a point within the screen matching region as the mapping point that forms the preset mapping relationship with the reference object.
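A toy, non-limiting stand-in for the screen-content matching step (not part of the original disclosure): it scans a small grayscale grid for the region that best matches a patch by minimizing the sum of squared differences, and returns the center of the matched region as one possible choice of mapping point. A practical implementation would use a library matcher; the grid representation and names are assumptions.

```python
def best_match(screen, patch):
    """Locate the region of `screen` (a 2D grayscale grid) that best
    matches `patch` by minimizing the sum of squared differences, and
    return the (row, col) of the matched region's center."""
    sh, sw = len(screen), len(screen[0])
    ph, pw = len(patch), len(patch[0])
    best = None
    for top in range(sh - ph + 1):
        for left in range(sw - pw + 1):
            ssd = sum((screen[top + i][left + j] - patch[i][j]) ** 2
                      for i in range(ph) for j in range(pw))
            if best is None or ssd < best[0]:
                best = (ssd, top, left)
    _, top, left = best
    # the center of the matched region is one natural mapping point
    return (top + ph // 2, left + pw // 2)
```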
Optionally, in the present disclosure, before the image of the reference object is analyzed, the distance between the terminal display screen and the reference object may also be acquired. When the acquired distance falls within a set interval, step 102 is performed; otherwise, if the acquired distance does not fall within the set interval, the process ends directly.
Here, the set interval is an interval of distances, for example [0 cm, 5 cm] or [30 cm, 50 cm].
Optionally, in the present disclosure, after the mapping point is obtained, a cursor may be displayed at the mapping point on the terminal display screen for the user to observe.
Step 103: Generate a function instruction at the mapping point, and perform an operation on the terminal display screen based on the function instruction.
Here, the function instruction at the mapping point generated in this step may be an instruction to click at the current mapping point, an instruction to long-press at the current mapping point, an instruction to perform a swipe operation, and so on. The instruction to perform a swipe operation may be an instruction to perform a vertical swipe or an instruction to perform a horizontal swipe; the instruction to perform a vertical swipe may be an instruction to swipe up or an instruction to swipe down, and the instruction to perform a horizontal swipe may be an instruction to swipe left or an instruction to swipe right.
In this step, the function instruction at the mapping point may be generated in either of the following two manners.
First manner: determine the time during which the mapping point stays within a mapping-point region on the terminal display screen, and generate the function instruction at the mapping point based on the determined time. The mapping-point region includes the initial position of the mapping point on the terminal display screen.
In practice, the processor of the terminal may determine the time during which the mapping point stays within the mapping-point region; the processor of the terminal may then generate the function instruction at the mapping point based on the determined time.
Here, the mapping-point region may be a region of the terminal display screen that includes the initial position of the mapping point, with an area not exceeding a set threshold. For example, if the initial position of the mapping point is point A, the mapping-point region may be a region including point A whose area is smaller than the set threshold, where the threshold may be set to 0.2 cm², 0.3 cm², and so on; the boundary of the region may be a circle, an ellipse, a polygon, and so on.
Preferably, the mapping-point region may be a circular region centered on the initial position of the mapping point; for example, a circular region whose center is the initial position of the mapping point and whose radius is a set length.
For example, generating the function instruction at the mapping point based on the determined time includes: when the determined time falls within a first set range, generating an instruction to click at the current mapping point; when the determined time falls within a second set range, generating an instruction to long-press at the current mapping point; and when the determined time falls within a third set range, generating an instruction to perform a swipe operation. The first, second, and third set ranges are pairwise non-overlapping; that is, no two of the three set ranges intersect.
For example, the first set range is denoted interval 1, the second set range interval 2, and the third set range interval 3. Intervals 1, 2, and 3 all characterize ranges of time values, and each interval may be open, closed, or half-open; however, no two of intervals 1, 2, and 3 intersect.
It can be seen that, because the first, second, and third set ranges are pairwise non-overlapping, the time during which the mapping point stays within the region can fall within at most one of the three set ranges; this ensures that at most one kind of function instruction is generated.
In a preferred embodiment, the first set range is [a1, a2], where a1 denotes a first time threshold, a2 denotes a second time threshold, and a1 is less than a2; the second set range is (a2, ∞), and the third set range is (0, a1).
For example, the first time threshold a1 is 5 seconds and the second time threshold a2 is 10 seconds; the mapping-point region is a circular region centered on the initial position of the mapping point with a radius of 0.3 cm, serving as an equivalence range for the initial position of the mapping point. Because the image of the reference object may change, the position of the mapping point changes accordingly. The processor of the terminal can therefore record the time during which the mapping point stays within the mapping-point region: an instruction to click at the current mapping point is generated only when the continuous dwell time of the mapping point in the mapping-point region is at least 5 seconds and less than 10 seconds, and an instruction to long-press at the current mapping point is generated when the continuous dwell time is greater than 10 seconds and less than 50 seconds.
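The first manner can be sketched as a simple dwell-time classifier (a non-limiting illustration, not part of the original disclosure), using the example thresholds a1 = 5 s and a2 = 10 s from the text; the function name and command labels are illustrative assumptions.

```python
def command_from_dwell(dwell_seconds, a1=5.0, a2=10.0):
    """Map the continuous dwell time of the mapping point inside the
    mapping-point region to a command: (0, a1) -> swipe, [a1, a2] ->
    click, (a2, inf) -> long press. Returns None for non-positive input."""
    if dwell_seconds <= 0:
        return None
    if dwell_seconds < a1:
        return "swipe"        # third set range: large movement
    if dwell_seconds <= a2:
        return "click"        # first set range
    return "long_press"       # second set range
```

Because the three ranges are pairwise disjoint, exactly one command (or none) is produced for any dwell time.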
Optionally, in the present disclosure, if the continuous dwell time of the mapping point in the mapping-point region falls within the third set range, for example if it is less than 5 seconds, it can be concluded that the mapping point has moved by a relatively large amount; in this case, an instruction to perform a swipe operation is generated. In practice, as the line of sight moves, the terminal continuously collects the coordinates of the mapping point and, from the change in the coordinate positions and the time taken by the change, derives the direction and rate of movement of the mapping point.
In practice, a two-dimensional Cartesian coordinate system may be established in the plane of the terminal display screen and denoted the screen coordinate system. The X-axis of the screen coordinate system is set along the horizontal direction of the display screen, and the Y-axis along the vertical direction; in addition, the spatial position of the mapping point can be expressed in screen-coordinate terms. The positive X-axis of the screen coordinate system points horizontally to the right, and the positive Y-axis points vertically upward.
It can be understood that the lateral movement distance of the mapping point is computed from the change in the X value of its screen coordinates, and the lateral movement rate is then computed from the time taken by that change; here, the lateral movement direction is left or right. For example, with the positive X-axis pointing horizontally to the right: if, within 1.5 seconds, the X value of the mapping point's screen coordinates increases from 0 to 30, the lateral movement rate of the mapping point is 20 per second and the lateral movement direction is to the right; if, within 1.5 seconds, the X value decreases from 15 to 0, the lateral movement rate is 10 per second and the lateral movement direction is to the left.
Similarly, the vertical movement distance of the mapping point can be computed from the change in the Y value of its screen coordinates, and the vertical movement rate from the time taken by that change; here, the vertical movement direction is up or down. For example, with the positive Y-axis pointing vertically upward: if, within 1 second, the Y value of the mapping point's screen coordinates increases from 0 to 18, the vertical movement rate is 18 per second and the vertical movement direction is upward; if, within 1 second, the Y value decreases from 0 to −10, the vertical movement rate is 10 per second and the vertical movement direction is downward.
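The rate computations above can be sketched as follows (a non-limiting illustration, not part of the original disclosure); coordinates are in the screen coordinate system, and the function name is an assumption.

```python
def movement_rates(start, end, elapsed_s):
    """Lateral (X) and vertical (Y) movement rates of the mapping point,
    from two sampled screen-coordinate positions and the elapsed time."""
    (x0, y0), (x1, y1) = start, end
    return (abs(x1 - x0) / elapsed_s, abs(y1 - y0) / elapsed_s)
```

Replaying the worked examples: X going from 0 to 30 in 1.5 s gives a lateral rate of 20 per second; Y going from 0 to 18 in 1 s gives a vertical rate of 18 per second.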
In one implementation, if the vertical movement rate of the mapping point is greater than its lateral movement rate, an instruction to perform a vertical swipe operation is generated, that is, an instruction to swipe up or down; the direction of the vertical swipe is up or down. For example, if the vertical movement rate of the mapping point is b1 and the lateral movement rate is b2, then when b1 is greater than b2, an instruction to swipe up or an instruction to swipe down is generated.
If the lateral movement rate of the mapping point is greater than its vertical movement rate, an instruction to perform a horizontal swipe operation is generated, that is, an instruction to swipe left or right; the direction of the horizontal swipe is left or right. For example, if the lateral movement rate of the mapping point is b3 and the vertical movement rate is b4, then when b3 is greater than b4, an instruction to swipe left or an instruction to swipe right is generated.
In particular, when the lateral movement rate of the mapping point equals its vertical movement rate, no function instruction may be generated, or an instruction to perform a vertical swipe operation or an instruction to perform a horizontal swipe operation may be generated.
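Combining the rate comparison with the sign of the displacement gives a sketch of the swipe-command selection described above (a non-limiting illustration, not part of the original disclosure); the command labels and function name are assumptions, while the convention that positive X is rightward and positive Y is upward follows the text.

```python
def swipe_command(rate_x, rate_y, dx, dy):
    """Choose a swipe command from the lateral/vertical movement rates
    and the signed displacement of the mapping point: the faster axis
    wins, and the sign of the displacement on that axis gives the
    direction (positive X = right, positive Y = up)."""
    if rate_x == rate_y:
        return None            # the text also permits generating no command
    if rate_y > rate_x:
        return "swipe_up" if dy > 0 else "swipe_down"
    return "swipe_right" if dx > 0 else "swipe_left"
```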
In practice, the processor of the terminal may record the screen coordinates of the start point and end point of the mapping point's movement. The vertical direction from the start point to the end point may be taken as the direction of the swipe operation, or the vertical direction from the end point to the start point may be taken instead. For example, let the Y value of the screen coordinates of the start point of the mapping point's movement (the initial position of the mapping point) be c1, and the Y value of the screen coordinates of the end point be c2; if c1 is less than c2, the vertical direction from the start point to the end point is upward, and the vertical direction from the end point to the start point is downward. In this case, either the upward or the downward direction may be taken as the direction of the swipe operation.
Likewise, the lateral direction from the start point to the end point of the mapping point's movement may be taken as the direction of the swipe operation, or the lateral direction from the end point to the start point may be taken instead. For example, let the X value of the screen coordinates of the start point be d1, and the X value of the screen coordinates of the end point be d2; if d1 is less than d2, the lateral direction from the start point to the end point is to the right, and the lateral direction from the end point to the start point is to the left. In this case, either the rightward or the leftward direction may be taken as the direction of the swipe operation.
In another implementation, when the lateral movement rate of the mapping point is greater than its vertical movement rate, an instruction to perform a horizontal swipe operation is not generated directly; instead, it is further judged whether the lateral movement rate satisfies a first set condition. If the lateral movement rate of the mapping point satisfies the first set condition, an instruction to perform a horizontal swipe is generated; if the lateral movement rate of the mapping point does not satisfy the first set condition, no instruction is generated.
Here, the first set condition may be that the lateral movement rate of the mapping point falls within a fourth set range; the fourth set range may be greater than v1, less than v2, or between v3 and v4, where v3 is not equal to v4. v1, v2, v3, and v4 may all be set by the user of the terminal; that is, v1, v2, v3, and v4 may all be set rate values.
Similarly, when the vertical movement rate of the mapping point is greater than its lateral movement rate, an instruction to perform a vertical swipe operation is not generated directly; instead, it is further judged whether the vertical movement rate satisfies a second set condition. If the vertical movement rate of the mapping point satisfies the second set condition, an instruction to perform a vertical swipe is generated; if it does not, no instruction is generated.
Here, the second set condition may be that the vertical movement rate of the mapping point falls within a fifth set range; the fifth set range may be greater than v5, less than v6, or between v7 and v8, where v7 is not equal to v8. v5, v6, v7, and v8 may all be set by the user of the terminal; that is, v5, v6, v7, and v8 may all be set rate values.
Second mode: before the function instruction at the mapping point is generated, images of the actions of the terminal's user are captured continuously to obtain action images of the user; image recognition is performed on the user's action images to obtain a recognition result; and the function instruction at the mapping point is generated based on the recognition result.
Here, the image capture device of the terminal may be used to capture the user's action images, after which the processor of the terminal recognizes the user's action images and generates the function instruction at the mapping point based on the recognition result. For example, the front camera may be used to capture image changes of the user's head and thereby acquire the user's action images.
Illustratively, when the recognition result is a blink, a mouth-opening action, or a mouth-closing action, an instruction indicating a click at the mapping point is generated; when the recognition result is a nod, an instruction indicating a downward swipe operation is generated; when the recognition result is a head-raising action, an instruction indicating an upward swipe operation is generated; and when the recognition result is a left-right head shake, an instruction indicating a leftward or rightward swipe operation is generated.
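The gesture-to-instruction correspondence above amounts to a lookup table. In the sketch below, the gesture labels are assumed outputs of an upstream recognizer, and the instruction names are illustrative rather than taken from the disclosure.

```python
# Mapping from recognized user actions to function instructions,
# following the correspondence described in the text.
GESTURE_TO_INSTRUCTION = {
    "blink": "click_at_mapping_point",
    "mouth_open": "click_at_mapping_point",
    "mouth_close": "click_at_mapping_point",
    "nod": "swipe_down",
    "head_up": "swipe_up",
    "shake_left": "swipe_left",
    "shake_right": "swipe_right",
}

def instruction_for(gesture):
    # An unrecognized action generates no instruction.
    return GESTURE_TO_INSTRUCTION.get(gesture)
```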
In an actual implementation, the processor of the terminal may be used to generate the function instruction at the mapping point, after which the terminal automatically performs the operation on the terminal display screen based on that function instruction. That is, the terminal can operate its display screen automatically based on the function instruction, without requiring the user to touch the screen.
Here, the operation on the terminal display screen corresponds to the function instruction. Illustratively, when the function instruction at the mapping point indicates a click at the current mapping point, a click operation on the mapping point can be performed based on that instruction; when the function instruction indicates a long press at the current mapping point, a long-press operation on the mapping point can be performed; and when the function instruction indicates a swipe operation, a swipe operation can be performed.
After the operation on the terminal display screen is performed, the effect of the operation can be shown on the display screen. For example, performing a click operation on the mapping point can achieve the effect of opening or exiting a menu; performing a long-press operation on the mapping point can achieve the effect of long-pressing a menu; and performing a swipe operation can achieve the effect of turning a page.
In the embodiments of the method for operating a terminal of the present disclosure, the image of the reference object is analyzed to obtain a mapping point on the terminal display screen, and operations such as clicking, long-pressing, and swiping are then completed based on the function instruction at the mapping point, so that the terminal is operated in a non-touch manner. The terminal can be operated purely on the basis of the reference-object image, without touching the screen with a finger. For the existing technical problem that larger display screens make finger operation of a terminal inconvenient, the present disclosure can effectively improve the efficiency of human-computer interaction, improve the operability of the terminal, and enhance the user experience.
In order to better embody the purpose of the present disclosure, further examples are given below on the basis of the above embodiments of the present disclosure.
FIG. 4 is a flowchart of one implementation of the method for operating a terminal according to an embodiment of the present disclosure. As shown in FIG. 4, the flow includes the following steps.
Step 401: Detect whether the positioning function of the terminal is enabled. If the positioning function is not enabled, the flow ends directly; in this case the terminal does not respond and no function instruction is generated. If the positioning function is enabled, go to step 402.
Step 402: Detect whether there is an object above the terminal display screen. If there is no object above the terminal, the flow ends directly; in this case the terminal does not respond and no function instruction is generated. If there is an object above the terminal display screen, go to step 403.
Here, a distance detection device and a distance sensing device may be used to determine, within the detection range of the distance detection device and the sensing range of the distance sensing device, whether there is an object above the terminal display screen.
Step 403: Detect whether a sensing space range is set on the terminal. If not, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen, and then completes the click, long-press, or swipe command operation according to the coordinate position of the mapping point and the movement direction and movement rate of the mapping point. If a sensing space range is set on the terminal, go to step 404.
In this step, the sensing space range is the same as the set interval of distances in the above embodiment.
Step 404: When the object is within the sensing space range, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen, and goes to step 405.
It should be noted that when the object is not within the sensing space range, the terminal does not respond.
Step 405: The terminal completes the click, long-press, or swipe command operation according to the coordinate position of the mapping point and the movement direction and movement rate of the mapping point.
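Steps 401 to 405 can be sketched as the following decision flow. The callables `compute_mapping_point` and `execute_command` are illustrative placeholder hooks, not elements of the disclosure.

```python
def handle_frame(positioning_on, object_above, sensing_range, object_distance,
                 compute_mapping_point, execute_command):
    """One pass through the flow of FIG. 4. Returning None corresponds to
    'the terminal does not respond and no function instruction is generated'."""
    if not positioning_on:          # step 401: positioning function disabled
        return None
    if not object_above:            # step 402: nothing above the screen
        return None
    if sensing_range is not None:   # step 403: a sensing space range is set
        lo, hi = sensing_range
        if not (lo <= object_distance <= hi):  # step 404: object outside range
            return None
    point = compute_mapping_point()            # mapping point on the screen
    return execute_command(point)              # step 405: click/long-press/swipe
```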
In order to better embody the purpose of the present disclosure, further examples are given below on the basis of the above embodiments of the present disclosure.
In an embodiment of the present disclosure, two cameras are arranged on the face of the terminal on which the display screen is located. The two cameras are labeled camera A3 and camera B3, and their spatial positions can be represented by three-dimensional coordinates: the three-dimensional coordinates of camera A3 are (xa3, ya3, za3), and those of camera B3 are (xb3, yb3, zb3). In an actual implementation, the coordinates of a point on the terminal may be preset, and the three-dimensional coordinates of each camera may then be determined from the positional relationship between that point and each camera.
Here, a reference object may be provided; the reference object may be one eye of a person. FIG. 5 is a schematic diagram of the fovea and the pupil of an eye in an embodiment of the present disclosure. As shown in FIG. 5, the fovea of the eye is labeled E1 and the center point of the pupil is labeled E2; the fovea is the point at which the human eye's imaging is sharpest. The straight line passing through the fovea E1 and the pupil center point E2 may be taken as a principal line of sight of the human eye, denoted line of sight L.
Here, the spatial positions of the fovea and the pupil center point can be acquired; the three-dimensional coordinates of the acquired spatial position of the fovea E1 are (x1, y1, z1), and those of the pupil center point E2 are (x2, y2, z2).
In an actual implementation, the two cameras described above may each capture an image of the fovea, and may also each capture an image of the pupil center point. Here, an image of the eye may be captured, an image of the pupil may then be obtained using image recognition or image matching technology, and finally an image of the pupil center point may be determined from the image of the pupil.
After image capture is completed, the spatial position of the fovea is determined based on the images of the fovea captured by the two cameras and the spatial positions of the two cameras; likewise, the spatial position of the pupil center point can be determined based on the images of the pupil center point captured by the two cameras and the spatial positions of the two cameras.
Here, binocular stereo vision technology may be used to determine the spatial position of the fovea or the pupil center point; this is described below with reference to FIG. 6.
FIG. 6 is a schematic diagram of the principle of determining the position of a point in space using two cameras according to an embodiment of the present disclosure. As shown in FIG. 6, the two cameras are denoted O_l and O_r; both cameras have focal length f, and the distance between the two cameras is T. Here, an XYZ three-dimensional rectangular coordinate system is established with the position of one camera as the origin, in which the X-axis direction is the direction of the line connecting the two cameras, the Y axis is perpendicular to the X axis, and the Z-axis direction is parallel to the principal optical axis (principal ray) of each camera. A left imaging plane perpendicular to the principal optical axis of camera O_l is drawn, with the perpendicular distance from the camera's optical center to this plane equal to the focal length f; a left imaging-plane coordinate system is established on this plane, whose two axes are the x_l axis and the y_l axis, the x_l axis being parallel to the X axis and the y_l axis parallel to the Y axis. Similarly, a right imaging plane perpendicular to the principal optical axis of camera O_r is drawn, with the perpendicular distance from the camera's optical center to this plane equal to the focal length f; a right imaging-plane coordinate system is established on this plane, whose two axes are the x_r axis and the y_r axis, the x_r axis being parallel to the X axis and the y_r axis parallel to the Y axis.
With reference to FIG. 6, the intersection of the principal optical axis of camera O_l with the left imaging plane is expressed in the left imaging-plane coordinate system as (c_x1, c_y1), and the intersection of the principal optical axis of camera O_r with the right imaging plane is expressed in the right imaging-plane coordinate system as (c_x2, c_y2). The intersection of the line connecting a point P in space with the optical center of camera O_l and the left imaging plane is denoted P_l, and the intersection of the line connecting point P with the optical center of camera O_r and the right imaging plane is denoted P_r. In an actual implementation, the coordinates of point P in the above XYZ three-dimensional rectangular coordinate system can be derived, based on the principle of binocular stereo vision, from the focal length f of the two cameras, the distance T between the two cameras, the coordinates of P_l in the left imaging-plane coordinate system, and the coordinates of P_r in the right imaging-plane coordinate system.
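For the rectified geometry of FIG. 6 (both optical axes parallel to the Z axis, baseline T along the X axis), the standard binocular triangulation relations recover P from the disparity between P_l and P_r. The sketch below assumes that idealized geometry and a shared principal-point offset for both cameras, which is a simplification of the general case.

```python
def triangulate(xl, yl, xr, f, T, cx=0.0, cy=0.0):
    """Recover the XYZ coordinates of point P from its projections
    P_l = (xl, yl) and P_r = (xr, yl) on rectified image planes.
    f is the focal length, T the baseline between the optical centers,
    and (cx, cy) the principal-point offset. Assumes y_l == y_r,
    as in an ideal rectified stereo pair."""
    disparity = xl - xr            # horizontal disparity x_l - x_r
    if disparity == 0:
        raise ValueError("point at infinity: zero disparity")
    Z = f * T / disparity          # depth along the optical axis
    X = (xl - cx) * Z / f          # back-project into the XYZ system
    Y = (yl - cy) * Z / f
    return X, Y, Z
```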
After the spatial positions of the fovea and the pupil center point are determined, the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen can be determined based on those spatial positions; an illustrative description is given below with reference to FIG. 7.
FIG. 7 is a schematic diagram of the positional relationship between the human line of sight and the terminal display screen according to an embodiment of the present disclosure. Referring to FIG. 5 and FIG. 7, the line of sight L passing through the fovea E1 and the pupil center point E2 is determined based on the positions of E1 and E2, and the position of the intersection O of the line of sight L with the terminal display screen is then determined from the known position of the terminal display screen.
Here, the position of the terminal display screen can be determined from the spatial position of each camera on the terminal and the relative positional relationship between each camera and the display screen.
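If, purely for illustration, the screen is taken as the plane z = 0 of the coordinate system (the disclosure only requires that the screen's position be known), the intersection O of the line of sight L through E1 = (x1, y1, z1) and E2 = (x2, y2, z2) with the screen can be computed parametrically:

```python
def gaze_screen_intersection(e1, e2):
    """Intersect the line through fovea E1 and pupil center E2 with the
    screen plane, assumed here to be z = 0. Returns the (x, y)
    coordinates of the intersection point O on the screen."""
    (x1, y1, z1), (x2, y2, z2) = e1, e2
    dz = z2 - z1
    if dz == 0:
        raise ValueError("line of sight is parallel to the screen plane")
    t = -z1 / dz                 # parameter where z(t) = z1 + t*dz = 0
    x = x1 + t * (x2 - x1)
    y = y1 + t * (y2 - y1)
    return x, y
```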
Here, the intersection O of the line of sight L with the terminal display screen is the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object. Optionally, in the present disclosure, after the position of the intersection O is determined, an indicator point may also be displayed at that position on the terminal display screen; this makes it easy to visually present the mapping point that forms the preset mapping relationship with the reference object.
After the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object is determined, the function instruction at the mapping point is generated based on the mapping point. The generated function instruction at the mapping point may be used to indicate a click at the mapping point, a long press at the mapping point, or a swipe operation; the direction of the swipe operation may be up, down, left, or right.
In the embodiments of the present disclosure, the implementations of generating the function instruction at the mapping point and of performing the operation on the terminal display screen based on the function instruction have already been described in the above embodiments of the present disclosure and are not repeated here.
In order to better embody the purpose of the present disclosure, further examples are given below on the basis of the above embodiments of the present disclosure.
In an embodiment of the present disclosure, the reference object may be one eye of a person. A camera (front camera) is mounted on the face of the terminal on which the display screen is located, and the front camera is used to capture the image in the person's eye, that is, to capture the imaging of objects in the person's eye. Optionally, in the present disclosure, the sharpest point of the captured image in the person's eye may also be determined; here, the center point of the image in the person's eye may be taken as the sharpest point.
The region of the content currently displayed on the terminal display screen that matches the captured image in the person's eye, or that matches a reference image, is denoted the screen matching region. The reference image is a region of the captured image in the person's eye that includes the sharpest point; for example, the reference image may be the image of a region centered on the sharpest point with a radius of 1 cm.
A point within the screen matching region is determined as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object. For example, a point within a circular region centered on the center point of the screen matching region with a radius of 0.5 cm is determined as the mapping point.
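A minimal sketch of locating the screen matching region by exhaustive template search over the displayed content; the sum-of-absolute-differences score and the choice of the region's center as the mapping point are illustrative assumptions, not details from the disclosure.

```python
def best_match_center(screen, patch):
    """Find the region of `screen` (a 2-D list of grayscale values) that
    best matches `patch`, and return the center of that region as the
    mapping point inside the screen matching region. Naive SAD search;
    a real terminal would use an optimized matcher."""
    H, W = len(screen), len(screen[0])
    h, w = len(patch), len(patch[0])
    best, best_pos = None, (0, 0)
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            # Sum of absolute differences over the candidate window.
            sad = sum(abs(screen[top + i][left + j] - patch[i][j])
                      for i in range(h) for j in range(w))
            if best is None or sad < best:
                best, best_pos = sad, (top, left)
    top, left = best_pos
    return (top + h // 2, left + w // 2)  # center of the matching region
```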
Optionally, in the present disclosure, the coordinates of the above mapping point in the screen coordinate system may also be determined.
In the embodiments of the present disclosure, the implementations of generating the function instruction at the mapping point and of performing the operation on the terminal display screen based on the function instruction have already been described in the above embodiments of the present disclosure and are not repeated here.
In order to better embody the purpose of the present disclosure, further examples are given below on the basis of the above embodiments of the present disclosure.
In an embodiment of the present disclosure, the reference object may be the two eyes of a person. Two cameras are mounted on the face of the terminal on which the display screen is located; they are denoted camera A5 and camera B5, the coordinates of camera A5 in the three-dimensional coordinate system being (xa5, ya5, za5) and those of camera B5 being (xb5, yb5, zb5). Optionally, in the present disclosure, camera A5 and camera B5 may be arranged in the same plane, with the plane formed by the two cameras parallel to or coincident with the terminal screen.
Each camera may capture images of the two eyes; images of the pupil of each eye are then obtained using image recognition or image matching technology, and finally an image of the pupil center point of each eye is determined from the image of that eye's pupil. Here, the pupil center point of each eye is a reference point.
After camera A5 and camera B5 have both captured images of the pupil center points of the two eyes, the spatial positions of the two pupil center points can be determined based on the images of the pupil center points captured by each camera and the spatial positions of camera A5 and camera B5. Here, the pupil center points of the two eyes are denoted C5 and D5, respectively. Clearly, once the spatial positions of the two pupil center points are determined, the distance from each camera to point C5 and the distance from each camera to point D5 can be derived.
Here, the principle of determining the spatial positions of the pupil center points of the two eyes has already been explained in the above embodiments of the present disclosure and is not repeated here.
After the spatial positions of the pupil center points of the two eyes are determined, the processor of the terminal can determine the projection points of the two pupil center points on the terminal display screen according to the spatial positions of the pupil center points and the position of the terminal display screen. The projection point of C5 on the terminal display screen is denoted E5, and the projection point of D5 is denoted F5; the coordinates of E5 in the screen coordinate system are (XA5, YA5), and those of F5 are (XB5, YB5).
Here, the position of the terminal display screen can be determined from the spatial positions of camera A5 and camera B5 on the terminal and the relative positional relationship between camera A5, camera B5, and the display screen.
After the projection points of the two pupil center points on the terminal display screen are determined, the midpoint of the line segment connecting the two projection points can be determined, and the determined midpoint is taken as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
In an actual implementation, the midpoint O5 of the line segment connecting E5 and F5 can be determined from the coordinates of E5 and F5 in the screen coordinate system. Here, O5 is the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object, and the coordinates of O5 in the screen coordinate system are denoted (XO5, YO5).
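The midpoint computation for O5 is straightforward; the sketch below assumes E5 and F5 are given as (x, y) pairs in the screen coordinate system.

```python
def mapping_point(e5, f5):
    """Midpoint O5 = (XO5, YO5) of the projections E5 and F5 of the two
    pupil centers on the screen, in screen coordinates."""
    (xa, ya), (xb, yb) = e5, f5
    return ((xa + xb) / 2, (ya + yb) / 2)
```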
In the embodiments of the present disclosure, the implementations of generating the function instruction at the mapping point and of performing the operation on the terminal display screen based on the function instruction have already been described in the above embodiments of the present disclosure and are not repeated here.
In order to better embody the purpose of the present disclosure, further examples are given below on the basis of the above embodiments of the present disclosure.
In an embodiment of the present disclosure, a front camera and a rear camera are mounted on the terminal; the front camera is used to capture images of reference object A in real time, and the rear camera is used to capture images of reference object B in real time.
Based on the above embodiments of the present disclosure, before the image of reference object A is analyzed, the distance between the front camera and reference object A needs to be acquired in real time; before the image of reference object B is analyzed, the distance between the rear camera and reference object B needs to be acquired in real time.
After a reference point is selected from reference object A, the spatial position of the reference point can be determined based on the image of the reference point, the spatial position of the front camera, and the distance between the front camera and reference object A. Similarly, after a reference point is selected from reference object B, its spatial position can be determined based on the image of the reference point, the spatial position of the rear camera, and the distance between the rear camera and reference object B.
For example, the reference point selected from reference object A is denoted point A6, and the reference point selected from reference object B is denoted point B6; the coordinates of the spatial position of point A6 may be written (XA6, YA6, ZA6), and those of point B6 may be written (XB6, YB6, ZB6).
After the reference points are determined, the implementations of determining the projection point of each reference point on the terminal display screen and of determining, based on the projection points, the mapping point that forms the preset mapping relationship with the reference object have already been explained in the above embodiments of the present disclosure and are not repeated here.
It should be noted that the terminal may be provided with a switch for selecting between the front camera and the rear camera, used to choose whether to start the front camera or the rear camera. In this way, the user can decide whether to start the front camera or the rear camera according to the relative positional relationship between the chosen reference object and the terminal.
In the embodiments of the present disclosure, the implementations of generating the function instruction at the mapping point and of performing the operation on the terminal display screen based on the function instruction have already been described in the above embodiments of the present disclosure and are not repeated here.
On the basis of the above embodiments of the present disclosure, an embodiment of the present disclosure further provides a terminal.
FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure. As shown in FIG. 8, the terminal 800 includes an image capture device 801 and a processor 802, where:
the image capture device 801 is configured to capture an image of a reference object in real time, the distance between the reference object and the terminal display screen exceeding a set value; and
the processor 802 is configured to analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object, to generate a function instruction at the mapping point, and to perform an operation on the terminal display screen based on the function instruction.
The reference object includes a pupil of a person.
The processor 802 is further configured to acquire the spatial position of the fovea of an eye in real time, the pupil and the fovea being located in the same eye.
Correspondingly, the processor 802 is configured to derive the spatial position of the pupil center point based on the image of the pupil, and, based on the spatial positions of the fovea and the pupil center point, to determine the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
The image capture device 801 includes at least one camera.
The processor 802 is configured to select at least one point in the image of the reference object as a reference point; to determine the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; to determine the projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and to determine, based on the projection points, the mapping point that forms the preset mapping relationship with the reference object.
The reference object includes one eye of a person.
The image capture device 801 is configured to capture the image in the person's eye in real time.
The processor 802 is configured to determine the region of the content currently displayed on the terminal display screen that matches the captured image in the person's eye as the screen matching region, and to select a point within the screen matching region as the mapping point that forms the preset mapping relationship with the reference object.
Optionally, in the present disclosure, the processor 802 is further configured to determine the distance between the terminal display screen and the reference object before analyzing the image of the reference object.
The processor 802 is configured to analyze the image of the reference object when the determined distance falls within a set interval, thereby obtaining the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object; when the determined distance does not fall within the set interval, the flow ends directly.
The processor 802 is configured to determine the length of time for which the mapping point stays within a set region, and to generate the function instruction at the mapping point based on the determined length of time.
Optionally, in the present disclosure, the image capture device 801 is further configured to continuously capture images of the actions of the user of the terminal before the function instruction at the mapping point is generated, thereby obtaining action images of the user.
The processor 802 is further configured to perform image recognition on the user's action images to obtain a recognition result, and to generate the function instruction at the mapping point based on the recognition result.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example a memory including instructions, where the instructions are executable by a processor of the terminal to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will appreciate that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description covers only preferred embodiments of the present disclosure and is not intended to limit the scope of protection of the present disclosure.
Industrial applicability
The present disclosure is applicable to the field of human-computer interaction and enables a terminal to be operated solely on the basis of an image of a reference object, without a finger touching the screen. Addressing the existing technical problem that operating a terminal with a finger has become inconvenient as terminal display screens grow larger, the present disclosure effectively improves the efficiency of human-computer interaction, improves the operability of the terminal, and enhances the user experience.

Claims (30)

  1. A method of operating a terminal, comprising:
    capturing an image of a reference object in real time, wherein a distance between the reference object and a display screen of the terminal exceeds a set value;
    analyzing the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; and
    generating a function instruction at the mapping point, and performing an operation on the terminal display screen based on the function instruction.
  2. The method according to claim 1, wherein the reference object comprises a pupil of a person;
    correspondingly, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object comprises:
    acquiring, in real time, a spatial position of a fovea of an eye, wherein the pupil and the fovea are located in the same eye; deriving a spatial position of a pupil center point from the image of the pupil; and
    based on the spatial positions of the fovea and the pupil center point, determining the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  3. The method according to claim 2, wherein capturing the image of the reference object in real time comprises: capturing images of the pupil with two cameras respectively; and
    deriving the spatial position of the pupil center point from the image of the pupil comprises: deriving the spatial position of the pupil center point based on the spatial positions of the two cameras and the images captured by the two cameras.
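A sketch of the two-camera case under the simplifying assumption of rectified cameras with a known baseline B and focal length f (in pixels), where depth follows from the disparity between the pupil-center x coordinates in the left and right images; the formula and names are illustrative, not taken from the disclosure:

```python
def triangulate(xl, xr, baseline, focal):
    """Rectified stereo: depth z = f * B / (x_left - x_right),
    then back-project the left-image coordinate to recover x.

    xl, xr: pupil-center x coordinates (pixels) in left/right images.
    baseline: distance between the two cameras (same unit as z).
    focal: focal length in pixels.
    """
    disparity = xl - xr
    z = focal * baseline / disparity
    x = xl * z / focal  # lateral offset in the left camera's frame
    return x, z

print(triangulate(10.0, 6.0, 6.0, 500.0))  # (15.0, 750.0)
```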
  4. The method according to claim 1, wherein capturing the image of the reference object in real time comprises: capturing the image of the reference object in real time with at least one camera;
    correspondingly, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object comprises:
    selecting at least one point in the image of the reference object as a reference point;
    determining a spatial position of each reference point based on the image of the reference point and the spatial position of each camera;
    determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and
    determining, based on the projection point(s), the mapping point that forms the preset mapping relationship with the reference object.
  5. The method according to claim 4, wherein determining, based on the projection point, the mapping point that forms the preset mapping relationship with the reference object comprises: taking the projection point as the mapping point that forms the preset mapping relationship with the reference object.
  6. The method according to claim 4, wherein the number of reference points is two; and
    determining, based on the projection points, the mapping point that forms the preset mapping relationship with the reference object comprises: determining the projection points of the two reference points on the terminal display screen, and taking the midpoint of the line connecting the two determined projection points as the mapping point that forms the preset mapping relationship with the reference object.
  7. The method according to claim 6, wherein the reference object comprises two eyes of a person; and
    selecting at least one point in the image of the reference object as a reference point comprises: taking the pupil center points of the person's two eyes as reference points respectively.
  8. The method according to claim 4, wherein selecting at least one point in the image of the reference object as a reference point comprises:
    determining a spatial position of the reference object based on the image of the reference object and the spatial position of each camera; and
    based on the spatial position of the reference object, taking the point of the reference object with the smallest vertical distance to the terminal display screen as the reference point.
  9. The method according to claim 4, wherein capturing the image of the reference object in real time comprises: capturing the image of the reference object in real time with one camera;
    before analyzing the image of the reference object, the method further comprises: acquiring, in real time, a distance between the camera and the reference object; and
    correspondingly, determining the spatial position of the reference point based on the image of the reference point and the spatial position of each camera comprises:
    determining the spatial position of the reference point based on the image of the reference point, the spatial position of the camera, and the distance between the camera and the reference object.
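A sketch of claim 9's single-camera case under a pinhole-camera assumption: the image point fixes only a viewing ray, and the separately measured camera-to-object distance (used here as the depth along the optical axis) selects the point on that ray. Names and units are illustrative:

```python
def back_project(u, v, depth, focal):
    """Pinhole back-projection: image point (u, v) in pixels plus a
    known depth along the optical axis yields the 3-D point
    (X, Y, Z) = (u*z/f, v*z/f, z) in the camera frame."""
    return (u * depth / focal, v * depth / focal, depth)

print(back_project(50.0, -25.0, 400.0, 500.0))  # (40.0, -20.0, 400.0)
```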
  10. The method according to claim 4, wherein the reference object is the object with the smallest vertical distance to the terminal display screen, or the reference object is located on a human body.
  11. The method according to claim 1, wherein the reference object comprises one eye of a person; capturing the image of the reference object in real time comprises: capturing, in real time, images in the person's eye; and
    analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object comprises: determining the area of the content currently displayed on the terminal display screen that matches the captured image in the person's eye as a screen matching area, and selecting a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  12. The method according to claim 1, wherein, before analyzing the image of the reference object, the method further comprises: determining a distance between the terminal display screen and the reference object; and
    analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object comprises: when the determined distance falls within a set interval, analyzing the image of the reference object to obtain the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  13. The method according to claim 1, wherein generating the function instruction at the mapping point comprises: determining a length of time for which the mapping point stays within a mapping-point region on the terminal display screen, and generating the function instruction at the mapping point based on the determined length of time; wherein the mapping-point region includes an initial position of the mapping point on the terminal display screen.
  14. The method according to claim 13, wherein generating the function instruction at the mapping point based on the determined length of time comprises:
    when the determined time falls within a first set range, generating an instruction indicating a click at the current mapping point; when the determined time falls within a second set range, generating an instruction indicating a long press at the current mapping point; and when the determined time falls within a third set range, generating an instruction indicating a screen-sliding operation;
    wherein the first set range, the second set range, and the third set range do not overlap with one another.
  15. The method according to claim 14, wherein the first set range is the range from a first time threshold to a second time threshold; the second set range is greater than the second time threshold; the third set range is less than the first time threshold; and the first time threshold is less than the second time threshold.
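Claims 13 to 15 can be sketched as follows; the thresholds T1 and T2 are illustrative values chosen for the sketch, not taken from the claims:

```python
T1, T2 = 0.8, 2.0   # illustrative thresholds in seconds (assumptions)

def dwell_instruction(dwell_time):
    """The time the mapping point stays inside the mapping-point
    region selects the instruction; the three ranges [T1, T2],
    (T2, inf) and [0, T1) are disjoint, as claim 14 requires."""
    if T1 <= dwell_time <= T2:
        return "click"
    if dwell_time > T2:
        return "long_press"
    return "slide"          # dwell_time < T1: the gaze kept moving

print(dwell_instruction(1.0))   # click
print(dwell_instruction(3.0))   # long_press
print(dwell_instruction(0.2))   # slide
```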
  16. The method according to claim 14, wherein generating the instruction indicating the screen-sliding operation comprises: acquiring a movement direction and a movement rate of the mapping point, and generating the instruction indicating the screen-sliding operation based on the movement direction and the movement rate of the mapping point.
  17. The method according to claim 16, wherein generating the instruction indicating the screen-sliding operation based on the movement direction and the movement rate of the mapping point comprises:
    taking the movement rate of the mapping point in the lateral direction of the mobile terminal display screen as the lateral movement rate of the mapping point, and taking the movement rate of the mapping point in the longitudinal direction of the mobile terminal display screen as the longitudinal movement rate of the mapping point;
    when the lateral movement rate of the mapping point is greater than the longitudinal movement rate of the mapping point, generating an instruction indicating a lateral screen-sliding operation; or, when the lateral movement rate of the mapping point is greater than the longitudinal movement rate of the mapping point and the lateral movement rate satisfies a first set condition, generating an instruction indicating a lateral screen-sliding operation; and
    when the longitudinal movement rate of the mapping point is greater than the lateral movement rate of the mapping point, generating an instruction indicating a longitudinal screen-sliding operation; or, when the longitudinal movement rate of the mapping point is greater than the lateral movement rate of the mapping point and the longitudinal movement rate satisfies a second set condition, generating an instruction indicating a longitudinal screen-sliding operation.
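A sketch of the rate comparison in claim 17, with an optional minimum-rate gate standing in for the set conditions of claim 18; the function name and the sign convention (positive vy meaning downward in screen coordinates) are assumptions:

```python
def slide_instruction(vx, vy, v_min=0.0):
    """Compare the lateral (vx) and longitudinal (vy) movement rates
    of the mapping point; the faster axis, optionally gated by a
    minimum rate v_min, selects the slide direction."""
    ax, ay = abs(vx), abs(vy)
    if ax > ay and ax >= v_min:
        return "slide_right" if vx > 0 else "slide_left"
    if ay > ax and ay >= v_min:
        return "slide_down" if vy > 0 else "slide_up"
    return None             # no clear dominant direction

print(slide_instruction(120.0, 30.0))   # slide_right
print(slide_instruction(-10.0, -90.0))  # slide_up
```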
  18. The method according to claim 17, wherein the first set condition is that the lateral movement rate of the mapping point falls within a fourth set range, and the second set condition is that the longitudinal movement rate of the mapping point falls within a fifth set range.
  19. The method according to claim 13, wherein the mapping-point region includes the initial position of the mapping point on the terminal display screen, and an area of the mapping-point region is less than or equal to a set threshold.
  20. The method according to claim 19, wherein the mapping-point region is a circular region centered on the initial position of the mapping point.
  21. The method according to claim 1, wherein, before generating the function instruction at the mapping point, the method further comprises: continuously capturing images of the actions of the user of the terminal to obtain action images of the user, and performing image recognition on the user's action images to obtain a recognition result; and
    correspondingly, generating the function instruction at the mapping point comprises: generating the function instruction at the mapping point based on the recognition result.
  22. The method according to claim 21, wherein generating the function instruction at the mapping point based on the recognition result comprises: when the recognition result is a blink action, a mouth-opening action, or a mouth-closing action, generating an instruction indicating a click at the mapping point; when the recognition result is a nodding action, generating an instruction indicating a downward screen-sliding operation; when the recognition result is a head-raising action, generating an instruction indicating an upward screen-sliding operation; and when the recognition result is a left-right head-shaking action, generating an instruction indicating a lateral screen-sliding operation.
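Claim 22 amounts to a lookup from the recognition result to an instruction, which can be sketched as follows; the string labels are illustrative names, not part of the claim:

```python
# Claim 22 as a lookup table: recognized user action -> instruction.
ACTION_TO_INSTRUCTION = {
    "blink":       "click",
    "mouth_open":  "click",
    "mouth_close": "click",
    "nod":         "slide_down",
    "head_raise":  "slide_up",
    "head_shake":  "slide_lateral",
}

def instruction_for(recognition_result):
    """Return the instruction for a recognized action, or None for
    actions the claim does not cover."""
    return ACTION_TO_INSTRUCTION.get(recognition_result)

print(instruction_for("nod"))    # slide_down
print(instruction_for("blink"))  # click
```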
  23. The method according to claim 1, wherein the function instruction at the mapping point is: a function instruction indicating a click at the current mapping point, a function instruction indicating a long press at the current mapping point, or an instruction indicating a screen-sliding operation.
  24. A terminal, comprising:
    an image capture device, configured to capture an image of a reference object in real time, wherein a distance between the reference object and a display screen of the terminal exceeds a set value; and
    a processor, configured to analyze the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; generate a function instruction at the mapping point; and perform an operation on the terminal display screen based on the function instruction.
  25. The terminal according to claim 24, wherein the reference object comprises a pupil of a person;
    the processor is further configured to acquire, in real time, a spatial position of a fovea of an eye, wherein the pupil and the fovea are located in the same eye; and
    correspondingly, the processor is configured to derive a spatial position of a pupil center point from the image of the pupil and, based on the spatial positions of the fovea and the pupil center point, determine the intersection of the straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  26. The terminal according to claim 24, wherein the image capture device comprises at least one camera; and
    the processor is configured to select at least one point in the image of the reference object as a reference point; determine a spatial position of each reference point based on the image of the reference point and the spatial position of each camera; determine a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determine, based on the projection point(s), the mapping point that forms the preset mapping relationship with the reference object.
  27. The terminal according to claim 24, wherein the reference object comprises one eye of a person;
    the image capture device is configured to capture, in real time, images in the person's eye; and
    the processor is configured to determine the area of the content currently displayed on the terminal display screen that matches the captured image in the person's eye as a screen matching area, and to select a point within the screen matching area as the mapping point that forms the preset mapping relationship with the reference object.
  28. The terminal according to claim 24, wherein the processor is further configured to determine a distance between the terminal display screen and the reference object before analyzing the image of the reference object; and
    the processor is configured to analyze the image of the reference object when the determined distance falls within a set interval, thereby obtaining the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
  29. The terminal according to claim 24, wherein the processor is configured to determine a length of time for which the mapping point stays within a set region, and to generate the function instruction at the mapping point based on the determined length of time.
  30. The terminal according to claim 24, wherein the image capture device is further configured to continuously capture images of the actions of the user of the terminal before the function instruction at the mapping point is generated, thereby obtaining action images of the user; and
    the processor is further configured to perform image recognition on the user's action images to obtain a recognition result, and to generate the function instruction at the mapping point based on the recognition result.
PCT/CN2017/078581 2016-10-27 2017-03-29 Terminal and method for operating terminal WO2018076609A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610959809.6 2016-10-27
CN201610959809.6A 2016-10-27 2016-10-27 Method and terminal for operating a terminal in a non-touch-screen manner

Publications (1)

Publication Number Publication Date
WO2018076609A1 true WO2018076609A1 (en) 2018-05-03

Family

ID=62024412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/078581 WO2018076609A1 (en) 2016-10-27 2017-03-29 Terminal and method for operating terminal

Country Status (2)

Country Link
CN (1) CN108008811A (en)
WO (1) WO2018076609A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625210A (en) * 2019-02-27 2020-09-04 杭州海康威视系统技术有限公司 Large screen control method, device and equipment

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN111752381A (en) * 2019-05-23 2020-10-09 北京京东尚科信息技术有限公司 Man-machine interaction method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102662473A (en) * 2012-04-16 2012-09-12 广东步步高电子工业有限公司 Device and method for implementation of man-machine information interaction based on eye motion recognition
CN104471511A (en) * 2012-03-13 2015-03-25 视力移动技术有限公司 Touch free user interface

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101441513B (en) * 2008-11-26 2010-08-11 北京科技大学 System for performing non-contact type human-machine interaction by vision
CN101901485B (en) * 2010-08-11 2014-12-03 华中科技大学 3D free head moving type gaze tracking system


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111625210A (en) * 2019-02-27 2020-09-04 杭州海康威视系统技术有限公司 Large screen control method, device and equipment
CN111625210B (en) * 2019-02-27 2023-08-04 杭州海康威视系统技术有限公司 Large screen control method, device and equipment

Also Published As

Publication number Publication date
CN108008811A (en) 2018-05-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864731

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864731

Country of ref document: EP

Kind code of ref document: A1