WO2013149475A1 - Method and device for controlling a user interface - Google Patents

Method and device for controlling a user interface

Info

Publication number
WO2013149475A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
user interface
dimensional
hand
palm surface
Prior art date
Application number
PCT/CN2012/086000
Other languages
English (en)
French (fr)
Inventor
王晓晖
赵健章
于洋
Original Assignee
深圳创维数字技术股份有限公司
深圳市创维软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维数字技术股份有限公司 and 深圳市创维软件有限公司
Publication of WO2013149475A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • The present invention relates to the field of communications, and in particular, to a method and apparatus for controlling a user interface.
  • Background: existing gesture control devices fall roughly into two categories. One uses a camera to recognize gestures and must determine whether the projection of the hand matches a specific shape (each specific shape corresponds to one command); the other recognizes gestures through a sensor worn by the user.
  • What the two categories have in common is that the gestures representing control commands are fixed and limited, and the user must memorize the control command corresponding to each gesture; when there are too many commands, they are easily confused and the user experience suffers.
  • The technical problem to be solved by embodiments of the present invention is to provide a method and apparatus for controlling a user interface that lets the user control the user interface flexibly and conveniently, improving the user experience.
  • An embodiment of the present invention provides a method for controlling a user interface, including: collecting three-dimensional spatial information of the user's hand; constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected information; and detecting operations of the three-dimensional virtual hand on the user interface and performing corresponding functions according to the detected operations.
  • Collecting the three-dimensional spatial information of the user's hand motion includes: periodically acquiring the spatial coordinates of points on the user's palm surface in a reference coordinate system.
  • Constructing the three-dimensional virtual hand from the collected three-dimensional spatial information includes: constructing a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional coordinates of points on the palm surface; and constructing, from that virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.
  • The operations of the three-dimensional virtual hand on the user interface include at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.
  • The user interface further includes at least one three-dimensional function option; the operations of the three-dimensional virtual hand on the user interface further include an operation requesting execution of the function corresponding to the function option.
  • An embodiment of the present invention further provides a user interface control device, including: a collection module configured to collect three-dimensional spatial information of the user's hand; a three-dimensional virtual hand construction module configured to construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected information; and a processing module configured to detect operations of the three-dimensional virtual hand on the user interface and perform corresponding functions according to the detected operations.
  • The collection module includes: a coordinate acquisition unit configured to periodically acquire the spatial coordinates of points on the user's palm surface in a reference coordinate system.
  • The three-dimensional virtual hand construction module includes: a palm surface construction unit configured to construct a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional coordinates of points on the palm surface; and a back-of-hand construction unit configured to construct, from that virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.
  • The operations of the three-dimensional virtual hand on the user interface include at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.
  • The user interface further includes at least one three-dimensional function option; the operations of the three-dimensional virtual hand on the user interface further include an operation requesting execution of the function corresponding to the function option.
  • Embodiments of the present invention have the following beneficial effects:
  • By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments let the user control the user interface directly through the virtual hand. The operation method is simple and flexible, and because the user controls the interface through the virtual hand, both tactile and visual realism are improved, greatly improving the user experience.
  • FIG. 1 is a schematic flowchart of a first embodiment of a method for controlling a user interface according to the present invention;
  • FIG. 2 is a schematic flowchart of a second embodiment of a method for controlling a user interface according to the present invention;
  • FIG. 3 is a schematic structural diagram of an embodiment of a device for controlling a user interface according to the present invention;
  • FIG. 4 is a schematic structural diagram of an embodiment of the collection module shown in FIG. 3;
  • FIG. 5 is a schematic structural diagram of an embodiment of the three-dimensional virtual hand construction module shown in FIG. 3;
  • FIG. 6 is a basic schematic diagram of binocular stereo vision according to the present invention.
  • Referring to FIG. 1, which is a schematic flowchart of a first embodiment of a user interface control method according to the present invention, the method includes:
  • Step S11: Collect three-dimensional spatial information of the user's hand.
  • The three-dimensional spatial information of the user's hand may be that of the palm surface or the back of the hand, or of the entire hand. To reduce the amount of data processing, only the three-dimensional spatial information of the palm surface may be acquired.
  • The three-dimensional spatial information can be obtained by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras.
  • The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision. Binocular stereo vision uses two cameras to mimic how human eyes perceive a scene: the same scene is observed from two viewpoints, yielding two images at different viewing angles, and the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.
  • Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.
  • The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points falling on the user's palm surface are the points at which three-dimensional spatial information is collected.
  • The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.
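  • To make the geometry concrete, here is a minimal triangulation sketch in Python. It assumes two identical, rectified cameras with parallel optical axes, a baseline b along the X-axis, and a focal length f in pixels, so that depth follows directly from horizontal disparity; the function name and parameter values are illustrative assumptions, not taken from the patent.

```python
# Minimal stereo triangulation sketch: rectified cameras, baseline b along the
# X-axis, focal length f in pixels. Conjugate points al=(ul, vl), ar=(ur, vr).

def triangulate(al, ar, f=800.0, b=0.06):
    """Recover A = (X, Y, Z) from the conjugate image points al and ar.

    Assumes rectified, parallel cameras, so vl == vr and disparity = ul - ur.
    """
    ul, vl = al
    ur, _vr = ar
    disparity = ul - ur
    if disparity <= 0:
        raise ValueError("invalid conjugate pair: non-positive disparity")
    Z = f * b / disparity   # depth from disparity
    X = ul * Z / f          # back-project through the left camera
    Y = vl * Z / f
    return X, Y, Z

# Example: a palm-surface point imaged at (120, 40) in the left image and
# (100, 40) in the right image gives approximately (0.36, 0.12, 2.4).
print(triangulate((120.0, 40.0), (100.0, 40.0)))
```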
  • Step S12: Construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information.
  • To better improve the user's visual experience, the user interface can be designed as a virtual three-dimensional interface. Note that the motion amplitude of the virtual hand in this interface matches the user's hand motion either exactly or at a fixed scale.
  • The three-dimensional spatial information of the user's hand may be the three-dimensional coordinates, in a preset reference coordinate system, of points on the user's palm surface and/or the back of the hand.
  • When constructing the three-dimensional virtual hand, a virtual hand whose position coordinates are identical to, or a uniform scaling of, those of the user's hand can be built in the three-dimensional user interface from the acquired coordinates. Notably, the constructed virtual hand has the same shape and size as the user's hand, or is a uniform scaling of it.
  • To capture the motion of the user's hand, the collection of three-dimensional spatial information in step S11 is periodic, with a very short period; for example, the coordinates of points on the user's palm surface and/or the back of the hand may be sampled every 0.01 seconds. From the repeatedly sampled coordinates, the virtual hand is rebuilt in the user interface each period. Because the construction period is very short, the virtual hand the user sees appears continuous, and its motion matches the user's hand motion exactly.
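  • The per-period rebuild described above amounts to mapping each sampled real-world point into user-interface coordinates at a fixed scale. The following is a minimal sketch: the 0.01-second period comes from the text, while the scale factor, coordinate convention, and the sample_palm_points/render_virtual_hand callbacks are hypothetical stand-ins for device- and renderer-specific code.

```python
import time

SCALE = 0.5      # assumed uniform scaling between the real and virtual hand
PERIOD_S = 0.01  # sampling period from the description

def to_ui_coords(points, origin=(0.0, 0.0, 0.0)):
    """Map sampled palm-surface points (reference frame) into UI coordinates."""
    ox, oy, oz = origin
    return [((x - ox) * SCALE, (y - oy) * SCALE, (z - oz) * SCALE)
            for (x, y, z) in points]

def tracking_loop(sample_palm_points, render_virtual_hand):
    """Rebuild the virtual hand every PERIOD_S seconds from fresh samples."""
    while True:
        points = sample_palm_points()               # device-specific acquisition
        render_virtual_hand(to_ui_coords(points))   # renderer-specific drawing
        time.sleep(PERIOD_S)
```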
  • Step S13: Detect operations of the three-dimensional virtual hand on the user interface, and perform the corresponding function according to the detected operation.
  • The operations of the three-dimensional virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.
  • In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option.
  • A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.
  • In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.
  • When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.
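  • A sketch of this sampling-and-comparison step might look as follows. The 0.05-second comparison interval is from the text; the feature names and the command table are hypothetical placeholders for the pre-stored database of motion features.

```python
COMPARE_INTERVAL_S = 0.05  # comparison interval from the description

def motion_trend(prev_points, curr_points):
    """Average displacement of the tracked palm points over one interval."""
    n = min(len(prev_points), len(curr_points))
    dx = sum(c[0] - p[0] for p, c in zip(prev_points, curr_points)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_points, curr_points)) / n
    dz = sum(c[2] - p[2] for p, c in zip(prev_points, curr_points)) / n
    return (dx, dy, dz)

# Hypothetical stand-in for the pre-stored database: each motion feature
# corresponds to one control command.
COMMANDS = {
    "fingertips_rotate_about_palm": "rotate_ui",
    "fingers_pinch_move_linear":    "drag_ui",
    "fingertips_close_together":    "shrink_ui",
    "fingertips_spread_apart":      "enlarge_ui",
}

def command_for(feature):
    """Look up the control command for an extracted motion feature."""
    return COMMANDS.get(feature)  # None if the gesture is not recognized
```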
  • For example, when the user rotates the user interface, the characteristic motion is usually a raised palm with the five fingertips rotating around the palm center. The corresponding motion feature information can be extracted from the periodically sampled coordinates: analyzing and comparing the palm-surface and/or back-of-hand coordinates acquired at different times reveals the motion trend of the hand, and from it the direction and arc through which the user wishes to rotate the interface.
  • When the user drags the user interface, the characteristic motion is usually one or several fingers held in a pressing or pinching posture while moving in a straight line in some direction; the movement direction and distance of the fingers are likewise obtained from the periodically sampled coordinates, determining the intended drag direction and distance.
  • When the user shrinks the user interface, the characteristic motion is usually a raised palm with the five fingertips drawing together; the degree of closure, obtained from the sampled coordinates, determines how much the user wishes to shrink the interface.
  • When the user enlarges the user interface, the characteristic motion is usually a raised palm with the five fingertips spreading outward; the spread amplitude, obtained from the sampled coordinates, determines how much the user wishes to enlarge the interface.
  • When the user needs to operate a three-dimensional function option in the user interface, the user usually moves the virtual hand onto or near the option's location by hand motion. Thus, when the virtual hand is moved onto or near the position of a function option, the user is considered to intend to operate that option.
  • Rotating, dragging, enlarging, and shrinking a three-dimensional function option are roughly the same as the corresponding operations on the whole user interface; the only difference is the motion amplitude, which distinguishes an operation on the whole interface from one on a single option. For example, when rotating the whole interface the fingers usually spread wide, whereas when rotating a single option they spread less. The spread amplitude can be obtained accurately by analyzing the hand's three-dimensional coordinates and compared against a preset threshold: above the threshold, the user is taken to intend rotating the whole interface; below it, rotating the function option (see the sketch below).
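  • The amplitude test described above could be implemented along these lines; the spread metric and the threshold value are assumptions for illustration, not values given in the patent.

```python
import math

SPREAD_THRESHOLD = 0.12  # assumed preset amplitude in meters; device-specific

def finger_spread(fingertips, palm_center):
    """Mean fingertip distance from the palm center, used as spread amplitude."""
    return sum(math.dist(tip, palm_center) for tip in fingertips) / len(fingertips)

def rotation_target(fingertips, palm_center):
    """Route a rotation gesture to the whole UI or to a single function option."""
    if finger_spread(fingertips, palm_center) > SPREAD_THRESHOLD:
        return "user_interface"   # wide spread: rotate the whole interface
    return "function_option"      # narrow spread: rotate the selected option
```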
  • Operations requesting execution of the function corresponding to a three-dimensional function option usually include clicking, pressing, and the like. When the user wishes to execute such a function, the virtual hand is usually moved onto or near the option's position, from which it is determined that the user intends to operate that option.
  • When the user makes a click, the characteristic motion is usually the index or middle finger lifting and then lowering while the palm and the other fingers stay still; from this characteristic motion it is determined that the user wishes to execute the function corresponding to the option.
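  • A click detector matching this description might be sketched as follows, assuming height samples from the periodic coordinate acquisition; both thresholds are illustrative assumptions.

```python
LIFT_THRESHOLD = 0.02    # assumed minimum fingertip rise and fall, in meters
STILL_THRESHOLD = 0.005  # assumed maximum palm drift during the click, in meters

def is_click(index_tip_heights, palm_heights):
    """Detect an index fingertip lift-then-lower while the palm stays still.

    Both arguments are height samples, taken from the periodic coordinate
    acquisition, covering one candidate gesture window.
    """
    peak = max(index_tip_heights)
    rise = peak - index_tip_heights[0]
    fall = peak - index_tip_heights[-1]
    drift = max(palm_heights) - min(palm_heights)
    return rise > LIFT_THRESHOLD and fall > LIFT_THRESHOLD and drift < STILL_THRESHOLD
```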
  • By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor. And because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience.
  • Referring to FIG. 2, which is a schematic flowchart of a second embodiment of a user interface control method according to the present invention, the method includes:
  • Step S21: Determine whether the shape of the object whose three-dimensional spatial information is to be collected matches a hand shape. If so, proceed to step S23; otherwise, proceed to step S22.
  • Step S22: Stop collecting the three-dimensional spatial information of the object and display a corresponding prompt on the user interface.
  • Step S23: Periodically acquire the spatial coordinates of points on the user's palm surface in the reference coordinate system. A gating sketch for this check appears below.
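  • The gating flow of steps S21-S23 can be sketched as below; all four callbacks are hypothetical placeholders, since the patent does not specify how the hand-shape test itself is implemented.

```python
def acquire_with_hand_check(get_object_shape, is_hand_shape, collect_coords, prompt):
    """Steps S21-S23: collect coordinates only if the object looks like a hand."""
    shape = get_object_shape()                        # shape of the object in view
    if not is_hand_shape(shape):                      # S21 fails
        prompt("Object is not recognized as a hand")  # S22: stop and prompt
        return None
    return collect_coords()                           # S23: periodic acquisition
```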
  • The spatial coordinates can be obtained by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras.
  • The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision. Binocular stereo vision uses two cameras to mimic how human eyes perceive a scene: the same scene is observed from two viewpoints, yielding two images at different viewing angles, and the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.
  • Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.
  • The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points falling on the user's palm surface are the points at which three-dimensional spatial information is collected.
  • The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Therefore, once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.
  • Step S24: According to changes in the three-dimensional coordinates of points on the user's palm surface, construct a virtual palm surface consistent with the palm's motion; and, according to that virtual palm surface, construct a virtual back-of-hand surface consistent with the motion of the back of the user's hand.
  • Step S25: Detect operations of the three-dimensional virtual hand on the user interface, and perform the corresponding function according to the detected operation.
  • The operations of the three-dimensional virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.
  • In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option. A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.
  • In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.
  • When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.
  • By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor. And because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience. Having described the method embodiments in detail with reference to FIG. 1 and FIG. 2, the device corresponding to the above method flows is described below with reference to the accompanying drawings. Referring to FIG. 3, which is a schematic structural diagram of an embodiment of a device for controlling a user interface according to the present invention:
  • The control device 100 includes:
  • the collection module 110, configured to collect three-dimensional spatial information of the user's hand.
  • The function of the collection module 110 can be implemented by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and cameras.
  • The three-dimensional virtual hand construction module 120 is configured to construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information.
  • To better improve the user's visual experience, the user interface can be designed as a virtual three-dimensional interface. Note that the motion amplitude of the virtual hand in this interface matches the user's hand motion either exactly or at a fixed scale.
  • The three-dimensional spatial information of the user's hand may be the three-dimensional coordinates, in a preset reference coordinate system, of points on the user's palm surface and/or the back of the hand.
  • When constructing the three-dimensional virtual hand, a virtual hand whose position coordinates are identical to, or a uniform scaling of, those of the user's hand can be built in the three-dimensional user interface from the acquired coordinates. Notably, the constructed virtual hand has the same shape and size as the user's hand, or is a uniform scaling of it.
  • To capture the motion of the user's hand, the collection of three-dimensional spatial information is periodic, with a very short period; for example, the coordinates of points on the user's palm surface and/or the back of the hand may be sampled every 0.01 seconds. From the repeatedly sampled coordinates, the virtual hand is rebuilt in the user interface each period. Because the construction period is very short, the virtual hand the user sees appears continuous, and its motion matches the user's hand motion exactly.
  • The processing module 130 is configured to detect operations of the three-dimensional virtual hand on the user interface and perform corresponding functions according to the detected operations.
  • The operations of the three-dimensional virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.
  • In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option. A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.
  • In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.
  • When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.
  • For example, when the user rotates the user interface, the characteristic motion is usually a raised palm with the five fingertips rotating around the palm center. The corresponding motion feature information can be extracted from the periodically sampled coordinates: analyzing and comparing the coordinates acquired at different times reveals the motion trend of the hand, and from it the direction and arc through which the user wishes to rotate the interface.
  • When the user drags the user interface, the characteristic motion is usually one or several fingers held in a pressing or pinching posture while moving in a straight line in some direction; the movement direction and distance of the fingers are likewise obtained from the periodically sampled coordinates, determining the intended drag direction and distance.
  • When the user shrinks the user interface, the characteristic motion is usually a raised palm with the five fingertips drawing together; the degree of closure, obtained from the sampled coordinates, determines how much the user wishes to shrink the interface.
  • When the user enlarges the user interface, the characteristic motion is usually a raised palm with the five fingertips spreading outward; the spread amplitude, obtained from the sampled coordinates, determines how much the user wishes to enlarge the interface.
  • When the user needs to operate a three-dimensional function option in the user interface, the user usually moves the virtual hand onto or near the option's location by hand motion. Thus, when the virtual hand is moved onto or near the position of a function option, the user is considered to intend to operate that option.
  • Rotating, dragging, enlarging, and shrinking a three-dimensional function option are roughly the same as the corresponding operations on the whole user interface; the only difference is the motion amplitude, which distinguishes an operation on the whole interface from one on a single option. For example, when rotating the whole interface the fingers usually spread wide, whereas when rotating a single option they spread less. The spread amplitude can be obtained accurately by analyzing the hand's three-dimensional coordinates and compared against a preset threshold: above the threshold, the user is taken to intend rotating the whole interface; below it, rotating the function option.
  • Operations requesting execution of the function corresponding to a three-dimensional function option usually include clicking, pressing, and the like. When the user wishes to execute such a function, the virtual hand is usually moved onto or near the option's position, from which it is determined that the user intends to operate that option.
  • When the user makes a click, the characteristic motion is usually the index or middle finger lifting and then lowering while the palm and the other fingers stay still; from this characteristic motion it is determined that the user wishes to execute the function corresponding to the option.
  • By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor. And because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience.
  • Referring to FIG. 4, which is a schematic structural diagram of an embodiment of the collection module shown in FIG. 3, the three-dimensional hand acquisition module 110 includes:
  • the judgment unit 111, configured to determine whether the shape of the object whose three-dimensional spatial information is to be collected matches a hand shape; if it does, the three-dimensional spatial information of the user's hand motion is collected; if it does not, a corresponding prompt is displayed on the user interface; and
  • the coordinate acquisition unit 112, configured to periodically acquire the spatial coordinates of points on the user's palm surface in the reference coordinate system.
  • The coordinate acquisition unit 112 can be implemented by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras.
  • The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision. Binocular stereo vision uses two cameras to mimic how human eyes perceive a scene: the same scene is observed from two viewpoints, yielding two images at different viewing angles, and the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.
  • The three-dimensional spatial information of the user's hand may be that of the palm surface or the back of the hand, or of the entire hand. To reduce the amount of data processing, only the three-dimensional spatial information of the palm surface may be acquired.
  • Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.
  • The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points falling on the user's palm surface are the points at which three-dimensional spatial information is collected.
  • The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Therefore, once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.
  • Referring to FIG. 5, which is a schematic structural diagram of an embodiment of the three-dimensional virtual hand construction module shown in FIG. 3, the construction module 120 includes:
  • the palm surface construction unit 121, configured to construct a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional coordinates of points on the palm surface; and
  • the back-of-hand construction unit 122, configured to construct, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.
  • Because the acquisition module obtains the spatial coordinates of only some points on the palm surface, when constructing the virtual palm surface the palm surface construction unit 121 simulates and reproduces the coordinates of the remaining palm-surface points from the acquired ones, thereby obtaining a complete palm surface. The rotating, dragging, shrinking, enlarging, and clicking operations mentioned in the above embodiments are only examples; embodiments of the present invention may also include other interface operations.
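  • One plausible way to "simulate and reproduce" the remaining palm points is scattered-data interpolation over the acquired ones. The following sketch uses SciPy's griddata for this purpose; the patent does not name a specific interpolation method, so this choice is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def complete_palm_surface(sampled_xy, sampled_z, grid_res=32):
    """Fill in unsampled palm points by interpolating over the acquired ones.

    sampled_xy: (N, 2) array of (x, y) positions of acquired palm points.
    sampled_z:  (N,) array of their depths.
    Returns a dense (grid_res, grid_res) depth map of the palm surface.
    """
    xs = np.linspace(sampled_xy[:, 0].min(), sampled_xy[:, 0].max(), grid_res)
    ys = np.linspace(sampled_xy[:, 1].min(), sampled_xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    return griddata(sampled_xy, sampled_z, (gx, gy), method="cubic")
```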
  • Those of ordinary skill in the art will understand that all or part of the flows of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention disclose a method for controlling a user interface, including: collecting three-dimensional spatial information of the user's hand; constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected information; and detecting operations of the three-dimensional virtual hand on the user interface and performing corresponding functions according to the detected operations. Embodiments of the present invention also disclose a device for controlling a user interface. The present invention lets the user control the user interface flexibly and conveniently, improving the user experience.

Description

Method and device for controlling a user interface

Technical field

The present invention relates to the field of communications, and in particular, to a method and device for controlling a user interface.

Background

With the development of technology, people demand ever more humanized and intelligent multimedia devices, and more and more multimedia devices are equipped with a gesture control function, which lets the user issue control commands to the device through hand gestures alone. At present, gesture control devices fall roughly into two categories: one uses a camera to recognize gestures and must determine whether the projection of the hand matches a specific shape (each specific shape corresponds to one command); the other recognizes gestures through a sensor worn by the user. What the two categories have in common is that the gestures representing control commands are fixed and limited, and the user must memorize the control command corresponding to each gesture; when there are too many commands, they are easily confused and the user experience suffers.

Summary of the invention

The technical problem to be solved by the embodiments of the present invention is to provide a method and device for controlling a user interface that let the user control the user interface flexibly and conveniently, improving the user experience.

To solve the above technical problem, an embodiment of the present invention provides a method for controlling a user interface, including:

collecting three-dimensional spatial information of the user's hand;

constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information; and

detecting operations of the three-dimensional virtual hand on the user interface, and performing corresponding functions according to the detected operations.

Collecting the three-dimensional spatial information of the user's hand motion includes:

periodically acquiring the spatial coordinates of points on the user's palm surface in a reference coordinate system.

Constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information includes:

constructing a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional spatial coordinates of points on the palm surface; and

constructing, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

The operations of the three-dimensional virtual hand on the user interface include at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.

The user interface further includes at least one three-dimensional function option; the operations of the three-dimensional virtual hand on the user interface further include an operation requesting execution of the function corresponding to the function option.

Correspondingly, an embodiment of the present invention further provides a device for controlling a user interface, including:

a collection module configured to collect three-dimensional spatial information of the user's hand;

a three-dimensional virtual hand construction module configured to construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information; and

a processing module configured to detect operations of the three-dimensional virtual hand on the user interface and perform corresponding functions according to the detected operations.

The collection module includes: a coordinate acquisition unit configured to periodically acquire the spatial coordinates of points on the user's palm surface in a reference coordinate system.

The three-dimensional virtual hand construction module includes:

a palm surface construction unit configured to construct a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional spatial coordinates of points on the palm surface; and

a back-of-hand construction unit configured to construct, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

The operations of the three-dimensional virtual hand on the user interface include at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.

The user interface further includes at least one three-dimensional function option; the operations of the three-dimensional virtual hand on the user interface further include an operation requesting execution of the function corresponding to the function option.

Implementing the embodiments of the present invention has the following beneficial effects: by constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments let the user control the user interface directly through the virtual hand. The operation method is simple and flexible, and because the user controls the interface through the virtual hand, both tactile and visual realism are improved, greatly improving the user experience.

Brief description of the drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a first embodiment of a method for controlling a user interface according to the present invention; FIG. 2 is a schematic flowchart of a second embodiment of the method for controlling a user interface according to the present invention; FIG. 3 is a schematic structural diagram of an embodiment of a device for controlling a user interface according to the present invention;

FIG. 4 is a schematic structural diagram of an embodiment of the collection module shown in FIG. 3;

FIG. 5 is a schematic structural diagram of an embodiment of the three-dimensional virtual hand construction module shown in FIG. 3;

FIG. 6 is a basic schematic diagram of binocular stereo vision according to the present invention.

Detailed description

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Referring to FIG. 1, which is a schematic flowchart of a first embodiment of a user interface control method according to the present invention, the method includes:

Step S11: Collect three-dimensional spatial information of the user's hand.

The three-dimensional spatial information of the user's hand may be that of the palm surface or the back of the hand, or of the entire hand. To reduce the amount of data processing, only the three-dimensional spatial information of the palm surface may be acquired.

The three-dimensional spatial information can be acquired by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras. The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision: two cameras mimic how human eyes perceive a scene, observing the same scene from two viewpoints to obtain two images at different viewing angles; the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.

Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.

The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points on the user's palm surface are the points at which three-dimensional spatial information is collected.

The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.

Step S12: Construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information.

To better improve the user's visual experience, the user interface can be designed as a virtual three-dimensional interface. Note that the motion amplitude of the virtual hand in this interface matches the user's hand motion either exactly or at a fixed scale.

The three-dimensional spatial information of the user's hand may be the three-dimensional coordinates, in a preset reference coordinate system, of points on the user's palm surface and/or the back of the hand.

When constructing the three-dimensional virtual hand, a virtual hand whose position coordinates are identical to, or a uniform scaling of, those of the user's hand can be built in the three-dimensional user interface from the acquired coordinates; notably, the constructed virtual hand has the same shape and size as the user's hand, or is a uniform scaling of it. To capture the motion of the user's hand, the collection in step S11 is periodic with a very short period; for example, the coordinates of points on the palm surface and/or the back of the hand may be sampled every 0.01 seconds. From the repeatedly sampled coordinates, the virtual hand is rebuilt in the user interface each period; because the period is very short, the virtual hand the user sees appears continuous, and its motion matches the user's hand motion exactly.

Step S13: Detect operations of the three-dimensional virtual hand on the user interface, and perform the corresponding function according to the detected operation.

The operations of the virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.

In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option. A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.

In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.

When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.

For example, when the user rotates the user interface, the characteristic motion is usually a raised palm with the five fingertips rotating around the palm center. The corresponding feature information can be extracted from the periodically sampled coordinates: analyzing and comparing the palm-surface and/or back-of-hand coordinates acquired at different times reveals the motion trend of the hand, and from it the direction and arc through which the user wishes to rotate the interface. When the user drags the user interface, the characteristic motion is usually one or several fingers held in a pressing or pinching posture while moving in a straight line in some direction; the movement direction and distance of the fingers are likewise obtained from the sampled coordinates, determining the intended drag direction and distance.

When the user shrinks the user interface, the characteristic motion is usually a raised palm with the five fingertips drawing together; the degree of closure, obtained from the sampled coordinates, determines how much the user wishes to shrink the interface.

When the user enlarges the user interface, the characteristic motion is usually a raised palm with the five fingertips spreading outward; the spread amplitude, obtained from the sampled coordinates, determines how much the user wishes to enlarge the interface.

When the user needs to operate a three-dimensional function option in the user interface, the user usually moves the virtual hand onto or near the option's location by hand motion. Thus, when the virtual hand is moved onto or near the position of a function option, the user is considered to intend to operate that option.

Rotating, dragging, enlarging, and shrinking a three-dimensional function option are roughly the same as the corresponding operations on the whole user interface; the only difference is the motion amplitude, which distinguishes an operation on the whole interface from one on a single option. For example, when rotating the whole interface the fingers usually spread wide, whereas when rotating a single option they spread less. The spread amplitude can be obtained accurately by analyzing the hand's three-dimensional coordinates and compared against a preset threshold: above the threshold, the user is taken to intend rotating the whole interface; below it, rotating the function option.

Operations requesting execution of the function corresponding to a three-dimensional function option usually include clicking, pressing, and the like. When the user wishes to execute such a function, the virtual hand is usually moved onto or near the option's position, from which it is determined that the user intends to operate that option. When the user makes a click, the characteristic motion is usually the index or middle finger lifting and then lowering while the palm and the other fingers stay still; from this characteristic motion it is determined that the user wishes to execute the function corresponding to the option. By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor; and because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience.

Referring to FIG. 2, which is a schematic flowchart of a second embodiment of a user interface control method according to the present invention, the method includes:

Step S21: Determine whether the shape of the object whose three-dimensional spatial information is to be collected matches a hand shape. If so, proceed to step S23; otherwise, proceed to step S22.

Step S22: Stop collecting the three-dimensional spatial information of the object and display a corresponding prompt on the user interface.

Step S23: Periodically acquire the spatial coordinates of points on the user's palm surface in the reference coordinate system. The spatial coordinates can be acquired by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras.

The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision: two cameras mimic how human eyes perceive a scene, observing the same scene from two viewpoints to obtain two images at different viewing angles; the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.

Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.

The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points on the user's palm surface are the points at which three-dimensional spatial information is collected.

The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Therefore, once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.

Step S24: According to changes in the three-dimensional spatial coordinates of points on the user's palm surface, construct a virtual palm surface consistent with the motion of the palm surface; and, according to that virtual palm surface, construct a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

Step S25: Detect operations of the three-dimensional virtual hand on the user interface, and perform the corresponding function according to the detected operation.

The operations of the virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.

In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option. A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.

In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.

When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.

By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor; and because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience. Having described the embodiments of the method for controlling a user interface in detail with reference to FIG. 1 and FIG. 2, the device corresponding to the above method flows is described below with reference to the accompanying drawings. Referring to FIG. 3, which is a schematic structural diagram of an embodiment of a device for controlling a user interface according to the present invention, the control device 100 includes:

the collection module 110, configured to collect three-dimensional spatial information of the user's hand.

The function of the collection module 110 can be implemented by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and cameras.

The three-dimensional virtual hand construction module 120 is configured to construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information.

To better improve the user's visual experience, the user interface can be designed as a virtual three-dimensional interface. Note that the motion amplitude of the virtual hand in this interface matches the user's hand motion either exactly or at a fixed scale.

The three-dimensional spatial information of the user's hand may be the three-dimensional coordinates, in a preset reference coordinate system, of points on the user's palm surface and/or the back of the hand.

When constructing the three-dimensional virtual hand, a virtual hand whose position coordinates are identical to, or a uniform scaling of, those of the user's hand can be built in the three-dimensional user interface from the acquired coordinates; notably, the constructed virtual hand has the same shape and size as the user's hand, or is a uniform scaling of it.

To capture the motion of the user's hand, the collection of three-dimensional spatial information is periodic, with a very short period; for example, the coordinates of points on the palm surface and/or the back of the hand may be sampled every 0.01 seconds. From the repeatedly sampled coordinates, the virtual hand is rebuilt in the user interface each period; because the period is very short, the virtual hand the user sees appears continuous, and its motion matches the user's hand motion exactly.

The processing module 130 is configured to detect operations of the three-dimensional virtual hand on the user interface and perform corresponding functions according to the detected operations.

The operations of the virtual hand on the user interface typically include rotating, dragging, shrinking, and enlarging the user interface.

In addition, the user interface includes at least one three-dimensional function option. Accordingly, the operations of the virtual hand further include rotating, dragging, shrinking, and enlarging a three-dimensional function option, as well as operations requesting execution of the function corresponding to a function option. A three-dimensional function option may be a three-dimensional solid graphic, and each graphic may correspond to one or more function options. Operations requesting execution of the corresponding function include dragging, clicking, pressing, and so on.

In general, each operation the user performs on the user interface through the virtual hand has its own characteristic motion features. By extracting the motion feature information of the virtual hand's current operation and matching it against motion features pre-stored in a database (each feature corresponds to a control command), the control command the user issued to the user interface can be determined.

When detecting the virtual hand's operations, the acquired coordinates of points on the palm surface and/or back of the hand can be periodically sampled and compared (for example, every 0.05 seconds the currently acquired coordinates are compared with those acquired 0.05 seconds earlier) to determine the motion trend of the user's hand; the motion feature information of the current operation is then extracted from that trend.

For example, when the user rotates the user interface, the characteristic motion is usually a raised palm with the five fingertips rotating around the palm center. The corresponding feature information can be extracted from the periodically sampled coordinates: analyzing and comparing the palm-surface and/or back-of-hand coordinates acquired at different times reveals the motion trend of the hand, and from it the direction and arc through which the user wishes to rotate the interface.

When the user drags the user interface, the characteristic motion is usually one or several fingers held in a pressing or pinching posture while moving in a straight line in some direction; the movement direction and distance of the fingers are likewise obtained from the sampled coordinates, determining the intended drag direction and distance.

When the user shrinks the user interface, the characteristic motion is usually a raised palm with the five fingertips drawing together; the degree of closure, obtained from the sampled coordinates, determines how much the user wishes to shrink the interface.

When the user enlarges the user interface, the characteristic motion is usually a raised palm with the five fingertips spreading outward; the spread amplitude, obtained from the sampled coordinates, determines how much the user wishes to enlarge the interface.

When the user needs to operate a three-dimensional function option in the user interface, the user usually moves the virtual hand onto or near the option's location by hand motion; when this happens, the user is considered to intend to operate that option.

Rotating, dragging, enlarging, and shrinking a three-dimensional function option are roughly the same as the corresponding operations on the whole user interface; the only difference is the motion amplitude, which distinguishes an operation on the whole interface from one on a single option. For example, when rotating the whole interface the fingers usually spread wide, whereas when rotating a single option they spread less. The spread amplitude can be obtained accurately by analyzing the hand's three-dimensional coordinates and compared against a preset threshold: above the threshold, the user is taken to intend rotating the whole interface; below it, rotating the function option.

Operations requesting execution of the function corresponding to a three-dimensional function option usually include clicking, pressing, and the like. When the user wishes to execute such a function, the virtual hand is usually moved onto or near the option's position, from which it is determined that the user intends to operate that option. When the user makes a click, the characteristic motion is usually the index or middle finger lifting and then lowering while the palm and the other fingers stay still; from this characteristic motion it is determined that the user wishes to execute the function corresponding to the option.

By constructing on the user interface a three-dimensional virtual hand consistent with the user's hand motion, the embodiments of the present invention let the user control the interface directly through the virtual hand, without memorizing specific gestures or wearing a sensor; and because the interface is controlled through the virtual hand, both visual and tactile realism improve, improving the user experience.

Referring to FIG. 4, which is a schematic structural diagram of an embodiment of the collection module shown in FIG. 3, the three-dimensional hand acquisition module 110 includes:

the judgment unit 111, configured to determine whether the shape of the object whose three-dimensional spatial information is to be collected matches a hand shape; if it does, the three-dimensional spatial information of the user's hand motion is collected; if it does not, a corresponding prompt is displayed on the user interface; and

the coordinate acquisition unit 112, configured to periodically acquire the spatial coordinates of points on the user's palm surface in the reference coordinate system.

The coordinate acquisition unit 112 can be implemented by a three-dimensional ultrasound system, a three-dimensional camera, or a three-dimensional acquisition device comprising a laser beam-splitting scanner and two cameras.

The acquisition device comprising the laser beam-splitting scanner and two cameras is based on binocular stereo vision: two cameras mimic how human eyes perceive a scene, observing the same scene from two viewpoints to obtain two images at different viewing angles; the three-dimensional spatial information of a target object or point is then inferred by computing the positional disparity between corresponding image points.

The three-dimensional spatial information of the user's hand may be that of the palm surface or the back of the hand, or of the entire hand; to reduce the amount of data processing, only the three-dimensional spatial information of the palm surface may be acquired.

Specifically, the laser beam-splitting scanner includes a beam splitter and a red laser tube, and is used to calibrate the points on the user's hand at which three-dimensional spatial information is to be collected.

The beam splitter disperses the red laser beam generated by the red laser tube into multiple rays parallel to the X-axis and multiple rays parallel to the Y-axis; these rays are usually invisible to the naked eye. The rays interweave to form intersection points arranged in an orderly manner in three-dimensional space (the intersection points are distributed not only on the XY plane but throughout the whole space). When the user's hand enters the interweaving region, the intersection points on the user's palm surface are the points at which three-dimensional spatial information is collected.

The basic principle of binocular stereo vision is shown in FIG. 6, where subscripts l and r denote the corresponding parameters of the left and right cameras. A ray intersection point A(X, Y, Z) on the user's palm surface is imaged on the imaging planes Cl and Cr of the left and right cameras at the image points al(ul, vl) and ar(ur, vr), respectively. These two image points are images of point A and are called "conjugate points". Connecting each conjugate image point with the optical center Ol or Or of its camera gives the projection lines alOl and arOr, whose intersection is point A. Therefore, once the four points Ol, Or, al, and ar are determined, the three-dimensional coordinates of point A in real space can be obtained by calculation.

Referring to FIG. 5, which is a schematic structural diagram of an embodiment of the three-dimensional virtual hand construction module shown in FIG. 3, the construction module 120 includes:

the palm surface construction unit 121, configured to construct a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional spatial coordinates of points on the palm surface; and

the back-of-hand construction unit 122, configured to construct, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

Because the three-dimensional hand acquisition module obtains the spatial coordinates of only some points on the palm surface, when constructing the virtual palm surface the palm surface construction unit 121 simulates and reproduces the coordinates of the other palm-surface points from the acquired ones, thereby obtaining a complete palm surface. The rotating, dragging, shrinking, enlarging, and clicking operations mentioned in the above embodiments are only examples; embodiments of the present invention may also include other interface operations.

Those of ordinary skill in the art will understand that all or part of the flows of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

What is disclosed above is only a preferred embodiment of the present invention, which certainly cannot be used to limit the scope of rights of the present invention; equivalent changes made according to the claims of the present invention still fall within the scope of the invention.

Claims

Claims

1. A method for controlling a user interface, comprising:

collecting three-dimensional spatial information of a user's hand;

constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information; and

detecting operations of the three-dimensional virtual hand on the user interface, and performing corresponding functions according to the detected operations.

2. The method according to claim 1, wherein collecting the three-dimensional spatial information of the user's hand motion comprises:

periodically acquiring the spatial coordinates of points on the user's palm surface in a reference coordinate system.

3. The method according to claim 2, wherein constructing, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information comprises:

constructing a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional spatial coordinates of points on the palm surface; and

constructing, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

4. The method according to claim 1, wherein the operations of the three-dimensional virtual hand on the user interface comprise at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.

5. The method according to claim 4, wherein the user interface further comprises at least one three-dimensional function option; and

the operations of the three-dimensional virtual hand on the user interface further comprise an operation requesting execution of the function corresponding to the function option.

6. A device for controlling operations of a user interface, comprising:

a collection module configured to collect three-dimensional spatial information of a user's hand;

a three-dimensional virtual hand construction module configured to construct, on the user interface, a three-dimensional virtual hand consistent with the user's hand motion according to the collected three-dimensional spatial information; and

a processing module configured to detect operations of the three-dimensional virtual hand on the user interface and perform corresponding functions according to the detected operations.

7. The device according to claim 6, wherein the collection module comprises:

a coordinate acquisition unit configured to periodically acquire the spatial coordinates of points on the user's palm surface in a reference coordinate system.

8. The device according to claim 7, wherein the three-dimensional virtual hand construction module comprises: a palm surface construction unit configured to construct a virtual palm surface consistent with the motion of the user's palm surface according to changes in the three-dimensional spatial coordinates of points on the palm surface; and

a back-of-hand construction unit configured to construct, according to the virtual palm surface, a virtual back-of-hand surface consistent with the motion of the back of the user's hand.

9. The device according to claim 6, wherein the operations of the three-dimensional virtual hand on the user interface comprise at least one of: rotating the user interface, dragging the user interface, shrinking the user interface, and enlarging the user interface.

10. The device according to claim 9, wherein the user interface further comprises at least one three-dimensional function option; and

the operations of the three-dimensional virtual hand on the user interface further comprise an operation requesting execution of the function corresponding to the function option.
PCT/CN2012/086000 2012-04-06 2012-12-06 Method and device for controlling a user interface WO2013149475A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210098842.6A CN102650906B (zh) 2012-04-06 2012-04-06 Method and device for controlling a user interface
CN201210098842.6 2012-04-06

Publications (1)

Publication Number Publication Date
WO2013149475A1 true WO2013149475A1 (zh) 2013-10-10

Family

ID=46692919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/086000 WO2013149475A1 (zh) 2012-04-06 2012-12-06 一种用户界面的控制方法及装置

Country Status (2)

Country Link
CN (1) CN102650906B (zh)
WO (1) WO2013149475A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102650906B (zh) 2012-04-06 2015-11-04 深圳创维数字技术有限公司 Method and device for controlling a user interface
CN103150022B (zh) * 2013-03-25 2016-09-21 深圳泰山体育科技股份有限公司 Gesture recognition method and device
CN105353873B (zh) * 2015-11-02 2019-03-15 深圳奥比中光科技有限公司 Gesture manipulation method and system based on three-dimensional display
CN105929933A (zh) * 2015-12-22 2016-09-07 北京蚁视科技有限公司 Interaction recognition method for a three-dimensional display environment
DE102016206142A1 (de) * 2016-04-13 2017-10-19 Volkswagen Aktiengesellschaft User interface, means of transport, and method for recognizing a user's hand
CN106095107A (zh) * 2016-06-21 2016-11-09 南京邮电大学 Gesture interaction control method applied to an intelligent mobile wheelchair
CN109644181B (zh) * 2017-12-29 2021-02-26 腾讯科技(深圳)有限公司 Multimedia information sharing method, related device, and system
CN111885406A (zh) * 2020-07-30 2020-11-03 深圳创维-Rgb电子有限公司 Smart television control method and device, rotatable television, and readable storage medium
CN113419636B (zh) * 2021-08-23 2021-11-30 北京航空航天大学 Gesture recognition and automatic tool matching method in virtual maintenance

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236454A (zh) * 2007-01-30 2008-08-06 丰田自动车株式会社 Operating device
CN102184342A (zh) * 2011-06-15 2011-09-14 青岛科技大学 Virtual-real fusion hand function rehabilitation training system and method
CN102350700A (zh) * 2011-09-19 2012-02-15 华南理工大学 Vision-based robot control method
CN102650906A (zh) 2012-04-06 2012-08-29 深圳创维数字技术股份有限公司 Method and device for controlling a user interface


Also Published As

Publication number Publication date
CN102650906A (zh) 2012-08-29
CN102650906B (zh) 2015-11-04

Similar Documents

Publication Publication Date Title
WO2013149475A1 (zh) Method and device for controlling a user interface
KR101522991B1 (ko) Operation input device, method, and program
KR101844390B1 (ko) Systems and techniques for user interface control
CN103336575B (zh) Human-computer interaction smart glasses system and interaction method
JP6057396B2 (ja) Three-dimensional user interface device and three-dimensional operation processing method
CN102662577B (zh) Cursor operation method based on three-dimensional display and mobile terminal
Song et al. GaFinC: Gaze and Finger Control interface for 3D model manipulation in CAD application
CN110502104A (zh) Device for contactless operation using a depth sensor
CN107357428A (zh) Human-computer interaction method, device, and system based on gesture recognition
CN109145802B (zh) Kinect-based multi-user gesture human-computer interaction method and device
EP3021206B1 (en) Method and device for refocusing multiple depth intervals, and electronic device
JP6618276B2 (ja) Information processing device, control method therefor, program, and storage medium
KR20120126508A (ko) Touch recognition method in a virtual touch device that does not use a pointer
CN112527112B (zh) Multi-channel immersive flow-field visualization human-computer interaction method
Bai et al. Free-hand interaction for handheld augmented reality using an RGB-depth camera
Hernoux et al. A seamless solution for 3D real-time interaction: design and evaluation
JP5863984B2 (ja) User interface device and user interface method
WO2023227072A1 (zh) Method, apparatus, device, and medium for determining a virtual cursor in a virtual reality scene
CN109669542B (zh) Ray-casting three-dimensional target selection method based on backtracking of pointing interaction history
WO2018076609A1 (zh) Terminal operation method and terminal
KR20160055407A (ko) Holography touch method and projector touch method
Tuntakurn et al. Natural interaction on 3D medical image viewer software
CN114327063A (zh) Interaction method and apparatus for a target virtual object, electronic device, and storage medium
WO2013149476A1 (zh) Method and device for controlling operations of a user interface
KR20200120467A (ko) HMD device and operation method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12873524

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 04/03/2015)

122 Ep: pct application non-entry in european phase

Ref document number: 12873524

Country of ref document: EP

Kind code of ref document: A1