WO2013139181A1 - User interaction system and method - Google Patents

User interaction system and method

Info

Publication number
WO2013139181A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
dimensional
interface
processing unit
Application number
PCT/CN2013/070608
Other languages
French (fr)
Chinese (zh)
Inventor
刘广松
Original Assignee
乾行讯科(北京)科技有限公司
Application filed by 乾行讯科(北京)科技有限公司
Publication of WO2013139181A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • The present invention relates to the field of electronic application technologies and, in particular, to a user interaction system and method.
  • Virtual reality technology uses computer simulation to generate a virtual three-dimensional world, providing the user with simulated visual, auditory, tactile and other sensory input, so that the user can observe things in the three-dimensional space in a timely, unrestricted way, as if physically present, and interact with the elements of the virtual world. Virtual reality has a virtuality that transcends reality. It is a new computer technology that developed alongside multimedia technology, using 3D graphics generation, multi-sensor interaction and high-resolution display technologies to generate a realistic three-dimensional virtual environment.
  • An embodiment of the present invention provides a user interaction system to improve the user experience.
  • An embodiment of the invention also proposes a user interaction method to enhance the user experience.
  • A user interaction system includes an information operation processing unit and a three-dimensional stereoscopic display unit. The information operation processing unit provides the three-dimensional stereoscopic display unit with a three-dimensional interface display signal, and the three-dimensional stereoscopic display unit displays a three-dimensional interface to the user according to that signal;
  • a motion capture unit configured to capture the limb spatial movement information made by the user browsing the three-dimensional interface, and to send that information to the information operation processing unit;
  • the information operation processing unit is further configured to determine the interactive operation command corresponding to the user's limb spatial movement information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional interface display signal that results from executing that command.
  • the information operation processing unit is a mobile terminal, a computer or a cloud computing-based information service platform.
  • the system further includes a viewing angle sensing unit worn on the user's head;
  • a view sensing unit configured to sense user head motion information, and send the user head motion information to an information operation processing unit;
  • An information operation processing unit further configured to determine the user's real-time viewing-angle information from the user's head motion information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional interface display signal rendered for that real-time viewing angle.
  • the system further includes a sound processing unit
  • a sound processing unit configured to capture voice input information of the user, send the voice input information to the information operation processing unit; and play the voice play information provided by the information operation processing unit to the user;
  • the information operation processing unit is further configured to determine a voice operation command from the voice input information, to provide the three-dimensional stereoscopic display unit with the three-dimensional interface signal that results from executing the voice operation command, and to provide the sound processing unit with voice playback information related to the three-dimensional interface display signal.
  • the three-dimensional stereoscopic display unit is a head mounted device.
  • the three-dimensional stereoscopic display unit and the viewing-angle sensing unit are physically integrated into a single portable, user-wearable device.
  • the three-dimensional stereoscopic display unit, the viewing-angle sensing unit and the sound processing unit are physically integrated into a single portable, user-wearable device.
  • the three-dimensional stereoscopic display unit, the viewing-angle sensing unit and the motion capture unit are physically integrated into a portable head-mounted device or a portable wearable device.
  • the three-dimensional stereoscopic display unit, the viewing-angle sensing unit, the sound processing unit and the motion capture unit are physically integrated into a portable head-mounted device or a portable wearable device.
  • the motion capture unit is a portable wearable device, or is fixed at a position outside the user's body from which it can capture the user's actions.
  • the information operation processing unit is further configured to display a spatial virtual pointer element corresponding to the user's hand on the three-dimensional interface;
  • a motion capture unit configured to capture, in real time, the hand position and shape information the user produces while browsing the three-dimensional interface;
  • an information operation processing unit configured to determine, from the user's hand position and shape information, the corresponding interactive operation command, and to output the image signal of the spatial virtual pointer element in real time, so that the motion trajectory of the spatial virtual pointer element on the three-dimensional interface stays consistent with the trajectory of the user's hand, while the user is presented in real time with the three-dimensional interface display signal that results from executing the command (a coordinate-mapping sketch follows below).
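A minimal sketch of this hand-to-pointer mapping, assuming a linearly calibrated tracking volume. The ranges, units and function names below are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch: map a tracked hand position (sensor space) to the
# spatial virtual pointer position in the 3D interface. The calibrated
# ranges are assumptions; the patent specifies no concrete values.

def map_hand_to_pointer(hand_xyz, hand_range, interface_range):
    """Linearly rescale each hand coordinate into interface coordinates."""
    pointer = []
    for value, (h_min, h_max), (i_min, i_max) in zip(hand_xyz, hand_range, interface_range):
        t = (value - h_min) / (h_max - h_min)   # normalize to [0, 1]
        t = min(max(t, 0.0), 1.0)               # clamp so the pointer stays inside the interface
        pointer.append(i_min + t * (i_max - i_min))
    return tuple(pointer)

# Example: a hand tracked in metres inside a calibrated box, mapped to interface units.
hand_range = [(-0.3, 0.3), (-0.2, 0.2), (0.3, 0.8)]             # x, y, depth (assumed)
interface_range = [(0.0, 1920.0), (0.0, 1080.0), (0.0, 100.0)]  # assumed interface extents
print(map_hand_to_pointer((0.0, 0.1, 0.55), hand_range, interface_range))
```

Because the mapping is applied frame by frame, the pointer's trajectory in the interface follows the hand's trajectory, which is the consistency property the claim requires.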
  • a user interaction method comprising:
  • An interactive operation command corresponding to the user's limb spatial movement information is determined, and the three-dimensional interface display signal corresponding to the executed command is provided in real time.
  • Capturing the limb spatial movement information made by the user browsing the three-dimensional interface consists of capturing the precise positioning operations and/or non-precise positioning operations made while browsing the interface.
  • The precise positioning operations include: the user's hand moving to control the spatial virtual pointer element freely in three dimensions within the three-dimensional interface; recognizing two different states of the user's hand and, on a state change, the position of the corresponding spatial virtual pointer element in the three-dimensional interface, where the two hand states are a fist and an extended index finger; clicking a button of the three-dimensional interface; or selecting a specific area on the three-dimensional interface.
  • The non-precise positioning operations include: hand hovering, a hand swiping from right to left, from left to right, from top to bottom or from bottom to top, the two hands moving apart or together, or waving.
  • The method further includes an initial calibration step that acquires the user's interaction habits in advance.
  • The method further includes: sensing the user's head motion information; determining the user's real-time viewing angle from it; and providing, in real time, the three-dimensional interface display signal for that viewing angle.
  • The method further includes: capturing the user's voice input information; determining a voice operation command from it; and providing the three-dimensional interface signal that results from executing the voice operation command.
  • A novel user interaction system and method are thus proposed.
  • the user can be immersed in a private and interesting virtual information interaction space, and perform natural information interaction in the space.
  • Many of the products developed based on embodiments of the present invention will be consumer electronics products with market competitiveness.
  • The unique natural interaction solution of the embodiments of the present invention will promote the development of products and applications in consumer-level virtual reality and augmented reality, greatly improve the user's interactive experience, and can give rise to a series of meaningful applications, thereby greatly enhancing the user experience.
  • The embodiment of the present invention proposes a natural interaction solution integrating voice interaction, gesture interaction, natural viewing-angle change and the like.
  • users can naturally interact with elements in the 3D virtual information natural interaction space interface to obtain information or entertain, creating an immersive, unique and attractive user experience.
  • an embodiment of the present invention provides a three-dimensional virtual information natural interaction interface of a natural interaction technology, and the interaction interface includes a plurality of three-dimensional solid elements that can perform natural interaction.
  • the interface is designed to provide interactive feedback to the user in real time through sound, interactive changes in light and shadow, etc., enhancing the user's natural interaction fun and experience.
  • the user can naturally control the virtual pointer corresponding to the user's hand in the natural interaction interface of the three-dimensional virtual information, and naturally interact with the natural interactive interface of the three-dimensional virtual information.
  • the embodiments of the present invention can be applied to any display device and an interactive interface.
  • the solution of adding a pointer corresponding to the user's hand in real time on the interactive interface can facilitate the user to perform a series of precise touch interaction operations.
  • This kind of interaction is very natural, conforms to humans' basic gesture interaction patterns, and reduces the user's cost of learning to operate the device. It suits a split design that separates the body's natural interactive control from the portable information processing hardware, enabling people to concentrate on the information they care about rather than on the hardware device itself.
  • FIG. 1 is a schematic structural diagram of a user interaction system according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of a user interaction method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a three-dimensional virtual information natural interaction interface according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a three-dimensional virtual information natural interaction interface according to another embodiment of the present invention.
  • A complete virtual reality system can be composed of a virtual environment; a virtual environment processor with a high-performance computer at its core; a vision system with a helmet-mounted display at its core; an auditory system centered on speech recognition, sound synthesis and sound localization; body position and posture tracking devices such as azimuth trackers, data gloves and data suits; and functional units such as taste, smell, touch and force-feedback systems.
  • FIG. 1 is a schematic structural diagram of a user interaction system according to an embodiment of the present invention.
  • the system includes an information operation processing unit 101, a three-dimensional stereoscopic display unit 102, and a motion capture unit 103, wherein:
  • An information operation processing unit 101 configured to provide a three-dimensional interface display signal to the three-dimensional stereoscopic display unit 102;
  • the three-dimensional display unit 102 is configured to display a three-dimensional interface to the user according to the three-dimensional interface display signal;
  • the action capture unit 103 is configured to capture the body space movement information made by the user browsing the three-dimensional interface, and send the limb space movement information to the information operation processing unit 101;
  • the information operation processing unit 101 is further configured to determine an interactive operation command corresponding to the user's limb space movement information, and provide the three-dimensional stereoscopic display unit 102 with a three-dimensional stereo interface display signal corresponding to the execution of the interactive operation command in real time.
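To make the division of labor concrete, here is a schematic sketch of the capture, interpret and re-render loop formed by units 101, 102 and 103. The class names, message shapes and gesture vocabulary are assumptions made for illustration only.

```python
# Schematic of the interaction loop: the motion capture unit feeds limb
# movement samples to the information operation processing unit, which maps
# them to interaction commands and pushes an updated three-dimensional
# interface display signal to the stereoscopic display unit.

class MotionCaptureUnit:                      # stands in for unit 103
    def capture(self):
        # A real unit would return depth images or skeleton data.
        return {"hand_xyz": (0.1, 0.0, 0.5), "gesture": "swipe_left"}

class InfoProcessingUnit:                     # stands in for unit 101
    COMMANDS = {"swipe_left": "previous_page", "swipe_right": "next_page"}
    def interpret(self, sample):
        return self.COMMANDS.get(sample["gesture"], "move_pointer")
    def render(self, command):
        return f"<3D interface frame after '{command}'>"

class StereoDisplayUnit:                      # stands in for unit 102
    def show(self, frame):
        print("displaying:", frame)

capture, processor, display = MotionCaptureUnit(), InfoProcessingUnit(), StereoDisplayUnit()
for _ in range(2):                            # stand-in for the real-time loop
    sample = capture.capture()
    command = processor.interpret(sample)
    display.show(processor.render(command))
```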
  • The system further includes a viewing-angle sensing unit 104 that is worn on the user's head.
  • the view sensing unit 104 is configured to sense user head motion information, and send the user head motion information to the information operation processing unit 101;
  • the information operation processing unit 101 is further configured to determine a real-time viewing angle of the user according to the user's head motion information, and provide a three-dimensional stereoscopic interface display signal based on the real-time viewing angle of the user to the three-dimensional stereoscopic display unit 102 in real time.
  • the system further includes a sound processing unit 105.
  • the sound processing unit 105 is configured to capture voice input information of the user, send the voice input information to the information operation processing unit 101, and play the voice play information provided by the information operation processing unit 101 to the user;
  • The information operation processing unit 101 is further configured to determine a voice operation command from the voice input information, to provide the three-dimensional stereoscopic display unit 102 with the three-dimensional interface signal that results from executing the voice command, and to provide the sound processing unit 105 with voice playback information related to the three-dimensional interface display signal.
  • the three-dimensional display unit 102 can be embodied as a head mounted device, such as a glasses-type three-dimensional display device.
  • The glasses-type three-dimensional display device creates binocular parallax by introducing a slight difference between the images shown on the micro-displays for the left and right eyes; the brain interprets this parallax to judge distance and produce stereoscopic vision.
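The parallax principle can be summarized in a few lines: render the same scene from two camera positions separated by the interpupillary distance. The sketch below assumes a typical IPD value; the patent does not specify one.

```python
# Illustrative sketch of how a glasses-type display creates binocular
# parallax: one render per eye, with the cameras offset by half the
# interpupillary distance (IPD). 0.064 m is a commonly quoted average,
# assumed here for illustration.

IPD = 0.064  # metres (assumption)

def eye_camera_positions(head_position):
    x, y, z = head_position
    left = (x - IPD / 2, y, z)
    right = (x + IPD / 2, y, z)
    return left, right

left_cam, right_cam = eye_camera_positions((0.0, 1.7, 0.0))
print("render the scene once per eye:", left_cam, right_cam)
```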
  • The information operation processing unit 101 may be any device capable of providing a three-dimensional interface display signal.
  • The information operation processing unit 101 may be, for example, a mobile terminal, a computer, or a cloud-computing-based information service platform.
  • By processing the corresponding interactive operation command through its built-in operating system, the information operation processing unit 101 can perform an operation (such as dialing a phone call or browsing a web page), update the corresponding three-dimensional interface display signal in real time over a wired or wireless link, and output that display signal to the three-dimensional stereoscopic display unit 102.
  • The communication between the information operation processing unit 101 and the three-dimensional stereoscopic display unit 102 can take many specific forms, including but not limited to wireless broadband, WiFi, Bluetooth, infrared, mobile communication, USB or wired transmission.
  • The communication between the information operation processing unit 101 and the motion capture unit 103, the viewing-angle sensing unit 104 or the sound processing unit 105 can likewise take many specific forms, including but not limited to wireless broadband, WiFi, Bluetooth, infrared, mobile communication, USB or wired transmission.
  • The sound processing unit 105 may include an array sound collection sensor, a speaker module and a data transmission module.
  • The sound processing unit 105 may capture the user's voice input through its sound collection sensor and transmit the captured voice data to the information operation processing unit 101 for further recognition processing; it also receives and processes speech signals from the information operation processing unit 101 to give the user audible feedback.
  • the motion capture unit 103 may include an optical depth sensor and a data transmission module.
  • Depth images of the user's hand or hands are obtained in real time by the optical depth sensor and transmitted in real time through the data transmission module to the information operation processing unit 101, which analyzes them.
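A crude sketch of one way the hand could be located in such depth images, assuming the hand is the connected region nearest the sensor. Real trackers are far more robust; the band width and frame data are invented for illustration.

```python
# Minimal hand-detection stand-in: take the pixels within a small depth band
# of the closest valid reading and return their centroid.

import numpy as np

def nearest_blob_centroid(depth, band=0.05):
    """Return the (row, col, depth) centroid of pixels within `band` metres
    of the closest valid pixel -- a crude stand-in for hand detection."""
    valid = depth > 0                       # zero means no reading
    nearest = depth[valid].min()
    mask = valid & (depth < nearest + band)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean(), depth[mask].mean()

# Fake 8x8 depth frame (metres): a "hand" patch at ~0.5 m over a 2 m background.
frame = np.full((8, 8), 2.0)
frame[2:4, 3:5] = 0.5
print(nearest_blob_centroid(frame))
```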
  • The viewing-angle sensing unit 104 may include micro-electronic sensors such as a gyroscope, an accelerometer and an electronic compass, together with a data transmission module.
  • The viewing-angle sensing unit 104 can be worn fixed on the user's head to sense the user's head motion information and transmit the corresponding data through the data transmission module to the information operation processing unit 101, which further analyzes it to obtain the user's real-time viewing direction and its changes.
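One plausible way such a unit could turn gyroscope and accelerometer readings into a head angle is a complementary filter; the patent does not name a fusion method, so the filter, its constant and the sample data below are all assumptions.

```python
# Sketch: fuse gyroscope and accelerometer readings into a head pitch angle.
# Trust the gyroscope short-term and the accelerometer (gravity) long-term.

import math

def complementary_pitch(prev_pitch, gyro_rate, accel, dt, alpha=0.98):
    """gyro_rate: pitch rate in rad/s; accel: (ax, ay, az) in g units."""
    gyro_pitch = prev_pitch + gyro_rate * dt      # integrate angular rate
    accel_pitch = math.atan2(accel[1], accel[2])  # tilt implied by gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

pitch = 0.0
for gyro_rate, accel in [(0.2, (0.0, 0.05, 0.99)), (0.1, (0.0, 0.08, 0.99))]:
    pitch = complementary_pitch(pitch, gyro_rate, accel, dt=0.01)
print(f"estimated pitch: {pitch:.4f} rad")
```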
  • The information operation processing unit 101 receives and processes the data provided in real time by the sound processing unit 105, the motion capture unit 103 and the viewing-angle sensing unit 104, and updates, in real time, the three-dimensional interface display signal supplied to the three-dimensional stereoscopic display unit 102.
  • the information operation processing unit 101 has corresponding computing capabilities and is capable of communicating with other units.
  • The information operation processing unit 101 receives and analyzes the data from the sound processing unit 105, the motion capture unit 103 and the viewing-angle sensing unit 104, determines the user's interaction intent in real time and, combining it with the system's own three-dimensional natural interaction interface, uses 3D graphics generation technology to render in real time the three-dimensional virtual environment as updated by the natural interaction under the current user viewing angle, converting it into the three-dimensional stereoscopic display signal delivered in real time to the three-dimensional stereoscopic display unit 102.
  • the three-dimensional display unit 102 can be a portable head mounted device; and the view sensing unit 104 can be a portable head mounted device.
  • the three-dimensional stereoscopic display unit 102 and the viewing angle sensing unit 104 can also be integrated into a portable head mounted device.
  • the three-dimensional stereoscopic display unit 102, the viewing angle sensing unit 104, and the sound processing unit 105 may be integrated into one portable head mounted device or wearable device.
  • the three-dimensional stereoscopic display unit 102, the viewing-angle sensing unit 104 and the gesture recognition module may be integrated into a portable head-mounted device or wearable device;
  • the three-dimensional stereoscopic display unit 102, the viewing angle sensing unit 104, the motion capturing unit 103, and the sound processing unit 105 may be integrated into a portable head mounted device or a wearable device.
  • The motion capture unit 103 may be a portable wearable device, or may be fixed at a position outside the user's body from which it can capture the user's actions.
  • The motion capture unit 103 can be worn on the chest, or even on the head (e.g., integrated with glasses), to facilitate capturing the user's body movements.
  • When the user wears the three-dimensional stereoscopic display unit 102 and the viewing-angle sensing unit 104, connected with the information operation processing unit 101 and the other units, the user feels as if he or she has entered a naturally interactive virtual three-dimensional information space.
  • FIG. 3 is a schematic diagram of a three-dimensional virtual information natural interaction interface according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a three-dimensional virtual information natural interaction interface according to another embodiment of the present invention.
  • The three-dimensional information space can contain a great deal of user-defined content, for example a user's virtual pet, favorite news sections, mail and the like. The whole virtual environment is rendered in real time by the information operation processing unit 101 using 3D graphics generation technology and presented to the user in dynamic three-dimensional form.
  • The sensors in the viewing-angle sensing unit 104 acquire the relevant data in real time and transmit quantized measurements of the user's real-time motion state to the information operation processing unit 101.
  • The information operation processing unit 101 further analyzes this data to obtain the user's real-time viewing-angle information, which is used to re-render the three-dimensional virtual environment under the new viewing angle, so that the user can change viewpoint through actions such as looking up or turning the head.
  • The virtual environment the user sees is adjusted in real time according to the user's change of viewing angle, so that within the virtual space the user can naturally look at the content in the corresponding viewing direction through head movements such as turning the head.
  • the feeling of head-turning in the virtual space is as close as possible to the feeling of making corresponding actions in real space.
  • Users can also interact with the three-dimensional virtual space they see through voice. For example, when the user says "menu", the voice signal is collected by the array microphone of the sound processing unit 105, digitized, and transmitted to the information operation processing unit 101 for analysis.
  • A statistical matching analysis algorithm determines that the user's interaction intent is to open the interactive menu; the information operation processing unit 101 then directs the three-dimensional virtual space interaction interface to execute the command, and an interactive menu appears in the user's field of view.
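The "statistical matching" step is not spelled out in the text; as a stand-in, here is a minimal sketch that fuzzily matches recognized speech against a small command vocabulary. The vocabulary and the use of difflib are assumptions.

```python
# Sketch of the keyword-matching step: recognized speech text is matched
# against a command vocabulary and the closest entry wins.

import difflib

VOICE_COMMANDS = {"menu": "open_menu", "back": "previous_page", "close": "close_window"}

def match_voice_command(recognized_text, cutoff=0.6):
    """Return the command for the closest vocabulary word, or None."""
    hits = difflib.get_close_matches(recognized_text.lower(),
                                     list(VOICE_COMMANDS), n=1, cutoff=cutoff)
    return VOICE_COMMANDS[hits[0]] if hits else None

print(match_voice_command("menu"))   # exact -> open_menu
print(match_voice_command("menus"))  # fuzzy -> open_menu
```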
  • The motion capture unit 103 captures and transmits the depth image sequence of the user's hand to the information operation processing unit 101 in real time; through a series of software algorithms, the information operation processing unit 101 analyzes the sequence in real time to obtain the movement trajectory of the user's hand and determine the user's gesture interaction intent.
  • the corresponding virtual pointer in the user's field of view corresponds to the motion and position of the user's hand, and the user is fed back with corresponding gestures.
  • The following describes the system workflow, taking the actions of the user raising, moving and clicking a hand as an example.
  • The motion capture unit 103 acquires the depth image sequence of its current field of view in real time and transmits it to the information operation processing unit 101, where matching recognition algorithms analyze the received sequence in real time, identify the user's hand as a valid feature-tracking object, and use its depth image data to compute its three-dimensional position in real time.
  • A virtual pointer corresponding to the user's hand appears in the three-dimensional virtual environment in the user's field of view; the user controls the movement of the virtual pointer just as if controlling his or her own hand.
  • the user controls the virtual pointer to move to the virtual button by hand.
  • the virtual pointer shape changes, prompting the user that the position can be clicked.
  • When the user makes a click action, or the hand changes from the extended-index-finger state to the fist state, the information operation processing unit 101 analyzes the motion trajectory and shape changes of the user's hand from the depth image sequence supplied by the motion capture unit 103, and through a series of redundant motion-matching algorithms judges that the user's interaction intent is click-and-confirm.
  • The information operation processing unit 101 then directs the three-dimensional virtual interaction space interface to carry out that intent and presents the result of the interaction to the user in real time.
  • In various electronic devices, such as portable electronic devices, the three-dimensional stereoscopic display unit 102 is supplied with the three-dimensional interface display signal by the information operation processing unit 101, and interaction proceeds by recognizing the limb spatial movement information the user makes with respect to the three-dimensional interface.
  • The embodiment of the present invention likewise proposes a humanized interaction scheme for the aforementioned three-dimensional interface, based on recognizing human limb motion (preferably hand gestures); the scheme seamlessly joins the three-dimensional interface with the body's movements so that the user can manipulate information.
  • It also provides a stable interaction development platform on which developers can build a wide range of applications.
  • Embodiments of the present invention provide an accurate interaction solution: the user can interact with any interactive interface through touch-style operations that conform to the body's natural interaction patterns.
  • The motion capture unit 103 may be chosen as an infrared depth imaging sensor device, in which case the user's limb spatial movement information is the image signal, containing depth-of-field information, captured by that device.
  • The information operation processing unit 101 receives the depth image information of the user's limb movement from the motion capture unit 103, determines the user's interaction intent (i.e., the interactive operation command) through software algorithms, and provides the three-dimensional display unit in real time with the three-dimensional interface display signal that results from executing the command.
  • The information operation processing unit 101 first analyzes the real-time position information of the user's limb from the received real-time image information, and stores the historical position information of the user's limb (for example, the hand) over a certain length of time for further judgment of the user's interaction intent.
  • The further identified user interaction intents include simple one- or two-handed movement (the default), one- or two-handed drag, and one- or two-handed click, hover and swing operations, among others.
  • The information operation processing unit 101 is further configured to display a spatial virtual pointer element corresponding to the user's hand on the three-dimensional interface; the motion capture unit 103 captures in real time the hand position and shape information the user produces while browsing the three-dimensional interface;
  • the information operation processing unit 101 determines, from the user's hand position and shape information, the corresponding interactive operation command and outputs the image signal of the spatial virtual pointer element in real time, so that the motion trajectory of the spatial virtual pointer element on the three-dimensional interface stays consistent with the trajectory of the user's hand, while the user is provided in real time with the three-dimensional interface display signal that results from executing the command.
  • The motion capture unit 103 records and transmits image data to the information operation processing unit 101 in real time; through a series of software algorithms, the information operation processing unit 101 determines from the image data that the user's gesture track swipes from right to left, maps it to an interactive command (for example, return to the previous page), processes that command data stream and gives the user feedback.
  • The information operation processing unit 101 can recognize a series of interactive commands, for example the gesture actions for "start interaction/OK/select/click", "move (up, down, left, right, forward, back)", "zoom in", "zoom out", "rotate" and "exit/end the interaction", convert them into interactive operation commands in real time, execute them correspondingly, and then output the resulting interactive display state to the user.
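Keeping the gesture-to-command correspondence as plain data, as in the sketch below, is one way to make it editable in the manner the following items describe. Every mapping shown is an assumed example, not a mapping defined by the patent.

```python
# Sketch of a gesture-to-command dispatch table covering the command set
# listed above. Plain data keeps the correspondence user-editable.

GESTURE_COMMANDS = {
    "pinch":            "start interaction/OK/select/click",
    "swipe_right_left": "return to the previous page",
    "move_up":          "move (up)",
    "hands_apart":      "zoom in",
    "hands_together":   "zoom out",
    "rotate_wrist":     "rotate",
    "wave":             "exit/end the interaction",
}

def execute_gesture(gesture):
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return "unrecognized gesture: treated as pointer movement"
    return f"executing interactive command: {command}"

print(execute_gesture("hands_apart"))
# A user could remap a gesture to match personal habits:
GESTURE_COMMANDS["wave"] = "return to the previous page"
print(execute_gesture("wave"))
```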
  • Suppose, for example, that a limb action corresponds to the "return to the previous page" interactive command.
  • The motion capture unit 103 records and transmits image data to the information operation processing unit 101 in real time; the information operation processing unit 101 analyzes the image data through a series of software algorithms, concludes that the user's gesture track swipes from right to left, determines via the software algorithm that this gesture corresponds to the "return to the previous page" command, executes that command, and outputs the display state after it has been carried out.
  • The information operation processing unit 101 has self-learning capability and certain user-defined extension functions: the user can improve the system's gesture recognition according to his or her own gesture habits, and can customize operation gestures and behaviors according to personal preference.
  • The user interaction recognition software presets many parameters, such as human skin color information and arm length information. Initially these parameters take statistical average values that satisfy most users; the system then realizes self-learning through software algorithms, so that, as the user keeps using it, the software can adjust some parameters to the user's own characteristics, biasing recognition toward that specific user and improving the system's gesture recognition ability.
  • The user interaction recognition software should also provide a user-defined interface, for example letting a user-specific gesture track represent a user-defined operation command, thereby realizing the system's personalized, customizable features.
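The self-learning behavior described above could be as simple as nudging each preset parameter toward the values measured for the current user; the exponential moving average below is one such scheme, with invented parameter names and learning rate.

```python
# Sketch of "self-learning" recognition parameters: start from a statistical
# average and drift toward the observed user with a running average.

def adapt_parameter(current, observed, rate=0.05):
    """Exponential moving average toward the user's measured value."""
    return (1 - rate) * current + rate * observed

arm_length = 0.62                       # assumed population-average start (metres)
for measured in (0.70, 0.71, 0.69):     # values estimated while the user interacts
    arm_length = adapt_parameter(arm_length, measured)
print(f"adapted arm length estimate: {arm_length:.3f} m")
```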
  • The user's interactions with the three-dimensional interface fall into two categories: non-precise positioning operations, such as "page turning", "forward" and "back"; and precise positioning operations, such as clicking a button in the interactive interface or selecting a specific area.
  • The precise positioning operations may include: the user's hand moving to control the spatial virtual pointer element freely in three dimensions within the three-dimensional interface; recognizing two different states of the user's hand and, on a state change, the position of the corresponding spatial virtual pointer element in the three-dimensional interface, where the two hand states are a fist and an extended index finger; clicking a button of the three-dimensional interface; or selecting a specific area on the three-dimensional interface.
  • The non-precise positioning operations may include, for example, a hand swiping from right to left, from left to right, from top to bottom or from bottom to top, and the two hands moving apart or together.
  • The information operation processing unit 101 analyzes the user's hand trajectory to determine the intended interactive command, thereby achieving accurate interface interaction.
  • The correspondence between user gestures and specific interactive operation commands may be preset. Moreover, this correspondence is preferably editable, so that new interactive commands can be added, or the gesture corresponding to a command changed to suit the user's habits.
  • The technical solution of the present invention is described below, taking recognition of a one-handed click interaction as an example.
  • The user raises one hand (for example, the right hand) into the signal acquisition range of the motion capture unit 103.
  • The user performs a forward click action according to his or her own habits; suppose the entire click action takes 0.5 seconds. The motion capture unit 103 transmits the collected image information of the user's hand to the information operation processing unit 101 in real time.
  • The information operation processing unit 101 receives the image information data transmitted in real time and stores the historical image data for a certain period; suppose the stored history spans 1 second.
  • Software in the information operation processing unit 101 analyzes the image data of the user's hand over the past second in real time and derives the spatial displacement of the user's hand during that second.
  • A logic algorithm determines that the movement trajectory of the user's hand first matches simple movement, and that the hand's overall trajectory over the following 0.5 seconds makes the probability that the user performed a click sufficiently high (i.e., the probability exceeds a preset threshold) for it to be recognized as a click operation. At this moment, therefore, the information operation processing unit 101 concludes that within the past second the user moved for 0.5 seconds and clicked during the final 0.5 seconds.
  • The click interaction intent obtained from this analysis is encoded and transmitted to the display signal source through the communication module. Note that during the 0.5 seconds before this moment the user's hand position was recognized as the default movement operation, so the pointer corresponding to the user's hand on the interactive interface had its position continuously updated.
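The click criterion above (about one second of stored history, a roughly half-second click motion, and a probability threshold) can be sketched as follows. All of the numeric thresholds, the frame rate and the forward-push heuristic are assumptions; the patent only requires that the trajectory match a preset criterion.

```python
# Sketch: keep ~1 s of hand positions and flag a click when the last ~0.5 s
# shows a sharp forward (depth) push with little lateral drift.

from collections import deque

class ClickDetector:
    def __init__(self, fps=30, window_s=1.0, click_s=0.5,
                 push_mm=40.0, drift_mm=15.0):
        self.history = deque(maxlen=int(fps * window_s))  # ~1 s of samples
        self.click_n = int(fps * click_s)                 # ~0.5 s of samples
        self.push_mm, self.drift_mm = push_mm, drift_mm

    def update(self, x_mm, y_mm, z_mm):
        self.history.append((x_mm, y_mm, z_mm))
        if len(self.history) < self.click_n:
            return False
        recent = list(self.history)[-self.click_n:]
        push = recent[0][2] - recent[-1][2]               # forward = depth shrinks
        drift = max(abs(a - b)                            # frame-to-frame x/y motion
                    for p0, p1 in zip(recent, recent[1:])
                    for a, b in zip(p0[:2], p1[:2]))
        return push > self.push_mm and drift < self.drift_mm

detector = ClickDetector()
clicked = False
for i in range(30):                    # simulated 1 s of a steady forward push
    clicked = detector.update(100.0, 200.0, 600.0 - 3.0 * i)
print("click recognized:", clicked)
```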
  • the initial calibration setting process can include:
  • The user is instructed, through the three-dimensional interactive display interface, to extend both hands into the detection area of the motion capture unit; the user's hands are image-sampled and recognized, and the shape parameters used to identify the user's hands are established.
  • The user then defines the spatial extent of hand movement during interaction, for example by placing a hand in turn at the corner points of a spatial plane and at near and far points; the spatial extent of the user's hand interaction is determined through image sampling and analysis.
  • From the data transmitted by the motion capture unit 103 during this calibration process, the information operation processing unit 101 determines the relative position of the user's hand at each point, fixing the key parameters of the recognition interaction algorithm; it then instructs the user to perform several one- or two-handed click and drag operations to extract the key parameters of the corresponding interaction intent criteria.
  • The initial calibration process then ends, and the results are saved as a callable information file; in the future the user can directly load the corresponding file.
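Persisting the calibration as a reloadable per-user profile might look like the sketch below; the stored fields, file name and JSON format are illustrative assumptions.

```python
# Sketch: save the initial-calibration results so the user can reload them
# later instead of recalibrating.

import json, os

def save_profile(path, hand_shape_params, interaction_box, click_params):
    profile = {"hand_shape": hand_shape_params,
               "interaction_box": interaction_box,  # calibrated spatial range
               "click": click_params}               # per-user gesture criteria
    with open(path, "w") as f:
        json.dump(profile, f, indent=2)

def load_profile(path):
    if not os.path.exists(path):
        return None                                 # fall back to recalibration
    with open(path) as f:
        return json.load(f)

save_profile("user_profile.json",
             hand_shape_params={"skin_hue": 0.08, "hand_span_mm": 190},
             interaction_box={"x": [-300, 300], "y": [-200, 200], "z": [400, 800]},
             click_params={"push_mm": 42, "click_s": 0.45})
print(load_profile("user_profile.json")["click"])
```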
  • In this way the interaction scheme can accommodate the interaction habits of any user, providing different users with a personalized and precise interactive experience.
  • The embodiment of the present invention also proposes a user interaction method.
  • FIG. 2 is a schematic flowchart of a user interaction method according to an embodiment of the present invention.
  • Step 201 Provide a three-dimensional interface display signal.
  • Step 202 Display a three-dimensional interface according to the three-dimensional interface display signal to the user.
  • Step 203 Capture the limb spatial movement information made by the user browsing the three-dimensional interface.
  • Step 204 Determine an interaction operation command corresponding to the user's limb space movement information, and provide a three-dimensional interface display signal corresponding to the execution of the interaction operation command in real time.
  • Capturing the limb spatial movement information made by the user browsing the three-dimensional interface specifically means capturing the user's precise positioning operations and/or non-precise positioning operations on the three-dimensional interface.
  • The precise positioning operations may include: clicking a button on the three-dimensional interface or selecting a specific area on the interactive interface.
  • The non-precise positioning operations may include: a hand swiping from right to left, from left to right, from top to bottom or from bottom to top, the two hands moving apart or together, and other specific regular gesture trajectories.
  • The method further comprises an initial calibration step that acquires the user's interaction habits in advance.
  • The initial calibration step of acquiring user interaction habits in advance includes:
  • The user is instructed, through the three-dimensional interface, to extend both hands into the detection area of the motion capture unit; the user's hands are image-sampled and recognized, and the shape parameters used to identify the user's hands are established.
  • The user defines the spatial extent of hand movement during interaction, for example by placing a hand in each corner of the space (upper left, upper right, lower left, lower right, etc.) and at near and far points; the spatial extent of the user's hand interaction is determined by image analysis. The information operation processing unit then determines the relative position of the user's hand at each point from the data transmitted by the motion capture unit during calibration, fixing the key parameters of the recognition interaction criteria, and instructs the user to perform several one- or two-handed click and drag operations to extract the key parameters of the corresponding interaction intent criteria.
  • The initial calibration process then ends, and the results are saved as a callable information file; in the future the user can directly load the corresponding file.
  • The method further comprises: sensing the user's head motion information; determining the user's real-time viewing angle from it; and providing, in real time, the three-dimensional interface display signal for that viewing angle.
  • The method further comprises: capturing the user's voice input information; determining a voice operation command from it; and providing the three-dimensional interface signal that results from executing the voice command.
  • The information operation processing unit provides a three-dimensional interface display signal to the three-dimensional stereoscopic display unit; the three-dimensional stereoscopic display unit displays a three-dimensional interface to the user according to that signal; the motion capture unit captures the limb spatial movement information made by the user browsing the three-dimensional interface and sends it to the information operation processing unit.
  • the user can be immersed in a private and interesting virtual information interaction space, and natural information interaction is performed in the space.
  • The embodiment of the present invention proposes a natural interaction solution integrating voice interaction, gesture interaction, natural viewing-angle change and the like.
  • users can naturally interact with elements in the 3D virtual information natural interaction space interface to obtain information or entertain, creating an immersive, unique and attractive user experience.
  • an embodiment of the present invention provides a three-dimensional virtual information natural interaction interface of a natural interaction technology, and the interaction interface includes a plurality of three-dimensional solid elements that can perform natural interaction.
  • the interface is designed to provide interactive feedback to the user in real time through sound, light and shadow changes of interactive elements, and enhance the user's natural interaction fun and experience.
  • the user can naturally control the virtual pointer corresponding to the user's hand in the natural interactive interface of the three-dimensional virtual information, and naturally interact with the natural interactive interface of the three-dimensional virtual information.
  • the embodiments of the present invention can be applied to any display device and an interactive interface.
  • the solution of adding a pointer corresponding to the user's hand in real time on the interactive interface can facilitate the user to perform a series of precise touch interaction operations.
  • This kind of interaction is very natural, conforms to humans' basic gesture interaction patterns, and reduces the user's cost of learning to operate the device. It suits the natural interaction of the human body with portable information processing hardware, enabling people to concentrate more on the information they care about.
  • the embodiment of the present invention can be applied to any human-machine interaction information device, and its versatility will bring great convenience to people.

Abstract

Disclosed are a user interaction system and method. The system comprises an information operation processing unit which is used for providing a 3D interface display signal for a 3D display unit; the 3D display unit which is used for displaying a 3D interface for a user according to the 3D interface display signal; and an action acquisition unit which is used for acquiring limb spatial movement information generated when the user browses the 3D interface, and sending the limb spatial movement information to the information operation processing unit. After the embodiments of the present invention are applied, the user can be immersed in a private and interesting information interaction space and conduct natural information interaction in this space, thereby enhancing the user's interaction experience. Based on this, a large quantity of meaningful applications can be derived, so that the user interacts with the digital information world more naturally.

Description

TECHNICAL FIELD

The present invention relates to the field of electronic application technologies and, in particular, to a user interaction system and method.

BACKGROUND
In 1959 the American scholar B. Shackel first proposed the concept of human-computer interaction engineering. Since the late 1990s, with the rapid development and popularization of high-speed processing chips, multimedia technology and Internet technology, human-computer interaction research has focused on intelligent interaction, multimodal (multi-channel) multimedia interaction, virtual interaction and human-machine cooperative interaction, that is, on human-centered human-computer interaction. With the advancement of society and the arrival of the information explosion era, people increasingly rely on all kinds of consumer electronic devices (such as mobile terminals and personal digital assistants (PDAs)) to obtain information, for example calling to communicate with others, browsing the web for news and checking e-mail. Today's widely used human-computer interaction relies on traditional hardware devices such as the keyboard and mouse, as well as the touch screens that have become popular in recent years.
People are not satisfied with existing human-computer interaction methods; a new generation of human-computer interaction is expected to be as natural, accurate and fast as interaction between people. Thus in the 1990s human-computer interaction research entered the multimodal stage, known as Human-Computer Nature Interaction (HCNI) or Human-Machine Nature Interaction (HMNI).
Virtual reality technology uses computer simulation to generate a virtual three-dimensional world, providing the user with simulated visual, auditory, tactile and other sensory input, so that the user can observe things in the three-dimensional space in a timely, unrestricted way, as if physically present, and interact with the elements of the virtual world. Virtual reality has a virtuality that transcends reality. It is a new computer technology that developed alongside multimedia technology, using 3D graphics generation, multi-sensor interaction and high-resolution display technologies to generate a realistic three-dimensional virtual environment.
However, how to apply virtual reality technology to the many applications of user interaction remains a huge challenge.

SUMMARY OF THE INVENTION

In view of this, an embodiment of the present invention provides a user interaction system to improve the user experience. The embodiment of the invention also proposes a user interaction method to enhance the user experience.
The technical solutions of the present invention are as follows:
A user interaction system includes an information operation processing unit and a three-dimensional stereoscopic display unit. The information operation processing unit provides the three-dimensional stereoscopic display unit with a three-dimensional interface display signal; the three-dimensional stereoscopic display unit displays a three-dimensional interface to the user according to that signal;

a motion capture unit captures the limb spatial movement information made by the user browsing the three-dimensional interface and sends it to the information operation processing unit;

the information operation processing unit is further configured to determine the interactive operation command corresponding to the user's limb spatial movement information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional interface display signal that results from executing that command.
The information operation processing unit is a mobile terminal, a computer or a cloud-computing-based information service platform. The system further includes a viewing-angle sensing unit worn on the user's head;

the viewing-angle sensing unit senses the user's head motion information and sends it to the information operation processing unit;

the information operation processing unit is further configured to determine the user's real-time viewing-angle information from the head motion information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional interface display signal rendered for that real-time viewing angle.
The system further includes a sound processing unit;

the sound processing unit captures the user's voice input information, sends it to the information operation processing unit, and plays to the user the voice playback information provided by the information operation processing unit; the information operation processing unit is further configured to determine a voice operation command from the voice input information, to provide the three-dimensional stereoscopic display unit with the three-dimensional interface signal that results from executing the voice operation command, and to provide the sound processing unit with voice playback information related to the three-dimensional interface display signal.
The three-dimensional stereoscopic display unit is a head-mounted device.

The three-dimensional stereoscopic display unit and the viewing-angle sensing unit are physically integrated into a single portable, user-wearable device.

The three-dimensional stereoscopic display unit, the viewing-angle sensing unit and the sound processing unit are physically integrated into a single portable, user-wearable device.

The three-dimensional stereoscopic display unit, the viewing-angle sensing unit and the motion capture unit are physically integrated into a portable head-mounted device or a portable wearable device.

The three-dimensional stereoscopic display unit, the viewing-angle sensing unit, the sound processing unit and the motion capture unit are physically integrated into a portable head-mounted device or a portable wearable device.
The motion capture unit is a portable wearable device, or is fixed at a position outside the user's body from which it can capture the user's actions.
The information operation processing unit is further configured to display a spatial virtual pointer element corresponding to the user's hand on the three-dimensional interface;

the motion capture unit captures in real time the hand position and shape information the user produces while browsing the three-dimensional interface;

the information operation processing unit determines, from the user's hand position and shape information, the corresponding interactive operation command and outputs the image signal of the spatial virtual pointer element in real time, so that the motion trajectory of the spatial virtual pointer element on the three-dimensional interface stays consistent with the trajectory of the user's hand, while the user is provided in real time with the three-dimensional interface display signal that results from executing the command.
A user interaction method includes:

providing a three-dimensional interface display signal;

displaying a three-dimensional interface to the user according to the three-dimensional interface display signal;

capturing the limb spatial movement information made by the user browsing the three-dimensional interface; and

determining the interactive operation command corresponding to the user's limb spatial movement information, and providing in real time the three-dimensional interface display signal that results from executing that command.
Capturing the limb spatial movement information made by the user browsing the three-dimensional interface consists of capturing the precise positioning operations and/or non-precise positioning operations made while browsing the interface.

The precise positioning operations include: the user's hand moving to control the spatial virtual pointer element freely in three dimensions within the three-dimensional interface; recognizing two different states of the user's hand and, on a state change, the position of the corresponding spatial virtual pointer element in the three-dimensional interface, where the two hand states are a fist and an extended index finger; clicking a button of the three-dimensional interface; or selecting a specific area on the three-dimensional interface.

The non-precise positioning operations include: hand hovering, a hand swiping from right to left, from left to right, from top to bottom or from bottom to top, the two hands moving apart or together, or waving.
The method further includes an initial calibration step that acquires the user's interaction habits in advance.
The method further includes:

sensing the user's head motion information;

determining the user's real-time viewing angle according to the head motion information;

providing, in real time, the three-dimensional stereoscopic interface display signal under the user's real-time viewing angle.
The method further includes:

capturing the user's voice input information; determining a voice operation command according to the voice input information;

providing the three-dimensional stereoscopic interface signal produced by executing the voice operation command.
As can be seen from the above technical solutions, the embodiments of the present invention propose a novel user interaction system and method. With the embodiments of the present invention, the user can be immersed in a private and engaging virtual information interaction space and interact with information naturally within it. Many products developed on the basis of these embodiments will be commercially competitive consumer electronics. The distinctive natural interaction solution of the embodiments will promote products and applications in consumer-grade virtual reality and augmented reality, substantially improve the user's interactive experience, and give rise to a series of meaningful applications, thereby greatly enhancing the user experience.
Moreover, the embodiments of the present invention propose a natural interaction solution that integrates voice interaction, gesture interaction, and natural viewing-angle changes. With this solution the user can interact naturally with the elements of the three-dimensional stereoscopic virtual information interaction space to obtain information or for entertainment, creating an immersive, distinctive, and attractive user experience.
In addition, the embodiments of the present invention propose a three-dimensional stereoscopic virtual information natural interaction interface based on natural interaction technology, containing many three-dimensional elements that can be interacted with naturally. During interaction, the interface is designed to give the user real-time feedback through sound and through light-and-shadow changes of the interactive elements, enhancing the enjoyment and experience of natural interaction. With the proposed solution, the user can naturally use a hand to control the virtual pointer corresponding to that hand in the interface and interact with the interface naturally.
Furthermore, the embodiments of the present invention can be used with any display device and interactive interface. Adding a pointer that corresponds to the user's hand in real time lets the user perform a series of precise touch-style interactions. This style of interaction is very natural, matches basic human gesture patterns, and lowers the user's cost of learning to operate the device. It also fits a split design in which natural human operation is separated from the portable information-processing hardware, letting people concentrate on the information they care about rather than on the hardware itself.
Beyond that, the embodiments of the present invention can be applied to any human-computer interaction information device, and this generality will bring great convenience. BRIEF DESCRIPTION OF THE DRAWINGS: FIG. 1 is a schematic structural diagram of a user interaction system according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a user interaction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional stereoscopic virtual information natural interaction interface according to an embodiment of the present invention; FIG. 4 is a schematic diagram of a three-dimensional stereoscopic virtual information natural interaction interface according to another embodiment of the present invention. DETAILED DESCRIPTION: To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the accompanying drawings and specific examples.
Generally speaking, a user needs special interactive devices to enter a virtual environment.
A complete virtual reality system may consist of a virtual environment; a virtual environment processor centered on a high-performance computer; a visual system centered on a head-mounted display; an auditory system centered on speech recognition, sound synthesis, and sound localization; body position and posture tracking devices built mainly around orientation trackers, data gloves, and data suits; and functional units such as taste, smell, touch, and force feedback systems.
In one embodiment, the present invention proposes a complete, new consumer-grade virtual reality device solution. With this solution, a user wearing a head-mounted three-dimensional stereoscopic display and related sensing devices that support natural interaction can be fully immersed, from a first-person perspective, in a new immersive three-dimensional human-machine natural interaction interface, and can interact with it naturally through voice, gestures, changes of head viewing angle, and the like. FIG. 1 is a schematic structural diagram of a user interaction system according to an embodiment of the present invention.
As shown in FIG. 1, the system includes an information operation processing unit 101, a three-dimensional stereoscopic display unit 102, and a motion capture unit 103, wherein:
the information operation processing unit 101 is configured to provide a three-dimensional stereoscopic interface display signal to the three-dimensional stereoscopic display unit 102;

the three-dimensional stereoscopic display unit 102 is configured to display a three-dimensional stereoscopic interface to the user according to the display signal;

the motion capture unit 103 is configured to capture limb-space movement information made by the user while browsing the three-dimensional stereoscopic interface, and to send it to the information operation processing unit 101;

the information operation processing unit 101 is further configured to determine the interactive operation command corresponding to the user's limb-space movement information, and to provide the display unit 102 in real time with the three-dimensional stereoscopic interface display signal produced by executing that command.
Preferably, the system further includes a viewing-angle sensing unit 104 worn on the user's head.

The viewing-angle sensing unit 104 is configured to sense the user's head motion information and send it to the information operation processing unit 101;

the information operation processing unit 101 is further configured to determine the user's real-time viewing angle from the head motion information, and to provide the display unit 102 in real time with the three-dimensional stereoscopic interface display signal under that viewing angle.
Preferably, the system further includes a sound processing unit 105.

The sound processing unit 105 is configured to capture the user's voice input information and send it to the information operation processing unit 101, and to play to the user the voice playback information provided by the information operation processing unit 101;

the information operation processing unit 101 is further configured to determine a voice operation command from the voice input information, to provide the display unit 102 with the three-dimensional stereoscopic interface signal produced by executing that command, and to provide the sound processing unit 105 with voice playback information related to the display signal.
The three-dimensional stereoscopic display unit 102 may be implemented as a head-mounted device, preferably a glasses-type three-dimensional stereoscopic display device. By controlling small differences between the images shown on the micro-displays for the left and right eyes, such a device creates parallax between what the two eyes see; the brain interprets this binocular parallax to judge distance and produce stereoscopic vision.
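As a minimal sketch of how such a per-eye offset could be derived, the Python fragment below shifts a single virtual camera by half an assumed interpupillary distance along its right axis; the function name, axis convention, and the 63 mm default are illustrative assumptions, not values fixed by this disclosure.

```python
# Illustrative sketch (not from the disclosure): deriving left/right eye view
# positions for a glasses-type stereoscopic display by offsetting a single
# virtual camera along its right axis by half the interpupillary distance.
import numpy as np

def eye_positions(camera_pos, right_axis, ipd_m=0.063):
    """Return (left_eye, right_eye) world positions for stereo rendering.

    camera_pos : (3,) array, position of the virtual head/camera.
    right_axis : (3,) vector pointing to the viewer's right.
    ipd_m      : assumed interpupillary distance in metres.
    """
    right_axis = np.asarray(right_axis, dtype=float)
    right_axis /= np.linalg.norm(right_axis)
    offset = 0.5 * ipd_m * right_axis
    return camera_pos - offset, camera_pos + offset

left, right = eye_positions(np.array([0.0, 1.6, 0.0]), [1.0, 0.0, 0.0])
# Each eye position would feed its own render pass; the small horizontal
# difference between the two rendered images is what produces the parallax.
```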
The information operation processing unit 101 may be any device capable of providing a three-dimensional stereoscopic interface display signal, such as a mobile terminal, a computer, or a cloud-computing-based information service platform.

The information operation processing unit 101 can process the corresponding interaction commands through its built-in operating system to complete a given operation (for example, dialing a phone or browsing a web page), update the corresponding three-dimensional stereoscopic interface display signal in real time over a wired or wireless link, and output that signal to the three-dimensional stereoscopic display unit 102 for display.
Preferably, the communication between the information operation processing unit 101 and the three-dimensional stereoscopic display unit 102 can take many concrete forms, including but not limited to: wireless broadband transmission, WiFi transmission, Bluetooth transmission, infrared transmission, mobile communication transmission, USB transmission, or wired transmission.

Likewise, the communication between the information operation processing unit 101 and the motion capture unit 103, the viewing-angle sensing unit 104, or the sound processing unit 105 can take many concrete forms, including but not limited to: wireless broadband transmission, WiFi transmission, Bluetooth transmission, infrared transmission, mobile communication transmission, USB transmission, or wired transmission.
The sound processing unit 105 may include an array sound-collection sensor, a speaker module, and a data transmission module. It can capture the user's voice input with the sound-collection sensor and transmit the captured voice data to the information operation processing unit 101 for further recognition processing; it also receives and processes various voice signals from the information operation processing unit 101 so as to provide the user with audible feedback.
Specifically, the motion capture unit 103 may include an optical depth sensor and a data transmission module. Depth images of one or both of the user's hands are obtained in real time by the optical depth sensor and transmitted in real time through the data transmission module to the information operation processing unit 101, which analyzes them to obtain the user's gesture interaction intent.
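One simple way such an analysis could begin, sketched below under the assumption that the hand is the object nearest the depth sensor, is to keep only the nearest depth slice of a frame and take its centroid; the band width and the synthetic frame are illustrative, and the disclosure does not specify its algorithms at this level of detail.

```python
# Hypothetical sketch: isolating a hand candidate in a single depth frame.
import numpy as np

def hand_candidate(depth_mm: np.ndarray, band_mm: float = 150.0):
    """Return (row, col, depth) of the nearest blob's centroid, or None.

    depth_mm: HxW array of depth values in millimetres (0 = no reading).
    band_mm : depth slice kept in front of the nearest valid pixel, on the
              assumption that the hand is the object closest to the sensor.
    """
    valid = depth_mm > 0
    if not valid.any():
        return None
    nearest = depth_mm[valid].min()
    mask = valid & (depth_mm <= nearest + band_mm)   # nearest depth slice
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean(), float(depth_mm[mask].mean())

frame = np.full((240, 320), 1500.0)      # synthetic background at 1.5 m
frame[100:140, 150:180] = 600.0          # synthetic "hand" at 0.6 m
print(hand_candidate(frame))
```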
Specifically, the viewing-angle sensing unit 104 may include micro-electronic sensors such as a gyroscope, an accelerometer, and an electronic compass, together with a data transmission module. It can be fixed to the user's head to sense head motion information and transmit the corresponding data through the data transmission module to the information operation processing unit 101, which further analyzes it to obtain the user's real-time viewing direction and its changes.
The information operation processing unit 101 receives, in real time, the data provided by the sound processing unit 105, the motion capture unit 103, and the viewing-angle sensing unit 104, and updates in real time the three-dimensional stereoscopic interface display signal provided to the three-dimensional stereoscopic display unit 102.

The information operation processing unit 101 has the required computing power and can communicate with each of the other units. It receives and analyzes the data from the sound processing unit 105, the motion capture unit 103, and the viewing-angle sensing unit 104, analyzes the user's interaction intent in real time, and, combined with the system's dedicated three-dimensional stereoscopic natural interaction interface, uses three-dimensional graphics generation technology to render in real time the three-dimensional stereoscopic virtual environment as updated under the current user viewing angle and natural interactions, converting it into a three-dimensional stereoscopic display signal that is transmitted in real time to the display unit 102.
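The overall flow can be pictured as a fixed-rate loop that polls each sensing unit, updates the interaction state, and pushes a freshly rendered stereo frame to the display unit. The sketch below shows only that structure; every device call is a stub with an invented name.

```python
# Schematic main loop of the information operation processing unit, with all
# device I/O replaced by stubs. The point is the structure: poll every
# sensing unit, update interaction state, re-render, push the frame.
import time

def read_head_pose():  return {"yaw": 0.0, "pitch": 0.0}        # stub
def read_hand_frame(): return None                               # stub
def read_voice():      return None                               # stub
def update_state(state, head, hand, voice):                      # stub
    state["head"] = head
    return state
def render_stereo(state): return b""                             # stub
def send_to_display(frame): pass                                 # stub

def run(hz=60):
    state, period = {}, 1.0 / hz
    for _ in range(3):                 # a real loop would run until exit
        t0 = time.monotonic()
        state = update_state(state, read_head_pose(),
                             read_hand_frame(), read_voice())
        send_to_display(render_stereo(state))
        time.sleep(max(0.0, period - (time.monotonic() - t0)))

run()
```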
In a specific implementation, to suit various application scenarios, the three-dimensional stereoscopic display unit 102 may be a portable head-mounted device, and the viewing-angle sensing unit 104 may likewise be a portable head-mounted device.
Optionally, the three-dimensional stereoscopic display unit 102 and the viewing-angle sensing unit 104 may be integrated into one portable head-mounted device.

Optionally, the three-dimensional stereoscopic display unit 102, the viewing-angle sensing unit 104, and the sound processing unit 105 may be integrated into one portable head-mounted or wearable device.

Optionally, the three-dimensional stereoscopic display unit 102, the viewing-angle sensing unit 104, and the gesture recognition module may be integrated into one portable head-mounted or wearable device.

Optionally, the three-dimensional stereoscopic display unit 102, the viewing-angle sensing unit 104, the motion capture unit 103, and the sound processing unit 105 may be integrated into one portable head-mounted or wearable device.
Optionally, the motion capture unit 103 may be a portable wearable device, or may be fixed outside the user's body at a position from which the user's movements can be captured.

The motion capture unit 103 may be worn on the chest, or even on the head (for example, on glasses), and so on, so as to capture body movements conveniently.
In one embodiment, once the user puts on the three-dimensional stereoscopic display unit 102 and the viewing-angle sensing unit 104 and they are connected to the information operation processing unit 101 and the other units, the user feels as though entering a naturally interactive virtual three-dimensional information space.
FIG. 3 is a schematic diagram of a three-dimensional stereoscopic virtual information natural interaction interface according to an embodiment of the present invention; FIG. 4 is a schematic diagram of such an interface according to another embodiment of the present invention.
This three-dimensional information space can contain much user-defined content, for example the user's virtual pets, favorite news pages, mail, and so on. The entire virtual environment is rendered in real time by the information operation processing unit 101 using three-dimensional graphics generation technology and presented to the user in dynamic three-dimensional stereoscopic form.
When the user raises or turns the head or otherwise changes viewing angle, the sensors in the viewing-angle sensing unit 104 acquire the related data in real time and transmit quantified measurements of the user's motion state to the information operation processing unit 101, which analyzes them to obtain the user's real-time viewing angle. This real-time viewing-angle information is used to render the three-dimensional virtual environment under the new viewing angle, so that what the user sees adjusts in real time to the changed viewing angle. The user thus feels able to look around the virtual space naturally by raising or turning the head, with the sensation of doing so as close as possible to performing the same motion in real space. The user can also interact with the visible three-dimensional stereoscopic virtual space by voice. For example, when the user says "menu", the voice signal is captured by the array microphone of the sound processing unit 105, converted into a data signal, and transmitted to the information operation processing unit 101 for analysis; a statistical matching analysis algorithm concludes that the user's interaction intent is to open the interaction menu, whereupon the information operation processing unit 101 directs the three-dimensional stereoscopic virtual space interaction interface to execute that command and an interaction menu appears in the user's field of view.
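Returning to the head-driven view update described at the start of this passage, a minimal sketch converts head yaw and pitch into a camera forward vector for the next rendered frame; the axis conventions below are assumptions for illustration.

```python
# Minimal sketch: yaw/pitch angles from the head-mounted sensors become a
# camera forward vector that the renderer would use for the next frame.
import math

def forward_vector(yaw_rad: float, pitch_rad: float):
    """Unit forward vector for a yaw (around +y) and pitch (around +x)."""
    cp = math.cos(pitch_rad)
    return (math.sin(yaw_rad) * cp,      # x: right
            math.sin(pitch_rad),         # y: up
            -math.cos(yaw_rad) * cp)     # z: toward the viewer (right-handed)

# Turning the head 30 degrees to the right and 10 degrees up:
print(forward_vector(math.radians(30), math.radians(10)))
```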
When the user's hand enters the detection range of the motion capture unit 103, the motion capture unit 103 captures and transmits the depth image sequence of the hand to the information operation processing unit 101 in real time; the information operation processing unit 101 analyzes this sequence with a series of software algorithms to obtain the hand's movement trajectory and from it infers the user's gesture interaction intent.

At the same time, a corresponding virtual pointer in the user's field of view follows the motion and position of the user's hand, giving the user gesture feedback.
By way of example, the system workflow is described below for a user raising a hand, moving it, and making a click motion.
When the user raises a hand into the detection range of the motion capture unit 103, the motion capture unit 103 acquires the depth image sequence of the current field of view in real time and transmits it to the information operation processing unit 101. A matching recognition algorithm in the information operation processing unit 101 analyzes the received depth image sequence in real time; once it detects the user's hand, it registers the hand as a valid feature-tracking object and uses its depth image data to obtain its three-dimensional position in real time. A virtual pointer corresponding to the hand then appears in the three-dimensional stereoscopic virtual space in the user's field of view, and the user controls the movement of that pointer just as naturally as moving the hand itself.
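One plausible form of this hand-to-pointer correspondence, assuming a calibrated interaction box such as the one established by the calibration procedure described later, is a simple normalization of the hand's sensor coordinates; the ranges below are examples only.

```python
# Illustrative mapping from a tracked hand position to the virtual pointer.
def hand_to_pointer(hand_xyz, box_min, box_max):
    """Normalise a hand position inside the interaction box to [0, 1]^3.

    hand_xyz, box_min, box_max: (x, y, z) tuples in sensor coordinates.
    The pointer simply reproduces the hand's trajectory at interface scale.
    """
    return tuple(
        min(1.0, max(0.0, (h - lo) / (hi - lo)))
        for h, lo, hi in zip(hand_xyz, box_min, box_max)
    )

# A hand at the centre of a 60 x 40 x 30 cm box maps to the interface centre:
print(hand_to_pointer((0.30, 0.20, 0.15), (0, 0, 0), (0.60, 0.40, 0.30)))
```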
Preferably, there is a virtual button at the upper left of the user's field of view. The user moves the virtual pointer onto the button by hand; the pointer then changes form, prompting the user that a click is possible at that position. The user then makes a click motion, or changes the hand from the only-index-finger-extended state to the fist state. The information operation processing unit 101 analyzes the depth image sequence from the motion capture unit 103 to extract the hand's trajectory and shape change and, through a series of redundant motion-matching algorithms, concludes that the user's intent is a click/confirm operation; it then directs the three-dimensional stereoscopic virtual interaction space interface to execute that intent and renders the result to the user in real time.
In the embodiments of the present invention, to overcome the limitation of existing electronic devices (such as portable electronic devices) to physical touch screens or keyboards as interaction means, the information operation processing unit 101 provides the three-dimensional stereoscopic display unit 102 with a three-dimensional stereoscopic interface display signal, and interaction is achieved by recognizing the limb-space movement information the user makes with respect to that interface.
Moreover, for the aforementioned three-dimensional stereoscopic interface, the embodiments of the present invention propose a human-oriented interaction scheme based on recognizing body movements (preferably hand gestures), which seamlessly fuses the interface with the body's control information. By optimizing the recognition of a set of basic, typical operations, a stable interaction development platform is formed on which developers can build a wide variety of applications.
Furthermore, the embodiments of the present invention provide a precise interaction solution through which the user can interact with any interactive interface by touch-style operations that match the body's natural way of interacting.
In one embodiment, the motion capture unit 103 is preferably an infrared depth camera device, in which case the user's limb-space movement information is an image signal containing depth-of-field information captured by that device.

The information operation processing unit 101 receives and analyzes the depth-of-field image information of the user's limb movements from the motion capture unit 103, interprets the user's interaction intent (that is, the interactive operation command) through software algorithms, and provides the three-dimensional stereoscopic display unit in real time with the display signal produced by executing that command.
In one embodiment, the information operation processing unit 101 first analyzes the received real-time image information to obtain the real-time position of the user's limb, and stores the historical position information of the limb (for example, the hand) over a certain length of time for further judgment of the user's interaction intent. The intents recognized in this way include a simple movement operation with one or both hands (the default), a one- or two-handed drag operation, and one- or two-handed click, hover, and wave operations, among others.
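A sketch of this buffering step: a short, timestamped history of hand positions is kept and the recent trajectory is classified. Only "move" and "hover" are shown, and the window length and hover radius are invented thresholds.

```python
# Sketch: ring buffer of timestamped hand positions plus a toy classifier.
from collections import deque
import math, time

class HandHistory:
    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.samples = deque()            # (timestamp, (x, y, z)) pairs

    def add(self, pos, t=None):
        t = time.monotonic() if t is None else t
        self.samples.append((t, pos))
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()        # keep only the last window_s

    def classify(self, hover_radius=0.02):
        if len(self.samples) < 2:
            return "idle"
        pts = [p for _, p in self.samples]
        spread = max(math.dist(pts[0], p) for p in pts)
        return "hover" if spread < hover_radius else "move"

h = HandHistory()
for i in range(10):
    h.add((0.30 + 0.01 * i, 0.20, 0.15), t=i * 0.1)
print(h.classify())   # -> "move"
```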
In one embodiment, the information operation processing unit 101 is further configured to display, on the three-dimensional stereoscopic interface, a spatial virtual pointer element corresponding to the user's hand, and the motion capture unit 103 is configured to capture, in real time, hand position and posture information generated by the user in response to browsing the interface;

the information operation processing unit 101 is configured to determine, from the hand position and posture information, the corresponding interactive operation command, and to output the image signal of the spatial virtual pointer element in real time, so that the pointer's motion trajectory on the interface stays consistent with the hand's trajectory, and to provide the user in real time with the display signal produced by executing that command.
For example, if the user's hand sweeps from right to left across the field of view of the motion capture unit 103, the motion capture unit 103 records and sends the image data to the information operation processing unit 101 in real time. The information operation processing unit 101 analyzes the image data with a series of software algorithms, finds that the gesture trajectory is a right-to-left swipe, maps it by software algorithm to a particular interactive command (for example: return to the previous page), processes that command data stream, and gives feedback to the user.
In actual interaction, the information operation processing unit 101 can recognize a series of interactive commands, for example gestures for "start interaction / confirm / select / click", "move (up, down, left, right, forward, backward)", "zoom in", "zoom out", "rotate", and "exit / end interaction"; these are converted in real time into interactive operation commands, the corresponding processing is executed, and the resulting interface state is output to the user.
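Such a command set is naturally realized as an editable table from recognized gesture labels to command identifiers, for instance as below; the labels and command names are illustrative, since the disclosure does not fix a particular encoding.

```python
# One way to realise the gesture-to-command table implied above.
GESTURE_COMMANDS = {
    "swipe_right_to_left": "back",
    "swipe_left_to_right": "forward",
    "click":               "select",
    "hands_apart":         "zoom_in",
    "hands_together":      "zoom_out",
    "wave":                "end_interaction",
}

def dispatch(gesture: str) -> str:
    """Translate a recognised gesture into an interactive operation command."""
    return GESTURE_COMMANDS.get(gesture, "no_op")

print(dispatch("swipe_right_to_left"))   # -> "back"
```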
An exemplary complete interaction process is described below to better illustrate the embodiments of the present invention.

Suppose the user's hand sweeps from right to left across the field of view of the motion capture unit 103, and the body movement "user's hand moves from right to left" has been preset to correspond to the interactive operation command "return to the previous page" (the correspondence between body movements and interactive operation commands can be stored in advance in the information operation processing unit 101). First, the motion capture unit 103 records and sends the image data to the information operation processing unit 101 in real time. The information operation processing unit 101 analyzes the image data with a series of software algorithms, determines that the gesture trajectory is a right-to-left swipe, determines by software algorithm that this gesture corresponds to the "return to the previous page" command, executes that command, and outputs the display state after "return to the previous page" has completed.
Preferably, the information operation processing unit 101 has a self-learning capability and a degree of user-defined extensibility: the user can train the system's gesture recognition to match personal gesture habits and can customize gestures and operation styles according to personal preference. The user interaction recognition software presets many parameters, such as skin-color information and arm-length information; initially these take values based on statistical averages so as to suit most users, and the software algorithms implement self-learning, meaning that as the user keeps using the system, the software revises some of these parameters according to the user's own characteristics so that recognition becomes increasingly tailored to that user, improving the system's gesture recognition capability.
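One way the self-learning revision of parameters could work, sketched here as an assumption rather than the disclosed algorithm, is an exponential moving average that drifts each stored parameter from its population average toward the values measured for the current user.

```python
# Hedged sketch of self-learning: parameter names and the 0.1 learning rate
# are invented for illustration only.
def adapt(params: dict, observed: dict, rate: float = 0.1) -> dict:
    """Nudge each stored parameter toward the value observed for this user."""
    return {k: (1.0 - rate) * v + rate * observed.get(k, v)
            for k, v in params.items()}

params = {"arm_length_m": 0.65, "click_depth_m": 0.08}   # population averages
for _ in range(20):                                       # repeated sessions
    params = adapt(params, {"arm_length_m": 0.72, "click_depth_m": 0.05})
print(params)   # values have moved most of the way toward this user's own
```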
In addition, the user interaction recognition software should provide a user-defined operation interface, for example letting a favorite gesture trajectory stand for a user-defined operation command, giving the system personalized, customizable characteristics.
More specifically, the user's interactions with the three-dimensional stereoscopic interface fall into two classes: one is recognizing imprecise positioning operations, such as the commands "turn the page", "forward", or "back"; the other is carrying out precise positioning operations, such as clicking a button in the interface or selecting a specific region.

Precise positioning operations may include: the user's hand moving to steer the spatial virtual pointer element freely in the three dimensions of the interface; recognizing two different states of the user's hand and, upon a state change, the position in the interface of the spatial virtual pointer corresponding to the hand, where the two hand states include the fist state and the state with only the index finger extended; clicking a button of the interface; or selecting a specific region of the interface.
For the recognition of imprecise positioning operations, only the hand's movement trajectory needs to be recorded and analyzed. For example, imprecise positioning operations may include a hand swiping from right to left, from left to right, from top to bottom, or from bottom to top, the two hands moving apart or together, and other gesture trajectories with specific regularities.
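A minimal classifier in this spirit compares the net displacement between the first and last tracked positions against a travel threshold; the threshold value is an invented example.

```python
# Toy swipe classifier over a recorded hand trajectory.
def classify_swipe(path, min_travel=0.15):
    """path: list of (x, y) hand positions; returns a swipe label or None."""
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                       # too small to count as a swipe
    if abs(dx) >= abs(dy):
        return "swipe_left_to_right" if dx > 0 else "swipe_right_to_left"
    return "swipe_bottom_to_top" if dy > 0 else "swipe_top_to_bottom"

print(classify_swipe([(0.8, 0.5), (0.5, 0.52), (0.2, 0.5)]))
# -> "swipe_right_to_left"
```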
To recognize precise operations, the user's hand trajectory must be tracked in real time and mapped to the pointer element on the interface so as to determine which interface element the user intends to interact with precisely; the information operation processing unit 101 then analyzes the intent of the hand trajectory to derive the interactive command, achieving precise operation of the interface.

In the above process, the correspondence between the user's gestures and the specific interactive operation commands can be preset. Moreover, this correspondence is preferably editable, so that newly introduced interactive operation commands can be added, or the gesture assigned to a command can be changed to match the user's habits.
As a further example, the technical solution of the present invention is illustrated by recognizing a one-handed click. First, the user raises one hand (say, the right hand) into the capture range of the motion capture unit 103 and performs a forward click motion in the user's own style; suppose the whole click takes 0.5 seconds. The motion capture unit 103 transmits the captured images of the moving hand to the information operation processing unit 101 in real time; the information operation processing unit 101 accepts the incoming image data and stores the historical image data for a certain period, say 1 second. The software then analyzes the image data of the user's hand over the past second in real time to obtain the hand's spatial displacement over that second. A logic algorithm determines that in the first 0.5 seconds the hand's trajectory matches simple movement, while in the last 0.5 seconds the overall trajectory makes the probability that the user performed a click high enough (that is, the probability meets a preset threshold criterion), so a click operation is recognized. At that moment the information operation processing unit 101 has thus detected one click interaction: in the past second the user made a genuine movement operation for the first 0.5 seconds and began a click at the 0.5-second mark. The recognized click intent is compiled and immediately transmitted through the communication module to the display signal source. Note that in the 0.5 seconds before that moment, the hand's position was recognized as the default movement operation, so the pointer corresponding to the hand on the interface kept updating its position accordingly.
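The click criterion walked through above can be sketched as follows: over the stored one-second history, the most recent half second must show a forward (toward-sensor) push large enough to meet a preset threshold. The 4 cm value below is an invented stand-in for the unspecified criterion.

```python
# Sketch of the click criterion over a ~1 s history of hand positions.
def detect_click(history, push_m=0.04):
    """history: list of (t, (x, y, z)) covering ~1 s, z = distance to sensor.

    Returns True when the most recent half second shows a forward push
    large enough to satisfy the click criterion.
    """
    if not history:
        return False
    t_now = history[-1][0]
    recent = [p for t, p in history if t_now - t <= 0.5]   # last 0.5 s
    if len(recent) < 2:
        return False
    return (recent[0][2] - recent[-1][2]) >= push_m        # z decreased

hist = [(i * 0.1, (0.3, 0.2, 0.60)) for i in range(5)]          # plain moving
hist += [(0.5 + i * 0.1, (0.3, 0.2, 0.60 - 0.015 * i)) for i in range(5)]
print(detect_click(hist))   # -> True
```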
When the user first uses this interaction scheme, it is preferable to run a specific initial calibration procedure so that the system's software parameters match this user's interaction habits. The initial calibration procedure can include the following. First, the three-dimensional interactive display interface instructs the user to extend both hands into the detection area of the motion capture unit; images of the hands are sampled and recognized, and shape parameters for recognizing this user's hands are established. The interface then instructs the user to define the spatial range of the hands during interaction, for example by placing a hand at the corner points of the spatial plane and at a front and a back point; image sampling and analysis then determine the parameter values describing the spatial range in which the user's hands operate.

Then the information operation processing unit 101 analyzes the relative positions of the user's hand at each calibration point, as reported by the motion capture unit 103, to determine the scale-related key parameters of the recognition algorithms, and instructs the user to perform several one- or two-handed clicks and drags, from which the key parameters of the corresponding intent criteria are extracted. The initial calibration procedure then ends, and the results are saved as a loadable information file; thereafter the user can simply load the corresponding profile.
Determining the key parameters of the recognition algorithms through the initial calibration procedure lets this interaction scheme fit any user's interaction habits well and gives different users a personalized, accurate interactive experience.
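The calibration output could plausibly take the shape below: the sampled corner, front, and back positions are reduced to an axis-aligned interaction box plus a per-user click depth and saved as a loadable profile. The field names and file format are assumptions.

```python
# Hypothetical calibration result: interaction box + per-user click depth.
import json

def calibrate(samples):
    """samples: list of (x, y, z) hand positions collected at the prompts."""
    xs, ys, zs = zip(*samples)
    return {
        "box_min": [min(xs), min(ys), min(zs)],
        "box_max": [max(xs), max(ys), max(zs)],
        # a crude per-user click depth: a fraction of the usable z range
        "click_depth_m": 0.25 * (max(zs) - min(zs)),
    }

profile = calibrate([(0.0, 0.0, 0.40), (0.6, 0.0, 0.40),
                     (0.0, 0.4, 0.40), (0.6, 0.4, 0.40),
                     (0.3, 0.2, 0.25), (0.3, 0.2, 0.55)])
with open("user_profile.json", "w") as f:    # reloadable on the next session
    json.dump(profile, f)
print(profile)
```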
Based on the above analysis, the embodiments of the present invention also propose a user interaction method.

FIG. 2 is a schematic flowchart of a user interaction method according to an embodiment of the present invention.
As shown in FIG. 2, the method includes:

Step 201: providing a three-dimensional stereoscopic interface display signal.

Step 202: displaying a three-dimensional stereoscopic interface to the user according to the display signal. Step 203: capturing limb-space movement information made by the user while browsing the three-dimensional stereoscopic interface. Step 204: determining the interactive operation command corresponding to the limb-space movement information, and providing in real time the display signal produced by executing that command.
In one embodiment, capturing the limb-space movement information made by the user while browsing the three-dimensional stereoscopic interface is specifically: capturing the user's precise positioning operations and/or imprecise positioning operations on the interface. Precise positioning operations may include clicking a button on the interface or selecting a specific region of it, while imprecise positioning operations may include a hand swiping from right to left, from left to right, from top to bottom, or from bottom to top, the two hands moving apart or together, and other gesture trajectories with specific regularities.

Preferably, the method further includes an initial calibration step that acquires the user's interaction habits in advance, comprising:
First, the three-dimensional stereoscopic interface instructs the user to extend both hands into the detection area of the motion capture unit; images of the hands are sampled and recognized, and shape parameters for recognizing this user's hands are established. The interface then instructs the user to define the spatial range of the hands during interaction, for example by placing a hand at each corner of the space (upper left, upper right, lower left, lower right, and so on) and at a front and a back point; image sampling and analysis then determine the spatial range of the user's hand interactions. The information operation processing unit then analyzes the relative positions of the user's hand at each calibration point, as reported by the motion capture unit, to determine the scale-related key parameters of the recognition algorithms, and instructs the user to perform several one- or two-handed clicks and drags, from which the key parameters of the corresponding intent criteria are extracted. The initial calibration procedure then ends, and the results are saved as a loadable information file; thereafter the user can simply load the corresponding profile.
In one embodiment, the method further includes:

sensing the user's head motion information;

determining the user's real-time viewing angle according to the head motion information;

providing, in real time, the three-dimensional stereoscopic interface display signal under that real-time viewing angle.
In another embodiment, the method further includes:

capturing the user's voice input information;

determining a voice operation command according to the voice input information;

providing the three-dimensional stereoscopic interface signal produced by executing the voice operation command. In summary, the embodiments of the present invention propose a novel user interaction apparatus and method. In the embodiments, the information operation processing unit provides the three-dimensional stereoscopic display unit with a three-dimensional stereoscopic interface display signal; the display unit displays the interface to the user according to that signal; and the motion capture unit captures the limb-space movement information made by the user while browsing the interface and sends it to the information operation processing unit.
It can thus be seen that, with the embodiments of the present invention, the user can be immersed in a private and engaging virtual information interaction space and interact with information naturally within it.

Many products developed on the basis of these embodiments will be commercially competitive consumer electronics. The distinctive natural interaction solution of the embodiments will promote products and applications in consumer-grade virtual reality and augmented reality, greatly improve the user's interactive experience, and give rise to a series of meaningful applications, thereby greatly enhancing the user experience.
Moreover, the embodiments of the present invention propose a natural interaction solution that integrates voice interaction, gesture interaction, and natural viewing-angle changes. With this solution the user can interact naturally with the elements of the three-dimensional stereoscopic virtual information interaction space to obtain information or for entertainment, creating an immersive, distinctive, and attractive user experience.
In addition, the embodiments of the present invention propose a three-dimensional stereoscopic virtual information natural interaction interface based on natural interaction technology, containing many three-dimensional elements that can be interacted with naturally. During interaction, the interface is designed to give the user real-time feedback through sound and through light-and-shadow changes of the interactive elements, enhancing the enjoyment and experience of natural interaction.

With the proposed solution, the user can naturally use a hand to control the virtual pointer corresponding to that hand in the interface and interact with the interface naturally.
Furthermore, the embodiments of the present invention can be used with any display device and interactive interface; adding a pointer that corresponds to the user's hand in real time lets the user perform a series of precise touch-style interactions. This style of interaction is very natural, matches basic human gesture patterns, and lowers the user's cost of learning to operate the device. It also fits a split design in which natural human operation is separated from the portable information-processing hardware, letting people concentrate on the information they care about rather than on the hardware itself.
Beyond that, the embodiments of the present invention can be applied to any human-computer interaction information device, and this generality will bring great convenience.
The above are merely preferred examples of the embodiments of the present invention and are not intended to limit their scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the embodiments of the present invention shall fall within their scope of protection.

Claims

1. A user interaction system, characterized in that the system comprises an information operation processing unit, a three-dimensional stereoscopic display unit, and a motion capture unit, wherein:

the information operation processing unit is configured to provide a three-dimensional stereoscopic interface display signal to the three-dimensional stereoscopic display unit; the three-dimensional stereoscopic display unit is configured to display a three-dimensional stereoscopic interface to the user according to the display signal;

the motion capture unit is configured to capture limb-space movement information made by the user while browsing the three-dimensional stereoscopic interface, and to send it to the information operation processing unit;

the information operation processing unit is further configured to determine the interactive operation command corresponding to the user's limb-space movement information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional stereoscopic interface display signal produced by executing that command.
2. The user interaction system according to claim 1, characterized in that the information operation processing unit is a mobile terminal, a computer, or a cloud-computing-based information service platform.
3. The user interaction system according to claim 1, characterized in that the system further comprises a viewing-angle sensing unit worn on the user's head;

the viewing-angle sensing unit is configured to sense the user's head motion information and send it to the information operation processing unit;

the information operation processing unit is further configured to determine the user's real-time viewing angle according to the head motion information, and to provide the three-dimensional stereoscopic display unit in real time with the three-dimensional stereoscopic interface display signal under that real-time viewing angle.
4. The user interaction system according to claim 3, characterized in that the system further comprises a sound processing unit;

the sound processing unit is configured to capture the user's voice input information, send it to the information operation processing unit, and play to the user the voice playback information provided by the information operation processing unit; the information operation processing unit is further configured to determine a voice operation command according to the voice input information, to provide the three-dimensional stereoscopic display unit with the three-dimensional stereoscopic interface signal produced by executing that command, and to provide the sound processing unit with voice playback information related to the three-dimensional stereoscopic interface display signal.
5. The user interaction system according to claim 1, characterized in that the three-dimensional stereoscopic display unit is a head-mounted device.
6. The user interaction system according to claim 3, characterized in that the three-dimensional stereoscopic display unit and the viewing-angle sensing unit are physically integrated into one portable user-wearable whole.
7、 根据权利要求 4 所述的用户交互系统, 其特征在于, 所述三维立体显示 单元、 视角感知单元和声音处理单元在物理上集成为便携式用户可佩戴整体。  7. The user interaction system according to claim 4, wherein the three-dimensional stereoscopic display unit, the viewing angle sensing unit, and the sound processing unit are physically integrated as a portable user wearable unit.
8、 根据权利要求 3 所述的用户交互系统, 其特征在于, 所述三维立体显示 单元、 视角感知单元和动作捕获单元在物理上集成为便携式用户可头戴设备或便 携式可穿戴设备。  8. The user interaction system according to claim 3, wherein the three-dimensional stereoscopic display unit, the viewing angle sensing unit, and the motion capture unit are physically integrated into a portable user wearable device or a portable wearable device.
9、 根据权利要求 4 所述的用户交互系统, 其特征在于, 所述三维立体显示 单元、 视角感知单元、 声音处理单元和动作捕获单元在物理上集成为便携式周户 可头戴设备或便携式可穿戴设备。  9. The user interaction system according to claim 4, wherein the three-dimensional stereoscopic display unit, the viewing angle sensing unit, the sound processing unit, and the motion capture unit are physically integrated into a portable peripheral wearable device or portable Wear the device.
1 0、 根据权利要求 1所述的用户交互系统, 其特征在于, 所述动作捕获单元 为便携式穿戴式设备, 或固定于用户体外可捕获用户动作位置处。  The user interaction system according to claim 1, wherein the motion capture unit is a portable wearable device, or is fixed at a user action location outside the user.
11. The user interaction system according to claim 1, wherein:
the information operation processing unit is further configured to display, on the three-dimensional interface, a spatial virtual pointer element corresponding to the user's hand;
the motion capture unit is configured to capture, in real time, the hand position and posture information generated as the user browses the three-dimensional interface; and
the information operation processing unit is configured to determine, according to the hand position and posture information, the interactive operation command corresponding to that information, and to output the image signal of the spatial virtual pointer element in real time, so that the movement track of the spatial virtual pointer element on the three-dimensional interface remains consistent with the movement track of the user's hand, and to provide the user, in real time, with the three-dimensional interface display signal resulting from execution of the interactive operation command corresponding to the hand position and posture information.
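For illustration only: a minimal sketch of the claim-11 behavior, keeping a spatial virtual pointer's track consistent with the tracked hand and turning a hand-state change into a command at the pointer's current position. The coordinate mapping, the press/release convention, and every name here are assumptions.

```python
def update_pointer(pointer, hand_sample, workspace_scale=1.0):
    """Map tracked hand coordinates into interface coordinates so the
    pointer's movement track matches the hand's."""
    for axis in ("x", "y", "z"):
        pointer[axis] = hand_sample[axis] * workspace_scale

def command_on_state_change(prev_state, new_state, pointer):
    # One possible convention: fist -> extended index finger is a press,
    # the reverse transition is a release, both at the pointer position.
    at = (pointer["x"], pointer["y"], pointer["z"])
    if prev_state == "fist" and new_state == "index_finger":
        return {"type": "press", "at": at}
    if prev_state == "index_finger" and new_state == "fist":
        return {"type": "release", "at": at}
    return None

pointer = {"x": 0.0, "y": 0.0, "z": 0.0}
update_pointer(pointer, {"x": 0.12, "y": 0.40, "z": 0.95})
print(command_on_state_change("fist", "index_finger", pointer))
```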
12. A user interaction method, comprising:
providing a three-dimensional interface display signal;
displaying a three-dimensional interface to the user according to the three-dimensional interface display signal;
capturing limb-space movement information generated as the user browses the three-dimensional interface; and
determining the interactive operation command corresponding to the user's limb-space movement information, and providing, in real time, the three-dimensional interface display signal resulting from execution of that interactive operation command.
13. The user interaction method according to claim 12, wherein capturing the limb-space movement information generated as the user browses the three-dimensional interface comprises: capturing precise positioning operations and/or non-precise positioning operations made by the user while browsing the three-dimensional interface.
14. The user interaction method according to claim 13, wherein the precise positioning operations comprise:
moving the user's hand to control the spatial virtual pointer element so that it moves freely along all three dimensions of the three-dimensional interface;
recognizing two different states of the user's hand, and the position of the hand's spatial virtual pointer element on the three-dimensional interface at the moment the state changes, wherein the two states comprise a clenched-fist state and a state in which only the index finger is extended;
clicking a button on the three-dimensional interface; or
selecting a specific region of the three-dimensional interface.
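For illustration only: the "click a button" operation above ultimately needs a hit test between the pointer's position and a control's 3D bounds. Below is a minimal axis-aligned version; the coordinates and field names are invented for the sketch.

```python
def hit_test(pointer, button):
    """True if the pointer position lies inside the button's 3D bounding box."""
    return all(
        button[axis + "_min"] <= pointer[axis] <= button[axis + "_max"]
        for axis in ("x", "y", "z")
    )

button = {"x_min": 0.1, "x_max": 0.3,
          "y_min": 0.4, "y_max": 0.5,
          "z_min": 0.9, "z_max": 1.1}
print(hit_test({"x": 0.2, "y": 0.45, "z": 1.0}, button))  # True -> click fires
```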
15. The user interaction method according to claim 13, wherein the non-precise positioning operations comprise: hovering the hand; swiping the hand from right to left; swiping the hand from left to right; swiping the hand from top to bottom; swiping the hand from bottom to top; moving the two hands apart or together; or waving the hand.
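For illustration only: the non-precise operations of claim 15 can be approximated by classifying the hand's net displacement over a short window. The thresholds and axis conventions below are invented for the sketch.

```python
def classify_swipe(start, end, min_travel=0.15):
    """Classify net hand displacement (metres) into a coarse gesture."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) < min_travel and abs(dy) < min_travel:
        return "hover"
    if abs(dx) >= abs(dy):
        return "swipe_left" if dx < 0 else "swipe_right"
    return "swipe_down" if dy < 0 else "swipe_up"

print(classify_swipe((0.5, 0.3), (0.1, 0.32)))  # -> swipe_left
```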
16. The user interaction method according to claim 12, wherein the method further comprises
17. The user interaction method according to claim 12, wherein the method further comprises:
sensing the user's head movement information; and
providing, in real time, a three-dimensional interface display signal based on the user's real-time viewing angle.
18. The user interaction method according to claim 12, wherein the method further comprises:
capturing the user's voice input information;
determining a voice operation command according to the voice input information; and
providing the three-dimensional interface signal resulting from execution of the voice operation command.
PCT/CN2013/070608 2012-03-19 2013-01-17 User interaction system and method WO2013139181A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210071176.7A CN102789313B (en) 2012-03-19 2012-03-19 User interaction system and method
CN201210071176.7 2012-03-19

Publications (1)

Publication Number Publication Date
WO2013139181A1 true WO2013139181A1 (en) 2013-09-26

Family

ID=47154727

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/070608 WO2013139181A1 (en) 2012-03-19 2013-01-17 User interaction system and method

Country Status (2)

Country Link
CN (1) CN102789313B (en)
WO (1) WO2013139181A1 (en)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789313B (en) * 2012-03-19 2015-05-13 苏州触达信息技术有限公司 User interaction system and method
CN103905808A (en) * 2012-12-27 2014-07-02 北京三星通信技术研究有限公司 Device and method used for three-dimension display and interaction.
US20140191939A1 (en) * 2013-01-09 2014-07-10 Microsoft Corporation Using nonverbal communication in determining actions
CN103067727A (en) * 2013-01-17 2013-04-24 乾行讯科(北京)科技有限公司 Three-dimensional 3D glasses and three-dimensional 3D display system
CN104063042A (en) * 2013-03-21 2014-09-24 联想(北京)有限公司 Information processing method, device and electronic equipment
CN103226443A (en) * 2013-04-02 2013-07-31 百度在线网络技术(北京)有限公司 Method and device for controlling intelligent glasses and intelligent glasses
CN103530060B (en) * 2013-10-31 2016-06-22 京东方科技集团股份有限公司 Display device and control method, gesture identification method
CN104637079A (en) * 2013-11-07 2015-05-20 江浩 Experience method and experience system based on virtual home furnishing display
CN104134235B (en) * 2014-07-25 2017-10-10 深圳超多维光电子有限公司 Real space and the fusion method and emerging system of Virtual Space
US9652124B2 (en) * 2014-10-31 2017-05-16 Microsoft Technology Licensing, Llc Use of beacons for assistance to users in interacting with their environments
CN104407697A (en) * 2014-11-17 2015-03-11 联想(北京)有限公司 Information processing method and wearing type equipment
CN104503321A (en) * 2014-12-18 2015-04-08 赵爽 Ultralow-power wireless intelligent control system for body sensing or voice control
CN104504623B (en) * 2014-12-29 2018-06-05 深圳市宇恒互动科技开发有限公司 It is a kind of that the method, system and device for carrying out scene Recognition are perceived according to action
CN104536579B (en) * 2015-01-20 2018-07-27 深圳威阿科技有限公司 Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
CN105988562A (en) * 2015-02-06 2016-10-05 刘小洋 Intelligent wearing equipment and method for realizing gesture entry based on same
EP3112987A1 (en) * 2015-06-29 2017-01-04 Thomson Licensing Method and schemes for perceptually driven encoding of haptic effects
US10057078B2 (en) * 2015-08-21 2018-08-21 Samsung Electronics Company, Ltd. User-configurable interactive region monitoring
CN105159450B (en) * 2015-08-25 2018-01-05 中国运载火箭技术研究院 One kind is portable can interactive desktop level virtual reality system
CN105704468B (en) * 2015-08-31 2017-07-18 深圳超多维光电子有限公司 Stereo display method, device and electronic equipment for virtual and reality scene
CN105704478B (en) * 2015-08-31 2017-07-18 深圳超多维光电子有限公司 Stereo display method, device and electronic equipment for virtual and reality scene
CN105867600A (en) * 2015-11-06 2016-08-17 乐视移动智能信息技术(北京)有限公司 Interaction method and device
CN105630157A (en) * 2015-11-27 2016-06-01 东莞酷派软件技术有限公司 Control method, control device, terminal and control system
CN105301789A (en) * 2015-12-09 2016-02-03 深圳市中视典数字科技有限公司 Stereoscopic display device following human eye positions
CN105915877A (en) * 2015-12-27 2016-08-31 乐视致新电子科技(天津)有限公司 Free film watching method and device of three-dimensional video
CN105578174B (en) * 2016-01-26 2018-08-24 神画科技(深圳)有限公司 Interactive 3D display system and its 3D rendering generation method
US10976809B2 (en) * 2016-03-14 2021-04-13 Htc Corporation Interaction method for virtual reality
CN105867613A (en) * 2016-03-21 2016-08-17 乐视致新电子科技(天津)有限公司 Head control interaction method and apparatus based on virtual reality system
CN105955483A (en) * 2016-05-06 2016-09-21 乐视控股(北京)有限公司 Virtual reality terminal and visual virtualization method and device thereof
CN105975083B (en) * 2016-05-27 2019-01-18 北京小鸟看看科技有限公司 A kind of vision correction methods under reality environment
CN106095108B (en) * 2016-06-22 2019-02-05 华为技术有限公司 A kind of augmented reality feedback method and equipment
CN106200942B (en) * 2016-06-30 2022-04-22 联想(北京)有限公司 Information processing method and electronic equipment
CN106249886B (en) * 2016-07-27 2019-04-16 Oppo广东移动通信有限公司 The display methods and device of menu
CN107688573A (en) * 2016-08-04 2018-02-13 刘金锁 It is a kind of based on internet+interaction, visualization system and its application method
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system
CN106980362A (en) 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
CN106843465A (en) * 2016-10-18 2017-06-13 朱金彪 The operating method and device of Three-dimensional Display and use its glasses or the helmet
CN106502407A (en) * 2016-10-25 2017-03-15 宇龙计算机通信科技(深圳)有限公司 A kind of data processing method and its relevant device
CN106527709B (en) * 2016-10-28 2020-10-02 Tcl移动通信科技(宁波)有限公司 Virtual scene adjusting method and head-mounted intelligent device
CN106648071B (en) * 2016-11-21 2019-08-20 捷开通讯科技(上海)有限公司 System is realized in virtual reality social activity
CN106681497A (en) * 2016-12-07 2017-05-17 南京仁光电子科技有限公司 Method and device based on somatosensory control application program
CN106648096A (en) * 2016-12-22 2017-05-10 宇龙计算机通信科技(深圳)有限公司 Virtual reality scene-interaction implementation method and system and visual reality device
CN106873995B (en) * 2017-02-10 2021-08-17 联想(北京)有限公司 Display method and head-mounted electronic equipment
CN106951070A (en) * 2017-02-28 2017-07-14 上海创功通讯技术有限公司 It is a kind of to pass through VR equipment and the method and display system of virtual scene interaction
CN107016733A (en) * 2017-03-08 2017-08-04 北京光年无限科技有限公司 Interactive system and exchange method based on augmented reality AR
CN107122045A (en) * 2017-04-17 2017-09-01 华南理工大学 A kind of virtual man-machine teaching system and method based on mixed reality technology
CN109427100A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 A kind of assembling fittings method and system based on virtual reality
CN110120229A (en) * 2018-02-05 2019-08-13 北京三星通信技术研究有限公司 The processing method and relevant device of Virtual Reality audio signal
CN108416420A (en) * 2018-02-11 2018-08-17 北京光年无限科技有限公司 Limbs exchange method based on visual human and system
CN108452511A (en) * 2018-03-20 2018-08-28 广州市博顿运动装备股份有限公司 A kind of smart motion monitoring method
TWI702548B (en) * 2018-04-23 2020-08-21 財團法人工業技術研究院 Controlling system and controlling method for virtual display
CN108900698A (en) * 2018-05-31 2018-11-27 努比亚技术有限公司 Method, wearable device, terminal and the computer storage medium of controlling terminal
DE102019202462A1 (en) * 2019-02-22 2020-08-27 Volkswagen Aktiengesellschaft Portable terminal
CN111930231B (en) * 2020-07-27 2022-02-25 歌尔光学科技有限公司 Interaction control method, terminal device and storage medium
WO2023178586A1 (en) * 2022-03-24 2023-09-28 深圳市闪至科技有限公司 Human-computer interaction method for wearable device, wearable device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4735993B2 (en) * 2008-08-26 2011-07-27 ソニー株式会社 Audio processing apparatus, sound image localization position adjusting method, video processing apparatus, and video processing method
US20110213664A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202003298U (en) * 2010-12-27 2011-10-05 韩旭 Three-dimensional uncalibrated display interactive device
CN202067213U (en) * 2011-05-19 2011-12-07 上海科睿展览展示工程科技有限公司 Interactive three-dimensional image system
CN102789313A (en) * 2012-03-19 2012-11-21 乾行讯科(北京)科技有限公司 User interaction system and method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104244043A (en) * 2014-09-25 2014-12-24 苏州乐聚一堂电子科技有限公司 Motion-sensing image display system
CN104244041A (en) * 2014-09-25 2014-12-24 苏州乐聚一堂电子科技有限公司 Motion-sensing intelligent image play system
CN104244042A (en) * 2014-09-25 2014-12-24 苏州乐聚一堂电子科技有限公司 Activity sensing image playing system
CN108259738A (en) * 2017-11-20 2018-07-06 优视科技有限公司 Camera control method, equipment and electronic equipment
CN111045510A (en) * 2018-10-15 2020-04-21 中国移动通信集团山东有限公司 Man-machine interaction method and system based on augmented reality
CN112734044A (en) * 2020-11-26 2021-04-30 清华大学 Man-machine symbiosis method and system
CN115390663A (en) * 2022-07-27 2022-11-25 合壹(上海)展览有限公司 Virtual human-computer interaction method, system, equipment and storage medium
CN115390663B (en) * 2022-07-27 2023-05-26 上海合壹未来文化科技有限公司 Virtual man-machine interaction method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN102789313B (en) 2015-05-13
CN102789313A (en) 2012-11-21

Similar Documents

Publication Publication Date Title
WO2013139181A1 (en) User interaction system and method
EP3549109B1 (en) Virtual user input controls in a mixed reality environment
CN114341779B (en) Systems, methods, and interfaces for performing input based on neuromuscular control
CN102779000B (en) User interaction system and method
CN104410883B (en) The mobile wearable contactless interactive system of one kind and method
CN102789312B (en) A kind of user interactive system and method
US20160098094A1 (en) User interface enabled by 3d reversals
CN106527709B (en) Virtual scene adjusting method and head-mounted intelligent device
WO2012119371A1 (en) User interaction system and method
CN104298340A (en) Control method and electronic equipment
CN106648068A (en) Method for recognizing three-dimensional dynamic gesture by two hands
CN107291221A (en) Across screen self-adaption accuracy method of adjustment and device based on natural gesture
WO2021073743A1 (en) Determining user input based on hand gestures and eye tracking
Zhao et al. Comparing head gesture, hand gesture and gamepad interfaces for answering Yes/No questions in virtual environments
CN109828672A (en) It is a kind of for determining the method and apparatus of the human-machine interactive information of smart machine
Vyas et al. Gesture recognition and control
WO2023227072A1 (en) Virtual cursor determination method and apparatus in virtual reality scene, device, and medium
CN110717993B (en) Interaction method, system and medium of split type AR glasses system
WO2022179279A1 (en) Interaction method, electronic device, and interaction system
CN115026817A (en) Robot interaction method, device, electronic equipment and storage medium
CN109582136B (en) Three-dimensional window gesture navigation method and device, mobile terminal and storage medium
KR102156175B1 (en) Interfacing device of providing user interface expoliting multi-modality and mehod thereof
KR102612430B1 (en) System for deep learning-based user hand gesture recognition using transfer learning and providing virtual reality contents
KR101605740B1 (en) Method for recognizing personalized gestures of smartphone users and Game thereof
CN109976518A (en) A kind of man-machine interaction method based on PVDF sensor array

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13764528

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13764528

Country of ref document: EP

Kind code of ref document: A1