CN109960404B - Data processing method and device

Info

Publication number: CN109960404B
Application number: CN201910116185.5A
Authority: CN (China)
Prior art keywords: input data, different, data, relative position, instruction
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN109960404A (en)
Inventor: 张印帅
Original and current assignee: Lenovo Beijing Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)

Events:
  • Application filed by Lenovo Beijing Ltd
  • Priority to CN201910116185.5A
  • Publication of CN109960404A
  • Application granted
  • Publication of CN109960404B
  • Anticipated expiration

Classifications

    All entries fall under G (Physics) > G06 (Computing; calculating or counting) > G06F (Electric digital data processing):
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/0346 - Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/03545 - Pens or stylus
    • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a data processing method, which includes: acquiring first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body; determining, based on the first input data and the second input data, the relative position between the different operation bodies or the different parts and a data acquisition device; generating, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data; and executing the instruction. The invention also discloses a data processing apparatus.

Description

Data processing method and device
Technical Field
The present application relates to data processing technologies, and in particular, to a data processing method and apparatus.
Background
With the development of Augmented Reality (AR) and Virtual Reality (VR) technology, more and more terminals (e.g., mobile phones, smart glasses, desktop computers) display 3D content on their screens, and users can interact with the terminals displaying the 3D content by wearing AR or VR devices.
However, the operation range of existing AR and VR devices is fixed, which severely limits the user's range of motion: to interact with a terminal displaying 3D content, the user must hold both hands at fixed positions within the device's field of view and perform fixed gesture motions. For users unfamiliar with these gestures and instructions, the interaction is therefore difficult to complete. Moreover, holding both hands in a fixed posture for a long time easily fatigues the user, degrading the experience and reducing satisfaction with the AR or VR device.
Disclosure of Invention
Embodiments of the present application provide a data processing method and apparatus to address the above problem. The technical solutions of the embodiments of the present application are implemented as follows:
according to an aspect of an embodiment of the present invention, there is provided a data processing method, including:
acquiring first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body;
determining, based on the first input data and the second input data, a relative position between the different operation bodies or the different parts and a data acquisition device;
generating, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data; and
executing the instruction.
In the foregoing solution, before the acquiring of the first input data and the second input data, the method further includes:
detecting a contact state of the different operation bodies or the different parts with the data acquisition device;
accordingly, the acquiring of the first input data and the second input data includes:
acquiring the first input data and the second input data by using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data acquisition device.
In the foregoing solution, before the acquiring of the first input data and the second input data, the method includes:
establishing a mapping relation between real space and virtual space for each relative position, based on N relative positions between the different operation bodies or the different parts and the data acquisition device, where N is greater than or equal to 1;
accordingly, determining, based on the relative position, that the first input data and the second input data satisfy the instruction generation condition includes:
acquiring, based on the relative position, a first mapping relation of the relative position between real space and virtual space;
matching the first mapping relation against the mapping relation of each relative position in a mapping library; and
determining, when the matching result indicates that the first mapping relation is successfully matched against a mapping relation in the mapping library, that the first input data and the second input data satisfy the instruction generation condition.
In the foregoing solution, the generating of the instruction corresponding to the first input data and the second input data includes:
generating different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts;
or generating different instructions corresponding to the first input data and the second input data according to the different objects at which the different operation bodies or the different parts are directed.
In the foregoing solution, the generating of different instructions corresponding to the first input data and the second input data according to the different objects at which the different operation bodies or the different parts are directed includes:
determining, based on the first input data and the second input data, the object at which the different operation bodies or the different parts are directed;
matching the object against a preset object; and
generating, when the object is determined to be the preset object according to the matching result, an instruction corresponding to the object from the first input data and the second input data.
According to another aspect of the embodiments of the present invention, there is provided a data processing apparatus, including:
an acquisition unit configured to acquire first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body;
a determination unit configured to determine, based on the first input data and the second input data, a relative position between the different operation bodies or the different parts and a data acquisition device;
a generation unit configured to generate an instruction corresponding to the first input data and the second input data when it is determined, based on the relative position, that the first input data and the second input data satisfy an instruction generation condition; and
an execution unit configured to execute the instruction.
In the foregoing solution, the apparatus further includes:
a detection unit configured to detect a contact state of the different operation bodies or the different parts with the data acquisition device;
where the acquisition unit is configured to acquire the first input data and the second input data by using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data acquisition device.
In the foregoing solution, the apparatus further includes:
an establishing unit configured to establish a mapping relation between real space and virtual space for each relative position, based on the N relative positions between the different operation bodies or the different parts and the data acquisition device, where N is greater than or equal to 1;
the acquisition unit being further configured to acquire, based on the relative position, a first mapping relation of the relative position between real space and virtual space;
a matching unit configured to match the first mapping relation against the mapping relation of each relative position in the mapping library; and
the determination unit being specifically configured to determine that the first input data and the second input data satisfy the instruction generation condition when the matching result indicates that the first mapping relation is successfully matched against a mapping relation in the mapping library.
In the foregoing solution, the generation unit is configured to generate different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts, or according to the different objects at which the different operation bodies or the different parts are directed.
According to a third aspect of the embodiments of the present invention, there is provided a data processing device, the device including: a memory and a processor;
where the memory is configured to store a computer program operable on the processor;
and the processor is configured to perform the steps of any one of the above data processing methods when running the computer program.
According to the data processing method and apparatus, first input data and second input data are acquired, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body; a relative position between the different operation bodies or the different parts and a data acquisition device is determined based on the first input data and the second input data; when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data is generated; and the instruction is executed. In this way, the corresponding instruction is generated from the relative position between the different operation bodies or the different parts and the data acquisition device, so the user's operation range is not limited, greatly reducing the fatigue caused in the conventional technology by having to interact within a limited fixed space.
Drawings
Fig. 1 is a schematic flow chart of a data processing method provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of three interaction modes based on a smart pen and a second device in the embodiment of the present invention;
fig. 3 is a first schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
FIG. 4 is a second schematic diagram illustrating a structure of a data processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
So that the manner in which the features and aspects of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Fig. 1 is a schematic flow chart of a data processing method provided in an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step 101: acquiring first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body.
In the embodiment of the invention, the method is mainly applied to a first device. The first device may be a handheld device such as a handle, a mouse, a trackball, a mobile phone, or a smart pen, or a wearable device such as a smart watch, a smart ring, a smart bracelet, or a smart glove.
The following describes the embodiment of the present invention in detail, taking a smart pen as the first device:
when a user holds the intelligent pen with the right hand, the intelligent pen can detect the skin surface muscle of one finger (hereinafter referred to as a holding hand) of the right hand or the right hand through the electromyographic sensor arranged in the intelligent pen to obtain the electromyographic signal of the holding hand, and the detected electromyographic signal is compared with the preset electromyographic signal to obtain a comparison result. And when the comparison result represents that the detected electromyographic signal is greater than the preset electromyographic signal, determining that the intelligent pen and the holding hand are in a contact state currently.
When the smart pen determines, based on the contact state, that it is in contact with the holding hand, different types of sensors within the smart pen may be employed to acquire the first input data and second input data generated by the user.
Here, the first input data and the second input data may be generated by different operation bodies, or by different parts of the same operation body. For example, the different operation bodies are the user's right hand (i.e., the holding hand) and left hand (i.e., the non-holding hand), respectively; and the different parts of the same operation body are the thumb of the holding hand and its other fingers, respectively.
When the holding hand uses the smart pen to write or draw on a two-dimensional plane of the second device, such as a desktop or a screen, the smart pen can sense, through a touch sensor arranged at its pen tip, the touch data generated when the pen contacts the plane. An acceleration sensor arranged in the smart pen can measure the accelerations generated by the pen in different directions and produce acceleration data. The acceleration data can then be integrated twice to obtain the three-axis coordinates of the smart pen on the plane, so that data such as the tilt angle, rotation, and moving direction of the pen on the plane can be determined from the changes of these coordinates. Here, the three-axis coordinates and the touch data generated by the smart pen on the plane may be referred to as the first input data of the method. When the non-holding hand performs a gesture operation toward the second device, the second device can capture the gesture through a depth camera and obtain gesture depth information of the non-holding hand, which includes gesture image information and the relative position information between the non-holding hand and the second device. Here, the gesture depth information may be referred to as the second input data of the method.
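The double-integration step can be sketched as follows (a simplified illustration with hypothetical names; gravity removal, bias calibration, and drift correction, which any real implementation needs, are omitted):

```python
import numpy as np

def integrate_position(accel: np.ndarray, dt: float) -> np.ndarray:
    """Dead-reckon pen coordinates from accelerometer samples by double
    integration (trapezoidal rule). accel has shape (n, 3) in m/s^2; the
    pen is assumed to start at rest at the origin."""
    dv = 0.5 * (accel[1:] + accel[:-1]) * dt          # per-step velocity change
    velocity = np.vstack([np.zeros(3), np.cumsum(dv, axis=0)])
    dp = 0.5 * (velocity[1:] + velocity[:-1]) * dt    # per-step displacement
    return np.vstack([np.zeros(3), np.cumsum(dp, axis=0)])
```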
When the holding hand uses the smart pen to perform a pen-control operation toward the second device in three-dimensional space, the smart pen can measure, through a built-in attitude sensor, the accelerations and angular velocities generated in different directions, and obtain a measurement result. The attitude information of the smart pen in three-dimensional space is obtained by calculation from the acceleration and angular-velocity data in the measurement result. Here, the attitude information may serve as the first input data. When the non-holding hand performs a gesture operation toward the second device in three-dimensional space, the smart pen can capture the gesture through a depth camera arranged in the pen and obtain gesture depth information of the non-holding hand, which includes gesture image information and the relative position information between the non-holding hand and the smart pen. Here, the gesture depth information may be referred to as the second input data of the method.
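The patent does not specify how the acceleration and angular-velocity data are combined into attitude information; a complementary filter is one common choice. A sketch under that assumption, with hypothetical names and axis conventions:

```python
import math

def complementary_step(pitch: float, roll: float,
                       gyro_xyz: tuple, accel_xyz: tuple,
                       dt: float, alpha: float = 0.98) -> tuple:
    """One update step: integrate angular velocity for short-term accuracy
    and blend in the accelerometer's gravity direction for long-term
    stability. Angles in radians; the axis mapping depends on how the
    sensor is mounted in the pen and is assumed here."""
    gx, gy, _gz = gyro_xyz
    ax, ay, az = accel_xyz
    pitch_gyro = pitch + gy * dt   # assumed pitch rate about the y axis
    roll_gyro = roll + gx * dt     # assumed roll rate about the x axis
    pitch_accel = math.atan2(-ax, math.hypot(ay, az))
    roll_accel = math.atan2(ay, az)
    return (alpha * pitch_gyro + (1 - alpha) * pitch_accel,
            alpha * roll_gyro + (1 - alpha) * roll_accel)
```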
Multiple microphones may also be arranged in the second device, each at a different location. When the second device emits ultrasonic signals, the smart pen can simultaneously receive, through a built-in ultrasonic sensor, the signal emitted at each microphone location. Because the microphones are at different positions, the ultrasonic signals received by the sensor differ in phase and/or reception time. The coordinate position information of the smart pen relative to the second device can then be determined from these phase differences and/or reception-time differences. Here, the coordinate position information may be referred to as the first input data of the method.
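As an illustration of deriving coordinates from such measurements, the sketch below assumes the emissions are time-synchronized so that a per-microphone time of flight (rather than only differences) is available, and solves a linearized trilateration by least squares; all names are hypothetical:

```python
import numpy as np

def locate_pen(mic_positions: np.ndarray, tof: np.ndarray,
               speed_of_sound: float = 343.0) -> np.ndarray:
    """Estimate the pen's coordinates relative to the second device.
    mic_positions: shape (m, 3), known microphone positions.
    tof: shape (m,), ultrasonic times of flight in seconds (m >= 4 in 3D).
    Subtracting the first range equation from the others linearizes
    |x - p_i|^2 = d_i^2 into a least-squares system."""
    d = speed_of_sound * tof                     # range to each microphone
    p0, d0 = mic_positions[0], d[0]
    A = 2.0 * (mic_positions[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(mic_positions[1:]**2, axis=1) - np.sum(p0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```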
Here, the second device may be a tablet computer, a desktop computer, a smart phone, or another device having a touch screen.
Step 102: determining, based on the first input data and the second input data, the relative position between the different operation bodies or the different parts and the data acquisition device.
Here, since the coordinate information of the smart pen can be determined from the first input data, when the smart pen captures a gesture of the non-holding hand through its built-in depth camera, a pre-designed pattern can be projected onto the non-holding hand as a reference image (a coded light source) from the position indicated by the pen's coordinates: the structured light of the reference image is projected onto the non-holding hand, and the structured-light pattern reflected from the hand's surface is received, from which the coordinate information of the non-holding hand is obtained. The relative position between the non-holding hand and the holding hand in real space can then be determined based on the coordinate information of the smart pen and the coordinate information of the non-holding hand.
Step 103: generating, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data.
In the embodiment of the invention, after the smart pen determines the relative position between the non-holding hand and the holding hand in real space from the coordinate information of the smart pen and of the non-holding hand, the virtual relative position information corresponding to that real-space relative position can be obtained from a mapping library, based on the mapping relations between relative positions in real space and in virtual space. The instruction corresponding to the virtual relative position information is then acquired from an instruction library, based on the mapping relation between virtual relative position information and instructions, to obtain an acquisition result. When the acquisition result indicates that the instruction failed to be acquired, the current first input data and second input data do not satisfy the instruction generation condition, and no instruction is generated; when the acquisition result indicates that the instruction was acquired successfully, the first input data and the second input data satisfy the instruction generation condition, and the instruction corresponding to them is generated.
Here, the mapping relation between each relative position in real space and in virtual space in the mapping library may be established based on the N relative positions first generated between the different operation bodies, or the different parts of the same operation body, and the smart pen, where N is greater than or equal to 1.
According to the embodiment of the invention, the instruction corresponding to a relative position in virtual space is generated from the relative position between the holding hand and the non-holding hand in real space. On one hand, the user can interact with the second device without deliberately memorizing fixed gestures; on the other hand, interacting with the second device through the real-space relative position of the two hands realizes interaction in genuinely free space, greatly reducing the fatigue caused in the conventional technology by having to interact within a limited fixed space.
In the embodiment of the invention, when the smart pen receives second input data generated by a gesture of the non-holding hand, the second input data includes gesture data and the relative position data between the non-holding hand and the holding hand. The gesture data can therefore be extracted from the second input data and matched against the gesture data pre-stored in a gesture library to obtain a matching result. When the matching result indicates that the gesture match failed, the second input data does not satisfy the instruction generation condition, and no instruction is generated; when the matching result indicates that the gesture match succeeded, the second input data satisfies the instruction generation condition, and the instruction corresponding to the gesture is acquired from the instruction library based on the mapping relation between gestures and instructions.
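The patent leaves the matching algorithm open; a nearest-template sketch with a hypothetical distance threshold illustrates the failure and success branches:

```python
import numpy as np

def match_gesture(gesture: np.ndarray, gesture_library: dict,
                  max_distance: float = 0.2):
    """Match extracted gesture data against pre-stored templates; returns
    (gesture_name, matched). Each value in gesture_library is assumed to
    be a feature vector of the same length as `gesture`."""
    best_name, best_dist = None, float("inf")
    for name, template in gesture_library.items():
        dist = float(np.linalg.norm(gesture - template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > max_distance:
        return None, False   # match failed: no instruction is generated
    return best_name, True   # match succeeded: look up the instruction
```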
In the embodiment of the present invention, when the smart pen generates the instruction corresponding to the first input data and the second input data, different instructions may be generated according to the different input forms of the holding hand and the non-holding hand.
Here, the input forms include a two-dimensional plane input form, a three-dimensional space input form, a two-handed cooperative input form in three-dimensional space, and the like.
For example, when the holding hand performs a pen-control operation on the second device on a two-dimensional plane, the smart pen can detect, through the touch sensor, the touch data generated by the contact between the pen and the plane. The current input form can thus be determined to be the two-dimensional plane input form from the touch data, and an instruction for interacting with the two-dimensional data in the second device is then generated. For example, the instruction may be a writing or drawing instruction for two-dimensional data input in the second device.
When the holding hand performs a pen-control operation toward the second device in three-dimensional space, the smart pen can detect its attitude data in three-dimensional space through the attitude sensor, so the current input form can be determined to be the three-dimensional space input form from the attitude data, and an instruction for interacting with the three-dimensional data in the second device is then generated. For example, the instruction may be a model splitting, model assembling, or model rotating instruction for the three-dimensional model in the second device.
When the smart pen detects its own attitude data in three-dimensional space through the attitude sensor while the depth camera in the pen simultaneously collects the gesture depth data generated by a gesture of the non-holding hand in three-dimensional space, the current input form can be determined to be the two-handed cooperative input form in three-dimensional space from the relative position information between the non-holding hand and the holding hand carried in the gesture depth data, and an instruction for controlling the second device with both hands in cooperation is generated. For example, the instruction may be a volume-up or stop-playback instruction for an in-vehicle system.
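The three input forms above amount to a dispatch on the available sensor evidence; a hypothetical sketch:

```python
from enum import Enum, auto

class InputForm(Enum):
    TWO_D_PLANE = auto()       # pen tip touching a 2D plane
    THREE_D_SPACE = auto()     # pen attitude data alone
    THREE_D_BIMANUAL = auto()  # pen attitude plus depth-camera gesture data

def classify_input_form(touch_active: bool, attitude_active: bool,
                        gesture_depth_active: bool) -> InputForm:
    """Map sensor evidence to an input form: touch data implies the 2D
    plane form; attitude plus gesture depth data implies two-handed 3D
    cooperation; attitude alone implies the 3D space form."""
    if touch_active:
        return InputForm.TWO_D_PLANE
    if attitude_active and gesture_depth_active:
        return InputForm.THREE_D_BIMANUAL
    return InputForm.THREE_D_SPACE
```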
In the embodiment of the invention, the smart pen can also generate different instructions according to the different specific objects at which the holding hand and the non-holding hand are directed.
Here, the object at which the holding hand and the non-holding hand are directed may specifically be a third device; for example, the third device may be an information input device such as a keyboard or a tablet.
When the non-holding hand performs a typing or sliding operation on the keyboard, the depth camera on the smart pen can collect the gesture depth information of the non-holding hand, which includes a keyboard image and a gesture image. The gesture image is then matched against the gesture images pre-stored in the gesture library to obtain a matching result. When the gesture image is matched successfully, the keyboard image is matched against the preset device images in a device library to obtain a further matching result. When that result indicates that the keyboard image is matched successfully, the instruction corresponding to the gesture image and the keyboard image is acquired according to the mapping relation among gesture images, keyboard images, and instructions. For example, the instruction may be a text deletion instruction for keyboard entry, a scroll-bar pull-up instruction, or a scroll-bar pull-down instruction.
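The two-stage matching in this keyboard example can be sketched as follows (the nearest-neighbour matcher is a toy stand-in for a real image recognizer; all names and the tolerance are assumptions):

```python
import numpy as np

def match_feature(features: np.ndarray, library: dict, tol: float = 0.2):
    """Toy matcher: return the library key whose stored feature vector is
    closest to `features`, or None when nothing is close enough."""
    best, best_d = None, float("inf")
    for name, stored in library.items():
        d = float(np.linalg.norm(features - stored))
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= tol else None

def object_instruction(gesture_feats, device_feats,
                       gesture_library, device_library, instruction_map):
    """Both matches must succeed before an instruction is produced,
    mirroring the two-stage matching described above."""
    gesture = match_feature(gesture_feats, gesture_library)
    if gesture is None:
        return None                # gesture match failed: no instruction
    device = match_feature(device_feats, device_library)
    if device is None:
        return None                # object is not a preset object
    return instruction_map.get((gesture, device))  # e.g. "delete_text"
```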
Step 104: executing the instruction.
In the embodiment of the present invention, after the smart pen generates the instruction corresponding to the first input data and the second input data based on the relative position between the different operation bodies, or the different parts of the same operation body, and the smart pen, the instruction may be sent to a corresponding application in the second device, so as to start the application through the instruction, or to perform a processing operation on an object in the already-started application.
In the embodiment of the invention, the corresponding instruction is generated from the relative position information of the user's two hands in real space, so the user interacts with the second device in real free space without being constrained to a fixed region, which reduces the fatigue caused by repeatedly performing gesture input in a fixed space. In addition, because the gestures in the gesture library are whatever naturally interactive gestures the user's two hands first produced when interacting with the second device, that is, interaction gestures the user arrives at intuitively, the user's cognitive load for memorizing fixed gestures is greatly reduced, and the naturalness of the interaction between the user and the device is increased.
For example, gestures consistent with natural interaction may include two-handed grabbing, finger pinching, one-handed holding, two-handed moving, two-handed rotating, and the like.
In the embodiment of the invention, the smart pen can also switch its current interaction mode with the second device according to the type of the incoming data. Here, the data types include three-dimensional data and two-dimensional data.
For example, suppose a two-dimensional interaction mode is currently in use between the smart pen and the second device. When the smart pen receives first input data from the user, it can compare the type of the first input data with the most recent historical type to obtain a comparison result, and when the comparison result shows that the two types differ, switch directly to the interaction mode corresponding to the type of the first input data. For example, if the type of the first input data is the three-dimensional data type and the historical type is the two-dimensional data type, the two types differ, and the two-dimensional interaction mode is switched directly to the three-dimensional interaction mode corresponding to the three-dimensional data type. The smart pen can thus move smoothly from the two-dimensional to the three-dimensional interaction mode, making it more convenient for the user to perform different forms of gesture input and improving the user experience.
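A minimal sketch of this type-driven mode switch (names are illustrative):

```python
class InteractionModeSwitcher:
    """Track the most recent input-data type and switch the interaction
    mode when a differently-typed input arrives."""

    MODES = {"2d": "two-dimensional interaction mode",
             "3d": "three-dimensional interaction mode"}

    def __init__(self, initial_type: str = "2d"):
        self.history_type = initial_type  # type of the most recent input

    def on_input(self, data_type: str) -> str:
        # Compare the new input's type with the history type closest to
        # the current time; switch only when the two types differ.
        if data_type != self.history_type:
            self.history_type = data_type
        return self.MODES[self.history_type]

# Usage: switcher = InteractionModeSwitcher()
#        switcher.on_input("3d")  # -> "three-dimensional interaction mode"
```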
Fig. 2 is a schematic diagram of three interaction modes based on a smart pen and a second device in an embodiment of the present invention. As shown in Fig. 2, the scenario includes a first device (smart pen) 201 and a second device 202, where the first device 201 is in contact with a holding hand 203, and a non-holding hand 204 performs gesture operations toward the second device. The interaction mode represented by the solid-line symbol is a two-dimensional interaction form performed on the second device 202 by the holding hand 203 and the non-holding hand 204 on a two-dimensional plane.
When the first device 201 is a smart pen, the holding hand 203 can use it to perform pen-control operations on the second device 202 on a two-dimensional plane. During such operations, the touch sensor arranged at the pen tip of the first device 201 can detect the touch data generated when the device contacts the plane, and the acceleration sensor arranged in the first device 201 can measure the accelerations generated in different directions and produce acceleration data. The acceleration data can then be integrated twice to obtain the three-axis coordinates of the first device 201 on the plane, so that data such as its tilt angle, rotation, and moving direction on the plane can be determined from the changes of these coordinates. When the non-holding hand 204 performs a gesture operation toward the second device 202, the second device 202 can capture the gesture through its depth camera and obtain the gesture depth information of the non-holding hand 204, which includes gesture image information and the relative position information between the non-holding hand 204 and the second device 202. In this manner, two-dimensional interaction between the user's hands and the second device 202 is achieved.
The interaction mode represented by the dashed-line symbol is a three-dimensional interaction form performed toward the second device 202 by the holding hand 203 and the non-holding hand 204 in three-dimensional space. When the holding hand 203 performs a pen-control operation toward the second device 202 in three-dimensional space, the first device 201 can measure, through its built-in attitude sensor, the accelerations and angular velocities it generates in different directions, and the attitude information of the first device 201 in three-dimensional space is obtained by calculation from the measured acceleration and angular-velocity data. When the non-holding hand 204 performs a gesture operation toward the second device 202 in three-dimensional space, the first device 201 can capture the gesture through its built-in depth camera and obtain the gesture depth information of the non-holding hand 204, which includes gesture image information and the relative position information between the non-holding hand 204 and the first device 201. In this way, three-dimensional interaction between the user's two hands and the second device is achieved.
The interaction mode represented by the double-line symbol is a three-dimensional gesture interaction form performed toward the second device 202 by the holding hand 203 and the non-holding hand 204 in cooperation in three-dimensional space. When the two hands cooperate in three-dimensional space to operate on the second device 202, the first device 201 can detect its own attitude information through its built-in attitude sensor, so its coordinate information can be determined from the detected attitude. When the first device 201 captures a gesture of the non-holding hand 204 through its built-in depth camera, a pre-designed pattern can be projected onto the non-holding hand 204 as a reference image (a coded light source) from the position given by the coordinates of the first device 201: the structured light of the reference image is projected onto the non-holding hand 204, and the structured-light pattern reflected from the hand's surface is then received. Because the received pattern is deformed by the three-dimensional shape of the hand's surface, the coordinate information of the non-holding hand 204 in three-dimensional space can be determined from the position and degree of deformation of the received pattern on the depth camera. The relative position between the non-holding hand 204 and the holding hand 203 in real space is then determined from the coordinate information of the first device 201 and of the non-holding hand 204; based on the mapping relation between relative positions in real space and in virtual space, the corresponding virtual relative position information is obtained from the mapping library; the instruction corresponding to the virtual relative position information is acquired from the instruction library based on the mapping relation between virtual relative position information and instructions; and the instruction is executed. Spatial interaction with the second device through two-hand cooperation in three-dimensional space is thus achieved.
Fig. 3 is a first schematic structural diagram of a data processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus includes:
a data acquisition unit 301 configured to acquire first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body;
a determination unit 302 configured to determine, based on the first input data and the second input data, a relative position between the different operation bodies or the different parts and a data acquisition device;
an instruction generation unit 303 configured to generate an instruction corresponding to the first input data and the second input data when it is determined, based on the relative position, that the first input data and the second input data satisfy an instruction generation condition; and
an execution unit 304 configured to execute the instruction.
In the embodiment of the present invention, the apparatus further includes:
a detection unit 305 configured to detect a contact state of the different operation bodies or the different parts with the data acquisition device;
where the data acquisition unit 301 is specifically configured to acquire the first input data and the second input data by using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data acquisition device.
In the embodiment of the present invention, the apparatus further includes:
an establishing unit 306 configured to establish a mapping relation between real space and virtual space for each relative position, based on N relative positions between the different operation bodies or the different parts and the data acquisition device, where N is greater than or equal to 1;
the data acquisition unit 301 being further configured to acquire, based on the relative position, a first mapping relation of the relative position between real space and virtual space;
a matching unit 307 configured to match the first mapping relation against the mapping relations of the respective relative positions in the mapping library between real space and virtual space; and
the determination unit 302 being further configured to determine that the first input data and the second input data satisfy the instruction generation condition when, according to the matching result, the first mapping relation is successfully matched against the mapping relation corresponding to at least one relative position in the mapping library.
The instruction generation unit 303 is specifically configured to generate different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts, or according to the different objects at which the different operation bodies or the different parts are directed.
When the instruction generation unit 303 generates different instructions according to the different objects at which the different operation bodies or the different parts are directed, the determination unit 302 may specifically determine those objects based on the first input data and the second input data; the matching unit 307 then matches the object against a preset object; and the instruction generation unit 303 generates, when the object is determined to be the preset object according to the matching result, an instruction corresponding to the object from the first input data and the second input data.
In the embodiment of the invention, the apparatus may be a handheld device such as a handle, a mouse, a trackball, a mobile phone, or a smart pen, or a wearable device such as a smart watch, a smart ring, a smart bracelet, or a smart glove. By adopting a handheld or wearable electronic device as the sensing device and generating the corresponding instructions from the relative spatial position relationship between different operation bodies, or different parts of the same operation body, and the apparatus, the sensing device is given mobility. Moreover, because gestures defined by the relative spatial position between the hand and the sensors in the apparatus can be used directly as input data, free-space interaction between the two hands and the electronic device is realized in the true sense, greatly reducing the fatigue users experience with the fixed-space interaction mode of conventional VR devices.
Here, the gesture interaction using the relative spatial position relationship may be a zooming, rotating, or translating gesture performed by both hands at an arbitrary position.
When performing a zoom operation on a target object, it is difficult for a user to perceive the object's size through direct observation of a three-dimensional object; consequently, when the user performs a grab operation using the prior art, the size and distance the user perceives are often inconsistent with those of the actual object, and the grab fails. In the embodiment of the invention, the user holds or wears the electronic device, and the corresponding instruction is executed according to the relative spatial position between the hand holding the device and the hand not holding it. Because this interaction mode takes the user's own body as the spatial input dimension, the user can perform two-dimensional and three-dimensional interaction with any gesture and can therefore grab an object at whatever two-hand scale feels natural. This simplifies the difficulty of reasoning about three-dimensional space and reduces the load of memorizing fixed gestures.
In the embodiment of the invention, because the electronic device held or worn by the user is displaced along with the movement of the operation body, the spatial position at which the user interacts with the target object is displaced along with it; consequently, the coordinate changes between the user's two hands and three-dimensional space impose no separate cognitive load on the user. Based on the coordinate-change data established the first time the user grabs an object, the user forms, by autonomously establishing coordinates, a mapping relation of the two hands in three-dimensional space; the gesture data generated by the two hands in three-dimensional space is determined through this mapping relation, and the corresponding gesture instruction is generated.
It should be noted that when the data processing apparatus provided in the above embodiment implements gesture interaction with a device holding two-dimensional and three-dimensional data, only the division into the program modules described above is illustrated; in practical applications, the processing may be distributed to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the data processing apparatus provided in the above embodiment and the data processing method embodiment belong to the same concept; the specific implementation process is described in the method embodiment and is not repeated here.
Fig. 4 is a second schematic structural diagram of a data processing apparatus according to an embodiment of the present invention. As shown in Fig. 4, the apparatus includes: a communication module 401, a spatial positioning module 402, a posture sensing module 403, a gesture sensing module 404, and a touch sensing module 405.
The spatial positioning module 402, the posture sensing module 403, the gesture sensing module 404, and the touch sensing module 405 are configured to acquire first input data and second input data, where the first input data and the second input data are generated by different operation bodies or by different parts of the same operation body.
The communication module 401 is configured to determine, based on the first input data and the second input data, the relative position between the different operation bodies or the different parts and the data acquisition device; to generate, when it is determined based on the relative position that the first input data and the second input data satisfy the instruction generation condition, the instruction corresponding to the first input data and the second input data; and to execute the instruction.
In this embodiment, the spatial positioning module 402 is specifically configured to measure the positions of the holding hand and the non-holding hand in space, so as to determine first coordinate information of the holding hand and second coordinate information of the non-holding hand from the measurement result. Here, the first coordinate information is the first input data, and the second coordinate information is the second input data.
The posture sensing module 403 is specifically configured to measure the coordinates of the holding hand in three-dimensional space and determine the posture of the holding hand in three-dimensional space from the measurement result; the posture information corresponding to this posture may be the first input data.
The gesture sensing module 404 is specifically configured to collect the gesture depth data produced by the non-holding hand, where the gesture depth data includes gesture image data and the relative position data between the non-holding hand and the holding hand; here, the gesture depth data may be the second input data.
The touch sensing module 405 is specifically configured to measure the touch data generated when the first device contacts a two-dimensional plane while the holding hand uses the first device to operate on the second device on that plane. Here, the touch data may be the first input data.
The communication module 401 is specifically configured to generate a corresponding instruction according to the detection results of the spatial positioning module 402, the posture sensing module 403, the gesture sensing module 404, and the touch sensing module 405, and to execute the instruction.
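How the communication module 401 might fuse the four modules' detection results into an instruction can be sketched as follows (field names and the priority order are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensedFrame:
    """One snapshot of the four modules' detection results; each field is
    None when its module detected nothing in this frame."""
    hand_positions: Optional[tuple] = None   # spatial positioning module 402
    pen_attitude: Optional[tuple] = None     # posture sensing module 403
    gesture_depth: Optional[dict] = None     # gesture sensing module 404
    touch_point: Optional[tuple] = None      # touch sensing module 405

def fuse_to_instruction(frame: SensedFrame) -> Optional[str]:
    """Toy fusion rule for the communication module 401: prefer 2D touch
    input, then two-handed 3D cooperation, then single-handed pen control."""
    if frame.touch_point is not None:
        return "write_2d"            # two-dimensional plane input form
    if frame.pen_attitude is not None and frame.gesture_depth is not None:
        return "bimanual_3d"         # two-handed cooperation in 3D space
    if frame.pen_attitude is not None:
        return "pen_3d"              # three-dimensional space input form
    return None                      # instruction generation condition unmet
```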
It should be noted that when the data processing apparatus provided in the above embodiment implements gesture interaction with a device holding two-dimensional and three-dimensional data, only the division into the program modules described above is illustrated; in practical applications, the processing may be distributed to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the data processing apparatus provided in the above embodiment and the data processing method embodiment belong to the same concept; the specific implementation process is described in the method embodiment and is not repeated here.
Fig. 5 is a schematic structural diagram of a data processing device in the embodiment of the present invention, and as shown in fig. 5, the data processing device 500 may be a handle, a mouse, a trackball, a mobile phone, a smart pen, a smart watch, a smart ring, a smart bracelet, a smart glove, or the like. The data processing apparatus 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and a user interface 503. The various components in the data processing device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like, among others.
It will be appreciated that the memory 502 can be either volatile or non-volatile memory, and can include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 502 described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 502 in embodiments of the present invention is used to store various types of data to support the operation of the data processing apparatus 500. Examples of such data include: any computer programs for operating on the data processing apparatus 500, such as an operating system 5021 and application programs 5022. The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 5022 may contain various applications such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. The program for implementing the method according to the embodiment of the present invention may be included in the application program 5022.
The method disclosed by the above embodiments of the present invention may be applied to the processor 501 or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 501 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of the hardware and software modules in a decoding processor. The software module may be located in a storage medium in the memory 502; the processor 501 reads the information in the memory 502 and performs the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the data processing apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components for performing the foregoing methods.
Specifically, when the processor 501 runs the computer program, it executes: acquiring first input data and second input data, wherein the first input data and the second input data are generated by different operation bodies, or by different parts of the same operation body; determining a relative position between the different operation bodies or the different parts and a data acquisition device based on the first input data and the second input data; generating, based on the relative position, an instruction corresponding to the first input data and the second input data when the first input data and the second input data satisfy an instruction generation condition; and executing the instruction.
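Purely as an illustration of the flow just described, and not as the claimed implementation, the acquire-locate-check-execute pipeline might be sketched in Python as follows; the names InputData and process and the 0.5 threshold are assumptions of this sketch:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class InputData:
    """One sampled input (hypothetical structure): the position of an
    operation body, or of one of its parts, relative to the acquisition device."""
    position: Vec3

def relative_position(first: InputData, second: InputData) -> Vec3:
    """Derive the relative position of the two operation bodies or parts."""
    return tuple(a - b for a, b in zip(first.position, second.position))

def satisfies_generation_condition(rel: Vec3) -> bool:
    """Placeholder condition; the patent ties this to a mapping-library
    lookup (see the mapping sketch below)."""
    return all(abs(c) < 0.5 for c in rel)

def execute(instruction: str) -> None:
    print("executing:", instruction)

def process(first: InputData, second: InputData) -> Optional[str]:
    """Acquire -> locate -> check the generation condition -> generate and execute."""
    rel = relative_position(first, second)
    if not satisfies_generation_condition(rel):
        return None                     # condition not met: no instruction
    instruction = "instruction(rel={})".format(rel)
    execute(instruction)
    return instruction

# Usage: a thumb touching the device origin, an index finger 10 cm away.
process(InputData((0.0, 0.0, 0.0)), InputData((0.1, 0.0, 0.0)))
```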
When the processor 501 runs the computer program, it further executes: detecting a contact state of the different operation bodies or the different parts with the data acquisition device; and acquiring the first input data and the second input data using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data acquisition device.
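A minimal sketch of this contact-gated acquisition, under the assumption that a touch sensor samples the contacting body or part while a camera tracks the free one; the sensor names and the pairing policy are invented for illustration:

```python
from typing import List, Optional, Tuple

def acquire_with_mixed_sensors(contact_states: List[bool]) -> Optional[Tuple[str, str]]:
    """Once at least one operation body or part touches the device, pair a
    contact sensor with a non-contact sensor so both inputs are captured."""
    if not any(contact_states):
        return None                     # wait for contact before acquiring
    first_sensor = "touch_panel"        # samples the contacting body/part
    second_sensor = "depth_camera"      # tracks the body/part in free space
    return first_sensor, second_sensor

# Usage: the thumb is touching, the index finger is not.
print(acquire_with_mixed_sensors([True, False]))   # ('touch_panel', 'depth_camera')
```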
When the processor 501 runs the computer program, it further executes: establishing a mapping relation of each relative position in a real space and a virtual space based on N relative positions between the different operation bodies or the different parts and the data acquisition device, wherein N is greater than or equal to 1; acquiring, based on the relative position, a first mapping relation of the relative position in the real space and the virtual space; matching the first mapping relation against the mapping relations of the relative positions, in the real space and the virtual space, stored in a mapping library; and determining, according to the matching result, that the first input data and the second input data satisfy the instruction generation condition when the first mapping relation is successfully matched with the mapping relation corresponding to at least one relative position in the mapping library.
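One plausible realization of the mapping library keys it by quantized relative positions, so that nearby real-space positions retrieve the same real-to-virtual mapping; the 5 cm grid and the dictionary layout are assumptions of this sketch, not part of the disclosure:

```python
from typing import Callable, Dict, Iterable, Tuple

Vec3 = Tuple[float, float, float]
Key = Tuple[int, int, int]

CELL = 0.05   # assumed 5 cm grid; nearby positions share one library entry

def quantize(rel: Vec3) -> Key:
    """Coarse-grain a relative position so neighbors share a library key."""
    return tuple(round(c / CELL) for c in rel)

mapping_library: Dict[Key, Vec3] = {}

def establish(real_positions: Iterable[Vec3], to_virtual: Callable[[Vec3], Vec3]) -> None:
    """Record N >= 1 relative positions together with their virtual-space images."""
    for rel in real_positions:
        mapping_library[quantize(rel)] = to_virtual(rel)

def condition_met(rel: Vec3) -> bool:
    """The instruction generation condition: the current relative position's
    mapping can be acquired from the library."""
    return quantize(rel) in mapping_library

# Usage: enroll one relative position, then test a nearby one.
establish([(0.10, 0.0, 0.0)], lambda p: (p[0] * 2, p[1] * 2, p[2] * 2))
print(condition_met((0.11, 0.0, 0.0)))   # True: falls in the same 5 cm cell
```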
When the processor 501 runs the computer program, it further executes: generating different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts; or generating different instructions corresponding to the first input data and the second input data according to the different targets at which the different operation bodies or the different parts are directed.
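The first branch, different instructions for different input forms, reduces to a dispatch table in this hypothetical sketch; the form names and instruction strings are invented:

```python
from typing import Optional, Tuple

# Hypothetical dispatch: the same pair of inputs yields different instructions
# depending on the input forms the two operation bodies/parts adopt.
FORM_TO_INSTRUCTION = {
    ("touch", "swipe"): "rotate_virtual_object",
    ("touch", "pinch"): "scale_virtual_object",
    ("touch", "point"): "select_virtual_object",
}

def instruction_for(forms: Tuple[str, str]) -> Optional[str]:
    """Return the instruction registered for this pair of input forms, if any."""
    return FORM_TO_INSTRUCTION.get(forms)

print(instruction_for(("touch", "pinch")))   # scale_virtual_object
```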
When the processor 501 runs the computer program, it further executes: determining, based on the first input data and the second input data, the object at which the different operation bodies or the different parts are directed; matching the object with a preset object; and generating, when it is determined according to the matching result that the object is the preset object, an instruction corresponding to the object using the first input data and the second input data.
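The second, target-dependent branch might be sketched as: resolve the object the two inputs point at, match it against the preset object, and only then generate the instruction. The ray-cast stand-in and the preset object set are assumptions:

```python
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

PRESET_OBJECTS = {"virtual_keyboard", "model_handle"}   # assumed preset objects

def determine_object(first_pos: Vec3, second_pos: Vec3) -> str:
    """Stand-in for resolving the targeted object; a real system might
    ray-cast from the tracked positions into the virtual scene."""
    return "virtual_keyboard"

def maybe_generate(first_pos: Vec3, second_pos: Vec3) -> Optional[str]:
    obj = determine_object(first_pos, second_pos)
    if obj in PRESET_OBJECTS:                 # the matching step
        return "instruction_for({})".format(obj)
    return None                               # not the preset object: no instruction

print(maybe_generate((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))
```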
In an exemplary embodiment, the present invention further provides a computer readable storage medium, such as the memory 502, comprising a computer program executable by the processor 501 of the data processing apparatus 500 to perform the steps of the aforementioned method. The computer readable storage medium may be a memory such as an FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM; or it may be any device that includes one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
A computer-readable storage medium has a computer program stored thereon which, when executed by a processor, performs: acquiring first input data and second input data, wherein the first input data and the second input data are generated by different operation bodies, or by different parts of the same operation body; determining a relative position between the different operation bodies or the different parts and a data acquisition device based on the first input data and the second input data; generating, based on the relative position, an instruction corresponding to the first input data and the second input data when the first input data and the second input data satisfy an instruction generation condition; and executing the instruction.
The computer program, when executed by the processor, further performs: detecting a contact state of the different operation bodies or the different parts with the data acquisition device; and acquiring the first input data and the second input data using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data acquisition device.
The computer program, when executed by the processor, further performs: establishing a mapping relation of each relative position in a real space and a virtual space based on N relative positions between the different operation bodies or the different parts and the data acquisition device, wherein N is greater than or equal to 1; acquiring, based on the relative position, a first mapping relation of the relative position in the real space and the virtual space; matching the first mapping relation against the mapping relations of the relative positions, in the real space and the virtual space, stored in a mapping library; and determining, according to the matching result, that the first input data and the second input data satisfy the instruction generation condition when the first mapping relation is successfully matched with the mapping relation corresponding to at least one relative position in the mapping library.
The computer program, when executed by the processor, further performs: generating different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts; or generating different instructions corresponding to the first input data and the second input data according to the different targets at which the different operation bodies or the different parts are directed.
The computer program, when executed by the processor, further performs: determining, based on the first input data and the second input data, the object at which the different operation bodies or the different parts are directed; matching the object with a preset object; and generating, when it is determined according to the matching result that the object is the preset object, an instruction corresponding to the object using the first input data and the second input data.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
a first device acquiring first input data and second input data; wherein the first input data and the second input data are generated by different operation bodies, or by different parts of the same operation body; the operation body represents a human hand;
determining a relative position in a real space between the different operation bodies or the different parts based on the first input data and the second input data; wherein one of the different operation bodies or one of the different parts is in a contact state with the first device;
acquiring, based on the relative position, the mapping relation of the relative position in the real space and a virtual space from a mapping library to obtain an acquisition result; and when the acquisition result indicates that the mapping relation of the relative position in the real space and the virtual space is successfully acquired from the mapping library, determining that the first input data and the second input data satisfy an instruction generation condition, and generating an instruction corresponding to the first input data and the second input data; and
executing the instruction.
2. The method of claim 1, prior to said obtaining first input data and second input data, further comprising:
detecting a contact state of the different operation bodies or the different parts with the first device;
accordingly, the acquiring first input data and second input data comprises:
acquiring the first input data and the second input data using different types of sensors in the first device when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the first device.
3. The method of claim 1, prior to said acquiring first input data and second input data, the method further comprising:
establishing a mapping relation of each relative position in the real space and the virtual space based on N relative positions between the different operation bodies or the different parts and the first device; wherein N is greater than or equal to 1.
4. The method of claim 1, wherein the generating an instruction corresponding to the first input data and the second input data comprises:
generating different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts;
or generating different instructions corresponding to the first input data and the second input data according to the different targets at which the different operation bodies or the different parts are directed.
5. The method according to claim 4, wherein the generating different instructions corresponding to the first input data and the second input data according to the different targets at which the different operation bodies or the different parts are directed comprises:
determining, based on the first input data and the second input data, the object at which the different operation bodies or the different parts are directed;
matching the object with a preset object; and
when it is determined, according to the matching result, that the object is the preset object, generating an instruction corresponding to the object using the first input data and the second input data.
6. A data processing apparatus, the apparatus comprising:
an acquisition unit configured to acquire first input data and second input data, wherein the first input data and the second input data are generated by different operation bodies, or by different parts of the same operation body, and the operation body represents a human hand; the acquisition unit being further configured to acquire a first mapping relation of the relative position in a real space and a virtual space based on the relative position, in the real space, between the different operation bodies or the different parts;
a determination unit configured to determine the relative position, in the real space, between the different operation bodies or the different parts based on the first input data and the second input data, wherein one of the different operation bodies or one of the different parts is in a contact state with the data processing apparatus; the determination unit being further configured to determine that the first input data and the second input data satisfy an instruction generation condition when it is determined that the first mapping relation is successfully matched against the mapping relations of the relative positions in a mapping library;
a generating unit configured to generate an instruction corresponding to the first input data and the second input data when it is determined that the first input data and the second input data satisfy the instruction generation condition; and
an execution unit configured to execute the instruction.
7. The apparatus of claim 6, the apparatus further comprising:
a detection unit configured to detect a contact state of the different operation bodies or the different parts with the data processing apparatus;
wherein the acquisition unit is configured to acquire the first input data and the second input data using different types of sensors in the data processing apparatus when it is determined, based on the contact state, that at least one of the different operation bodies or at least one of the different parts is in contact with the data processing apparatus.
8. The apparatus of claim 6, the apparatus further comprising:
an establishing unit configured to establish the mapping relation of each relative position in the real space and the virtual space based on N relative positions between the different operation bodies or the different parts and the data processing apparatus, wherein N is greater than or equal to 1; and
a matching unit configured to match the first mapping relation against the mapping relations of the relative positions in the mapping library;
wherein the determination unit is specifically configured to determine that the first input data and the second input data satisfy the instruction generation condition when, according to the matching result, the first mapping relation is successfully matched against the mapping relations of the relative positions in the mapping library.
9. The apparatus according to claim 6, wherein the generating unit is configured to generate different instructions corresponding to the first input data and the second input data according to the different input forms adopted by the different operation bodies or the different parts; or to generate different instructions corresponding to the first input data and the second input data according to the different targets at which the different operation bodies or the different parts are directed.
10. A data processing apparatus, the apparatus comprising: a memory and a processor;
wherein the memory is configured to store a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 5.
CN201910116185.5A 2019-02-15 2019-02-15 Data processing method and device Active CN109960404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910116185.5A CN109960404B (en) 2019-02-15 2019-02-15 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910116185.5A CN109960404B (en) 2019-02-15 2019-02-15 Data processing method and device

Publications (2)

Publication Number Publication Date
CN109960404A CN109960404A (en) 2019-07-02
CN109960404B true CN109960404B (en) 2020-12-18

Family

ID=67023703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910116185.5A Active CN109960404B (en) 2019-02-15 2019-02-15 Data processing method and device

Country Status (1)

Country Link
CN (1) CN109960404B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111399654B (en) * 2020-03-25 2022-08-12 Oppo广东移动通信有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN112328156B (en) * 2020-11-12 2022-05-17 维沃移动通信有限公司 Input device control method and device and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007016408A1 (en) * 2007-03-26 2008-10-02 Ident Technology Ag Mobile communication device and input device therefor
KR101651568B1 (en) * 2009-10-27 2016-09-06 삼성전자주식회사 Apparatus and method for three-dimensional space interface
EP3584682B1 (en) * 2010-12-22 2021-06-30 zSpace, Inc. Three-dimensional tracking of a user control device in a volume
KR102170321B1 (en) * 2013-06-17 2020-10-26 삼성전자주식회사 System, method and device to recognize motion using gripped object
KR102334084B1 (en) * 2015-06-16 2021-12-03 삼성전자주식회사 Electronic apparatus and method for controlling thereof
CN107783674A (en) * 2016-08-27 2018-03-09 杨博 A kind of augmented reality exchange method and action induction felt pen
CN107817911B (en) * 2017-09-13 2024-01-30 杨长明 Terminal control method and control equipment thereof
CN108319369B (en) * 2018-02-01 2021-04-27 网易(杭州)网络有限公司 Driving interaction method and device, storage medium and processor
CN108519817A (en) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Exchange method, device, storage medium based on augmented reality and electronic equipment

Also Published As

Publication number Publication date
CN109960404A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN109074154B (en) Hovering touch input compensation in augmented and/or virtual reality
CN107918485B (en) Gesture detection system and method
EP3707584B1 (en) Method for tracking hand pose and electronic device thereof
EP3035164A1 (en) Wearable sensor for tracking articulated body-parts
US20110148755A1 (en) User interface apparatus and user interfacing method based on wearable computing environment
US10120444B2 (en) Wearable device
EP3685248B1 (en) Tracking of location and orientation of a virtual controller in a virtual reality system
JP2004280834A (en) Motion recognition system using virtual writing plane, and recognition method thereof
CA2553960A1 (en) Processing pose data derived from the pose of an elongate object
JP2010108500A (en) User interface device for wearable computing environmental base, and method therefor
US11209916B1 (en) Dominant hand usage for an augmented/virtual reality device
CN104254816A (en) A data input device
US11009949B1 (en) Segmented force sensors for wearable devices
CN108027648A (en) The gesture input method and wearable device of a kind of wearable device
CN109960404B (en) Data processing method and device
CN114529691A (en) Window control method, electronic device and computer readable storage medium
Bai et al. Asymmetric Bimanual Interaction for Mobile Virtual Reality.
Lang et al. A multimodal smartwatch-based interaction concept for immersive environments
CN117130518A (en) Control display method, head display device, electronic device and readable storage medium
Oh et al. Anywheretouch: Finger tracking method on arbitrary surface using nailed-mounted imu for mobile hmd
KR101686585B1 (en) A hand motion tracking system for a operating of rotary knob in virtual reality flighting simulator
KR102569857B1 (en) Method of providing practical skill training using hmd hand tracking
JP2021009552A (en) Information processing apparatus, information processing method, and program
KR102322968B1 (en) a short key instruction device using finger gestures and the short key instruction method using thereof
JP2020077069A (en) Feedback generating device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant