CN109960404A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN109960404A
CN109960404A (application CN201910116185.5A; granted as CN109960404B)
Authority
CN
China
Prior art keywords
input data, data, different, relative position, instruction
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201910116185.5A
Other languages
Chinese (zh)
Other versions
CN109960404B (en)
Inventor
张印帅 (Zhang Yinshuai)
Current Assignee (the listed assignee may be inaccurate)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201910116185.5A (granted as CN109960404B)
Publication of CN109960404A
Application granted
Publication of CN109960404B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F 3/017: Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/0354: Pointing devices with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F 3/03545: Pens or stylus
    • G06F 2203/00: Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01: Indexing scheme relating to G06F 3/01
    • G06F 2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a data processing method. The method includes: obtaining first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body; determining, based on the first input data and the second input data, the relative position between the different operating bodies (or the different parts) and a data acquisition device; when it is determined, based on the relative position, that the first input data and the second input data satisfy an instruction generation condition, generating the instruction corresponding to the first input data and the second input data; and executing the instruction. The invention further discloses a data processing apparatus.

Description

Data processing method and device
Technical field
This application relates to data processing technology, and in particular to a data processing method and device.
Background technique
With the development of augmented reality (AR) and virtual reality (VR) technology, more and more terminals (such as mobile phones, smart glasses, and desktop computers) have begun to display 3D content on their own screens, and a user can interact with a terminal displaying 3D content by wearing an AR device or a VR device.
However, because the operating range of existing AR and VR devices is fixed, the user's operating range is very limited: the user must place both hands at a fixed position within the field of view of the AR or VR device and perform fixed gesture actions in order to interact with the terminal displaying 3D content. As a result, users unfamiliar with the gesture actions and instructions find it difficult to complete the interaction with the terminal. Moreover, keeping both hands in a fixed posture for a long time easily causes user fatigue, degrades the user experience, and reduces user satisfaction with the AR or VR device.
Summary of the invention
To address the above problems, the technical solutions of the embodiments of the present application are implemented as follows:
According to one aspect of the embodiments of the present invention, a data processing method is provided. The method includes:
obtaining first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body;
determining, based on the first input data and the second input data, the relative position between the different operating bodies (or the different parts) and a data acquisition device;
when it is determined, based on the relative position, that the first input data and the second input data satisfy an instruction generation condition, generating the instruction corresponding to the first input data and the second input data; and
executing the instruction.
In the above scheme, before obtaining the first input data and the second input data, the method further includes:
detecting the contact state between the different operating bodies (or the different parts) and the data acquisition device.
Correspondingly, obtaining the first input data and the second input data includes:
when it is determined, based on the contact state, that at least one of the different operating bodies (or at least one of the different parts) is in contact with the data acquisition device, obtaining the first input data and the second input data using different types of sensors in the data acquisition device.
In the above scheme, before obtaining the first input data and the second input data, the method includes:
establishing, for each of N relative positions between the different operating bodies (or the different parts) and the data acquisition device, a mapping relation between real space and virtual space, where N is greater than or equal to 1.
Correspondingly, determining, based on the relative position, that the first input data and the second input data satisfy the instruction generation condition includes:
obtaining, based on the relative position, a first mapping relation of the relative position between real space and virtual space;
matching the first mapping relation against the mapping relation of each relative position in the mapping library; and
when the matching result indicates that the first mapping relation matches the mapping relation of a relative position in the mapping library, determining that the first input data and the second input data satisfy the instruction generation condition.
In the above scheme, generating the instruction corresponding to the first input data and the second input data includes:
generating different instructions for the first input data and the second input data according to the different input forms used by the different operating bodies or the different parts;
or generating different instructions for the first input data and the second input data according to the different objects targeted by the different operating bodies or the different parts.
In the above scheme, generating different instructions according to the different objects targeted by the different operating bodies or the different parts includes:
determining, based on the first input data and the second input data, the object targeted by the different operating bodies or the different parts;
matching the object against a preset object; and
when the matching result indicates that the object is the preset object, generating from the first input data and the second input data the instruction corresponding to the object.
According to another aspect of the embodiments of the present invention, a data processing apparatus is provided. The apparatus includes:
an acquiring unit, configured to obtain first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body;
a determination unit, configured to determine, based on the first input data and the second input data, the relative position between the different operating bodies (or the different parts) and a data acquisition device;
a generation unit, configured to generate, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, the instruction corresponding to the first input data and the second input data; and
an execution unit, configured to execute the instruction.
In the above scheme, the apparatus further includes:
a detection unit, configured to detect the contact state between the different operating bodies (or the different parts) and the data acquisition device.
The acquiring unit is specifically configured to obtain the first input data and the second input data using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operating bodies (or at least one of the different parts) is in contact with the data acquisition device.
In the above scheme, the apparatus further includes:
an establishing unit, configured to establish, for each of N relative positions between the different operating bodies (or the different parts) and the data acquisition device, a mapping relation between real space and virtual space, where N is greater than or equal to 1.
The acquiring unit is further configured to obtain, based on the relative position, a first mapping relation of the relative position between real space and virtual space.
A matching unit is configured to match the first mapping relation against the mapping relation of each relative position in the mapping library.
The determination unit is specifically configured to determine, when the matching result indicates that the first mapping relation matches the mapping relation of a relative position in the mapping library, that the first input data and the second input data satisfy the instruction generation condition.
In the above scheme, in the apparatus:
the generation unit is specifically configured to generate different instructions for the first input data and the second input data according to the different input forms used by the different operating bodies or the different parts; or to generate different instructions for the first input data and the second input data according to the different objects targeted by the different operating bodies or the different parts.
According to a third aspect of the embodiments of the present invention, a data processing apparatus is provided. The apparatus includes a memory and a processor, where:
the memory is configured to store a computer program runnable on the processor; and
the processor is configured to execute, when running the computer program, the steps of any one of the above data processing methods.
With the data processing method and apparatus provided herein, first input data and second input data are obtained, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body; the relative position between the different operating bodies (or the different parts) and a data acquisition device is determined based on the first input data and the second input data; when it is determined, based on the relative position, that the first input data and the second input data satisfy an instruction generation condition, the instruction corresponding to the first input data and the second input data is generated; and the instruction is executed. In this way, by generating the corresponding instruction from the relative position between the different operating bodies (or different parts) and the data acquisition device, the user's operating range is no longer restricted, and the fatigue caused in traditional technology by forcing the user to interact within a limited, fixed space is significantly reduced.
Brief description of the drawings
Fig. 1 is a flow diagram of a data processing method provided in an embodiment of the present invention;
Fig. 2 is a schematic diagram of three interaction modes between a smart pen and a second device in an embodiment of the present invention;
Fig. 3 is structural schematic diagram 1 of a data processing apparatus provided in an embodiment of the present invention;
Fig. 4 is structural composition schematic diagram 2 of a data processing apparatus in an embodiment of the present invention;
Fig. 5 is structural schematic diagram 2 of a data processing apparatus provided in an embodiment of the present invention.
Detailed description of embodiments
To provide a fuller understanding of the features and technical content of the present invention, the implementation of the invention is described in detail below with reference to the accompanying drawings, which are provided for reference only and are not intended to limit the present invention.
Fig. 1 is a flow diagram of a data processing method provided in an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step 101: obtain first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body.
In the embodiment of the present invention, the method is mainly applied to a first device. The first device may be a handheld device such as a game controller, mouse, trackball, mobile phone, or smart pen, or a wearable device such as a smartwatch, smart ring, smart bracelet, or smart glove.
The embodiment of the present invention is described in detail below by taking the first device as a smart pen as an example:
When the user holds the smart pen in the right hand, the smart pen can use a myoelectric (EMG) sensor arranged in the pen to detect the skin-surface muscles of the right hand or of a certain finger of the right hand (hereinafter, the gripping hand), obtain the EMG signal of the gripping hand, and compare the detected EMG signal with a preset EMG signal to obtain a comparison result. When the comparison result indicates that the detected EMG signal is greater than the preset EMG signal, it is determined that the smart pen and the gripping hand are currently in a contact state.
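For illustration only (not part of the patent disclosure), the contact-state check described above can be sketched as a simple threshold comparison; the sensor readings, the averaging window, and the threshold value below are all assumptions:

```python
def in_contact(emg_samples, preset_threshold=0.5):
    """Decide whether the gripping hand is in contact with the pen.

    Averages the recent EMG samples to suppress noise, then compares
    the result with a preset EMG level, mirroring the comparison
    described in the text.
    """
    if not emg_samples:
        return False
    mean_level = sum(emg_samples) / len(emg_samples)
    return mean_level > preset_threshold

# A relaxed hand produces low readings; a gripping hand produces higher ones.
print(in_contact([0.1, 0.2, 0.15]))  # False: below the preset level
print(in_contact([0.8, 0.9, 0.85]))  # True: grip detected
```

In practice the preset level would be calibrated per user, since EMG amplitude varies between individuals.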
When the smart pen determines, based on the contact state, that the pen and the gripping hand are in contact, it can use different types of sensors in the pen to obtain the first input data and the second input data generated by the user.
Here, the first input data and the second input data may be generated by different operating bodies, or by different parts of the same operating body. For example, the different operating bodies may be the user's right hand (the gripping hand) and left hand (the non-gripping hand), and the different parts of the same operating body may be the thumb of the gripping hand and the other fingers.
When the gripping hand uses the smart pen to perform pen-control operations such as writing and drawing on a two-dimensional surface (for example a desktop or the screen of a second device), the smart pen can perceive, through a touch sensor arranged at its tip, the touch data generated when the pen contacts the two-dimensional surface. An acceleration sensor arranged in the pen can measure the acceleration the pen generates in different directions and obtain acceleration data. The obtained acceleration data can then be double-integrated to compute the three-axis coordinates of the pen on the two-dimensional surface, so that the tilt angle, rotation, and moving direction of the pen on the surface can be determined from the changes in those coordinates. Here, the three-axis coordinates and the touch data generated on the two-dimensional surface may serve as the first input data of this method. When the non-gripping hand performs a gesture operation on the second device, the smart pen can collect the gesture data of the non-gripping hand through a depth camera and obtain gesture depth information of the non-gripping hand, which contains the gesture image information of the non-gripping hand and the relative position information between the non-gripping hand and the second device. Here, the gesture depth information may serve as the second input data of this method.
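For illustration only, the double integration mentioned above can be sketched along one axis with the trapezoidal rule; the sample rate and readings are assumed values, and a real implementation would also need to correct for sensor bias and drift:

```python
def double_integrate(accel, dt):
    """Integrate acceleration samples twice to estimate displacement.

    accel: acceleration samples along one axis (m/s^2)
    dt: time between samples (s)
    Uses the trapezoidal rule for both integrations:
    acceleration -> velocity -> position.
    """
    velocity = 0.0
    position = 0.0
    for i in range(1, len(accel)):
        v_prev = velocity
        velocity += 0.5 * (accel[i - 1] + accel[i]) * dt  # a -> v
        position += 0.5 * (v_prev + velocity) * dt        # v -> x
    return position

# Constant 1 m/s^2 for 1 s should give x = 0.5 * a * t^2 = 0.5 m.
x = double_integrate([1.0] * 11, dt=0.1)
print(round(x, 3))  # 0.5
```

Repeating this for each axis yields the three-axis coordinates from which tilt, rotation, and moving direction can be derived.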
When the gripping hand uses the smart pen to perform pen-control operations on the second device in three-dimensional space, the smart pen can measure, through an attitude sensor arranged in the pen, the acceleration and angular velocity the pen generates in different directions, and obtain a measurement result. The acceleration data and angular velocity data in the measurement result are computed to obtain the attitude information of the pen in three-dimensional space. Here, the attitude information may serve as the first input data. When the non-gripping hand performs a gesture operation on the second device in three-dimensional space, the smart pen can collect the gesture of the non-gripping hand through the depth camera arranged in the pen and obtain the gesture depth information of the non-gripping hand, which contains the gesture image information of the non-gripping hand and the relative position information between the non-gripping hand and the pen. Here, the gesture depth information may serve as the second input data of this method.
A plurality of microphones may also be arranged in the second device, each at a different position. When the second device emits ultrasonic signals, the smart pen can receive, through an ultrasonic sensor arranged in the pen, the ultrasonic signals emitted via each microphone. Because each microphone is at a different position, there are phase differences and/or signal-arrival-time differences between the ultrasonic signals the pen receives through the ultrasonic sensor. The coordinate position of the pen relative to the second device is then determined from these phase differences and/or arrival-time differences. Here, the coordinate position information may serve as the first input data of this method.
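For illustration only, acoustic positioning from arrival times can be sketched as follows. This is a simplified 2D time-of-arrival variant that assumes a shared clock (so distances follow directly from arrival times); the emitter layout and positions are invented for the example, and a pure time-difference-of-arrival solution, as the text also allows, would be more involved:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, roughly, at room temperature

def locate_pen_2d(emitters, arrival_times, emit_time=0.0):
    """Estimate the pen's 2D position from ultrasonic arrival times.

    emitters: three (x, y) transducer positions on the second device
    arrival_times: time each signal reached the pen (s)
    Each distance is speed_of_sound * (arrival - emission); subtracting
    the circle equations pairwise gives a 2x2 linear system for (x, y).
    """
    d = [SPEED_OF_SOUND * (t - emit_time) for t in arrival_times]
    (x1, y1), (x2, y2), (x3, y3) = emitters
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Pen actually at (0.3, 0.4); emitters at three corners of the device.
emitters = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = (0.3, 0.4)
times = [math.dist(true_pos, e) / SPEED_OF_SOUND for e in emitters]
print(locate_pen_2d(emitters, times))  # approximately (0.3, 0.4)
```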
Here, the second device may be a device with a touch screen, such as a tablet computer, desktop computer, or smartphone.
Step 102: determine, based on the first input data and the second input data, the relative position between the different operating bodies (or the different parts) and the data acquisition device.
Here, because the coordinate information of the smart pen can be determined from the first input data, when the smart pen collects the gesture of the non-gripping hand through the depth camera arranged in the pen, a pre-designed pattern can be projected onto the non-gripping hand as a reference image (a coded light source) at the position indicated by the pen's coordinate information, and the structured-light pattern reflected from the surface of the non-gripping hand is then received. Because the received structured-light pattern is deformed by the three-dimensional shape of the hand's surface, the three-dimensional coordinate information of the non-gripping hand can be determined from the position of the received pattern in the depth camera and its degree of deformation. The relative position in real space between the non-gripping hand and the gripping hand can then be determined from the coordinate information of the smart pen and the coordinate information of the non-gripping hand.
Step 103: when it is determined, based on the relative position, that the first input data and the second input data satisfy the instruction generation condition, generate the instruction corresponding to the first input data and the second input data.
In the embodiment of the present invention, after the smart pen determines the relative position in real space between the non-gripping hand and the gripping hand from the coordinate information of the pen and of the non-gripping hand, it can also obtain, from the mapping library, the virtual relative position information in virtual space corresponding to that real-space relative position, based on the real-space-to-virtual-space mapping relation of each relative position. Then, based on the mapping relation between each piece of virtual relative position information and an instruction, the instruction corresponding to the virtual relative position information is obtained from the instruction library, yielding an acquisition result. When the acquisition result indicates that the instruction acquisition failed, the current first input data and second input data do not satisfy the instruction generation condition, and no instruction is generated; when the acquisition result indicates that the instruction was obtained successfully, the first input data and the second input data satisfy the instruction generation condition, and the instruction corresponding to the first input data and the second input data is generated.
Here, the real-space-to-virtual-space mapping relation of each relative position in the mapping library may specifically be established from the N real-space/virtual-space relative positions generated the first time between the different operating bodies (or the different parts of the same operating body) and the smart pen, where N is greater than or equal to 1.
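For illustration only, the two-stage lookup described above (real-space relative position to virtual-space position, then virtual-space position to instruction) can be sketched as a pair of dictionaries; the quantisation step, library contents, and instruction names are all invented for the example:

```python
def quantize(rel_pos, step=0.1):
    """Bucket a real-space relative position so that nearby positions
    map to the same mapping-library key."""
    return tuple(round(c / step) for c in rel_pos)

# Mapping library: quantized real-space relative position -> virtual position.
mapping_library = {
    (0, 0, 2): "virtual_above",
    (3, 0, 0): "virtual_beside",
}
# Instruction library: virtual position -> instruction.
instruction_library = {
    "virtual_above": "MODEL_ROTATE",
    "virtual_beside": "MODEL_SPLIT",
}

def generate_instruction(rel_pos):
    """Return the instruction for a relative position, or None when the
    instruction generation condition is not satisfied."""
    virtual = mapping_library.get(quantize(rel_pos))
    if virtual is None:
        return None  # acquisition failed: no instruction generated
    return instruction_library.get(virtual)

print(generate_instruction((0.02, -0.01, 0.21)))  # MODEL_ROTATE
print(generate_instruction((9.0, 9.0, 9.0)))      # None
```

Both libraries would be populated during the first interaction session, matching the N first-time relative positions mentioned above.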
In the embodiment of the present invention, the instruction corresponding in virtual space to the real-space relative position between the gripping hand and the non-gripping hand is generated from that relative position. On the one hand, the user does not have to deliberately memorize fixed gestures to interact with the second device; on the other hand, interacting with the second device through the real-space relative position between the gripping hand and the non-gripping hand enables free-space interaction in the true sense, greatly reducing the fatigue caused in traditional technology by forcing the user to interact within a limited, fixed space.
In the embodiment of the present invention, when the smart pen receives the second input data generated by a gesture input of the non-gripping hand, since the second input data contains both the gesture data and the relative position data between the non-gripping hand and the gripping hand, the pen can also extract the gesture data from the second input data and match it against the gesture data prestored in the gesture library to obtain a matching result. When the matching result indicates that the gesture match failed, the second input data does not satisfy the instruction generation condition and no instruction is generated; when the matching result indicates a successful gesture match, the second input data satisfies the instruction generation condition, and the instruction corresponding to the gesture is obtained from the instruction library based on the mapping relation between each gesture and an instruction.
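For illustration only, matching a gesture against a prestored library can be sketched as a nearest-neighbour comparison over feature vectors; the feature values, distance metric, and acceptance threshold below are assumptions, not details from the patent:

```python
import math

# Gesture library: name -> prestored feature vector (illustrative values).
gesture_library = {
    "pinch": [0.9, 0.1, 0.0],
    "grab":  [0.2, 0.8, 0.5],
}

def match_gesture(features, threshold=0.3):
    """Return the best-matching gesture name, or None if no library
    entry is close enough (i.e. the gesture match failed)."""
    best_name, best_dist = None, float("inf")
    for name, ref in gesture_library.items():
        dist = math.dist(features, ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match_gesture([0.85, 0.15, 0.05]))  # pinch
print(match_gesture([0.0, 0.0, 0.0]))     # None (match failed)
```

A successful match would then be followed by the gesture-to-instruction lookup described in the text.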
In the embodiment of the present invention, when generating the instruction corresponding to the first input data and the second input data, the smart pen can also generate different instructions according to the different input forms obtained from the gripping hand and the non-gripping hand.
Here, the input forms include a two-dimensional-surface input form, a three-dimensional-space input form, a two-handed cooperative input form in three-dimensional space, and so on.
For example, when the gripping hand performs a pen-control operation on the second device on a two-dimensional surface, the smart pen can detect, through the touch sensor, the touch data generated when the pen contacts the surface. It can therefore determine from the touch data that the current input form is the two-dimensional-surface input form, and generate an instruction for interacting with two-dimensional data in the second device, such as a writing instruction or a drawing instruction for two-dimensional data input in the second device.
When the gripping hand performs a pen-control operation on the second device in three-dimensional space, the smart pen can detect its attitude data in three-dimensional space through the attitude sensor, determine from the attitude data that the current input form is the three-dimensional-space input form, and generate an instruction for interacting with three-dimensional data in the second device, such as a model-splitting, model-assembly, or model-rotation instruction for a three-dimensional model in the second device.
When, while the smart pen detects its attitude data in three-dimensional space through the attitude sensor, the depth camera in the pen also collects the gesture depth data generated by a gesture the non-gripping hand inputs in three-dimensional space, the pen can determine, from the relative position information between the non-gripping hand and the gripping hand carried in the gesture depth data, that the current input form is the two-handed cooperative input form in three-dimensional space, and generate an instruction in which both hands cooperate to control the second device, such as a volume-up instruction or a stop-playback instruction for an in-vehicle system.
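For illustration only, this input-form decision amounts to a dispatch on which sensor readings are present; the field names and instruction names are assumptions:

```python
def classify_input_form(touch_data=None, attitude_data=None, gesture_depth=None):
    """Pick the current input form from which sensor readings exist."""
    if attitude_data is not None and gesture_depth is not None:
        return "3d_two_handed"   # attitude + non-gripping-hand gesture
    if attitude_data is not None:
        return "3d_space"        # pen attitude only
    if touch_data is not None:
        return "2d_surface"      # tip touching a 2D surface
    return "unknown"

# Each input form leads to a different family of instructions.
instruction_for_form = {
    "2d_surface":    "WRITE_2D",
    "3d_space":      "MODEL_ROTATE_3D",
    "3d_two_handed": "VOLUME_UP",
}

form = classify_input_form(attitude_data={"pitch": 10}, gesture_depth={"hands": 2})
print(form, instruction_for_form[form])  # 3d_two_handed VOLUME_UP
```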
In the embodiment of the present invention, the smart pen can also generate different instructions according to the different objects targeted by the gripping hand and the non-gripping hand.
Here, the object targeted by the gripping hand and the non-gripping hand may specifically be a third device; for example, the third device may be an information input device such as a keyboard or a tablet.
When the non-gripping hand performs a typing or sliding operation on the keyboard, the depth camera in the smart pen can collect the gesture depth information of the non-gripping hand, which contains a keyboard image and a gesture image. The gesture image is then matched against the gesture images prestored in the gesture library to obtain a matching result. When the matching result indicates a successful gesture-image match, the keyboard image is matched against the device images preset in the device library to obtain a matching result. When that matching result indicates a successful keyboard-image match, the instruction corresponding to the gesture image and the keyboard image is obtained according to the mapping relation between the gesture image, the keyboard image, and instructions. For example, the instruction may be a text-deletion instruction for keyboard input, a scroll-bar-up instruction, a scroll-bar-down instruction, and so on.
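For illustration only, the gesture-plus-object matching can be sketched as a lookup keyed on both the recognized gesture and the recognized device; every name below is invented for the example:

```python
# (gesture, target object) -> instruction; contents are illustrative.
gesture_object_instructions = {
    ("swipe_left", "keyboard"): "DELETE_TEXT",
    ("swipe_up",   "keyboard"): "SCROLL_BAR_UP",
    ("swipe_down", "keyboard"): "SCROLL_BAR_DOWN",
}

PRESET_OBJECTS = {"keyboard", "tablet"}  # device library

def instruction_for(gesture, detected_object):
    """Generate an instruction only when the detected object is one of
    the preset objects and the (gesture, object) pair is known."""
    if detected_object not in PRESET_OBJECTS:
        return None  # object match failed: no instruction generated
    return gesture_object_instructions.get((gesture, detected_object))

print(instruction_for("swipe_left", "keyboard"))    # DELETE_TEXT
print(instruction_for("swipe_left", "coffee_mug"))  # None
```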
Step 104: the instruction is executed.
In an embodiment of the present invention, after the smart pen generates the instruction corresponding to the first input data and the second input data based on the relative position between the different operating bodies, or between different parts of the same operating body, and the smart pen, it may send the instruction to a corresponding application in the second electronic device, so as to start the application through the instruction, or to execute a processing operation on an object in an application that has already been started.
In an embodiment of the present invention, the corresponding instruction is generated from the relative position information of the user's two hands in real space. The user is therefore not constrained to a fixed space when interacting with the second device and can interact in genuinely free space, which reduces the fatigue caused by repeatedly performing gesture input within a fixed space. In addition, because the gestures in the gesture library are all natural interaction gestures produced when the user's two hands first interacted with the second device, that is, interaction gestures the user associates with intuitively, the user's cognitive load for memorizing fixed gestures is greatly reduced, increasing the naturalness of the interaction between the user and the device.

For example, gestures that conform to natural interaction may include two-handed grabbing, finger pinching, one-handed holding, two-handed movement, two-handed rotation, and the like.
In an embodiment of the present invention, the smart pen may also switch the current interaction mode with the second device according to the type of the data. Here, the types of data include three-dimensional data and two-dimensional data.

For example, suppose the current interaction mode between the smart pen and the second device is a two-dimensional interaction mode. When the smart pen receives the first input data input by the user, it may compare the type of the first input data with the most recent historical type to obtain a comparison result. When the comparison result shows that the type of the first input data differs from the historical type, the smart pen switches directly to the interaction mode corresponding to the type of the first input data. For example, if the type of the first input data is a three-dimensional data type and the historical type is a two-dimensional data type, then upon determining that the two types differ, the two-dimensional interaction mode corresponding to the two-dimensional data type is switched directly to the three-dimensional interaction mode corresponding to the three-dimensional data type. Because the smart pen can thus transition smoothly and directly from the two-dimensional interaction mode to the three-dimensional one, the user can perform various forms of gesture input more conveniently, improving the user experience.
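The mode-switching rule above, comparing the type of newly received input data with the most recent historical type and switching only when they differ, can be sketched as follows. The type labels and mode names are assumptions chosen for illustration:

```python
# Sketch of the interaction-mode switch described above. Type labels ("2d",
# "3d") and mode names are illustrative, not taken from the patent.
MODE_FOR_TYPE = {
    "2d": "two_dimensional_interaction",
    "3d": "three_dimensional_interaction",
}

def switch_mode(current_mode, history_type, new_type):
    """Return the interaction mode to use after receiving data of new_type."""
    if new_type != history_type:          # comparison result: types differ
        return MODE_FOR_TYPE[new_type]    # switch directly to matching mode
    return current_mode                   # same type: keep the current mode
```

The comparison against the historical type avoids redundant switches when consecutive inputs share the same data type.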
Fig. 2 is a schematic diagram of three interaction modes between the smart pen and the second device in an embodiment of the present invention. As shown in Fig. 2, the figure includes a first device (smart pen) 201 and a second device 202, where the first device 201 is in contact with the gripping hand 203 and the non-gripping hand 204 is used to perform gesture operations on the second device 202. The interaction mode indicated by the solid-line symbol is a two-dimensional interaction form in which the gripping hand 203 and the non-gripping hand 204 operate on the second device 202 on a two-dimensional plane.
When the first device 201 is a smart pen, the gripping hand 203 may use the first device 201 to perform pen-control operations on the second device 202 on a two-dimensional plane. When it does so, the touch sensor arranged at the tip of the first device 201 detects the touch data generated when the first device 201 contacts the two-dimensional plane, and the acceleration sensor arranged in the first device 201 measures the acceleration of the first device 201 in different directions to obtain acceleration data. The obtained acceleration data is then integrated twice to obtain the three-axis coordinates of the first device 201 on the two-dimensional plane, so that data such as the tilt angle, rotation, and movement direction of the first device 201 on the plane can be determined from the change of those coordinates. When the non-gripping hand 204 performs a gesture operation on the second device 202, the smart pen may collect the gesture data of the non-gripping hand 204 through the depth camera and obtain the gesture depth information of the non-gripping hand 204, which contains the gesture image information of the non-gripping hand 204 and the relative position information between the non-gripping hand 204 and the second device 202. In this way, two-dimensional interaction between the user's two hands and the second device 202 is realized.
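The double integration mentioned above, recovering displacement from acceleration samples, can be sketched for one axis as follows. The sample values and time step are illustrative; a practical pen-tracking implementation would additionally need bias removal and drift correction, which the description does not detail:

```python
# Minimal sketch of double integration of acceleration samples (trapezoidal
# rule) to recover one-axis displacement, as described above for the pen's
# acceleration sensor. Inputs are hypothetical example data.
def double_integrate(acc, dt):
    """Integrate a list of acceleration samples twice; return displacement."""
    v = 0.0          # velocity accumulator (first integration)
    x = 0.0          # position accumulator (second integration)
    prev_a = acc[0]
    prev_v = v
    for a in acc[1:]:
        v += 0.5 * (prev_a + a) * dt   # trapezoidal step: acceleration -> velocity
        x += 0.5 * (prev_v + v) * dt   # trapezoidal step: velocity -> position
        prev_a, prev_v = a, v
    return x
```

For a constant acceleration of 2 m/s² sampled over 1 s, the result matches the closed form x = ½·a·t² = 1.0 m, since the trapezoidal rule is exact for linear integrands.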
The interaction mode indicated by the dashed symbol is a three-dimensional interaction form in which the gripping hand 203 and the non-gripping hand 204 operate on the second device 202 in three-dimensional space. When the gripping hand 203 performs a pen-control operation on the second device 202 in three-dimensional space, the first device 201 may measure, through the attitude sensor arranged in it, the acceleration and angular velocity that the first device 201 produces in different directions, and obtain a measurement result. By computing on the acceleration data and angular-velocity data in the measurement result, the attitude information of the first device 201 in three-dimensional space is obtained. When the non-gripping hand 204 performs a gesture operation on the second device 202 in three-dimensional space, the smart pen may collect the gesture of the non-gripping hand 204 through the depth camera arranged in the first device 201 and obtain the gesture depth information of the non-gripping hand 204, which contains the gesture image information of the non-gripping hand 204 and the relative position information between the non-gripping hand 204 and the first device 201. In this way, three-dimensional interaction between the user's two hands and the second device is realized.
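The description states only that acceleration and angular velocity are measured and combined into attitude information, without naming a fusion method. One common way to combine the two signals is a complementary filter, sketched below for a single pitch angle; the filter form and the blending coefficient are assumptions for illustration, not the patent's method:

```python
import math

# Hypothetical one-axis attitude fusion: blend the short-term gyroscope
# integral with the long-term gravity direction from the accelerometer.
def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """Return an updated pitch estimate from one gyro/accelerometer sample."""
    gyro_angle = angle + gyro_rate * dt   # integrate angular rate (drifts slowly)
    accel_angle = math.atan2(ax, az)      # gravity direction (noisy but driftless)
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

With alpha near 1, the gyroscope dominates over short time scales while the accelerometer slowly corrects the accumulated drift.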
The interaction mode indicated by the double-line symbol is a three-dimensional gesture interaction form in which the gripping hand 203 and the non-gripping hand 204 cooperate in three-dimensional space to operate on the second device 202. When the gripping hand 203 and the non-gripping hand 204 cooperate in three-dimensional space to perform a gesture operation on the second device 202, the first device 201 may detect its own attitude information through the attitude sensor arranged in it, and determine the coordinate information of the first device 201 from the detected attitude information. When the first device 201 collects the gesture of the non-gripping hand 204 through the depth camera arranged in it, it may, based on the coordinate information of the first device 201, project a pre-designed pattern as a reference image (an encoded light source) toward the non-gripping hand 204 at the position corresponding to that coordinate information. The structured light of the reference image is projected onto the non-gripping hand 204, and the structured-light pattern reflected from the surface of the non-gripping hand 204 is then received. Because the received structured-light pattern deforms according to the three-dimensional shape of the surface of the non-gripping hand 204, the coordinate information of the non-gripping hand 204 in three-dimensional space can be determined from the position and degree of deformation of the received pattern on the depth camera. Then, based on the coordinate information of the first device 201 and the coordinate information of the non-gripping hand 204, the relative position in real space between the non-gripping hand 204 and the gripping hand 203 can be determined. Based on the mapping relationship between relative positions in real space and in virtual space, the virtual relative position information in virtual space is obtained from the mapping library using the real-space relative position between the non-gripping hand 204 and the gripping hand 203. Then, based on the mapping relationship between each piece of virtual relative position information and instructions, the instruction corresponding to the virtual relative position information is obtained from the instruction library, and the instruction is executed. In this way, spatial interaction with the second device through two-handed cooperation in three-dimensional space is realized.
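The lookup chain at the end of this mode, real-space relative position between the hands, to virtual relative position via the mapping library, to instruction via the instruction library, can be sketched as follows. The coordinate values, dictionary keys, and exact-match lookup are illustrative assumptions (a real system would match positions with a tolerance):

```python
# Sketch of the two-library lookup chain described above. Contents are
# hypothetical placeholders for the mapping library and instruction library.
MAPPING_LIBRARY = {(0, 0, 1): "virtual_pos_A"}      # real offset -> virtual position
INSTRUCTION_LIBRARY = {"virtual_pos_A": "zoom_in"}  # virtual position -> instruction

def instruction_from_hands(grip_xyz, free_xyz):
    """Derive the instruction from the two hands' coordinates, if mapped."""
    # relative position of the non-gripping hand with respect to the gripping hand
    offset = tuple(f - g for g, f in zip(grip_xyz, free_xyz))
    virtual = MAPPING_LIBRARY.get(offset)
    if virtual is None:
        return None                      # no mapping: instruction condition not met
    return INSTRUCTION_LIBRARY.get(virtual)
```

A failed lookup at either stage corresponds to the case where the instruction generation condition is not satisfied.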
Fig. 3 is a first schematic structural diagram of the data processing device in an embodiment of the present invention. As shown in Fig. 3, the device includes:

a data acquisition unit 301, configured to obtain first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body;

a determination unit 302, configured to determine, based on the first input data and the second input data, the relative position between the different operating bodies, or the different parts, and the data acquisition device;

an instruction generation unit 303, configured to generate, based on the relative position, the instruction corresponding to the first input data and the second input data when it is determined that the first input data and the second input data satisfy an instruction generation condition; and

an execution unit 304, configured to execute the instruction.
In an embodiment of the present invention, the device further includes:

a detection unit 305, configured to detect the contact state between the different operating bodies, or the different parts, and the data acquisition device;

the data acquisition unit 301 being specifically configured to obtain the first input data and the second input data using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operating bodies, or at least one of the different parts, is in contact with the data acquisition device.
In an embodiment of the present invention, the device further includes:

an establishing unit 306, configured to establish, based on N relative positions between the different operating bodies, or the different parts, and the data acquisition device, the mapping relationship of each relative position in real space and in virtual space, where N is greater than or equal to 1;

the data acquisition unit 301 being further configured to obtain, based on the relative position, a first mapping relationship of the relative position in the real space and in the virtual space;

a matching unit 307, configured to match the first mapping relationship against the mapping relationship of each relative position in real space and in virtual space in the mapping library; and

the determination unit 302 being further configured to determine that the first input data and the second input data satisfy the instruction generation condition when, according to the matching result, the first mapping relationship is successfully matched with the mapping relationship corresponding to at least one relative position in the mapping library.
The instruction generation unit 303 is specifically configured to generate different instructions corresponding to the first input data and the second input data according to the different input forms used by the different operating bodies or the different parts; or to generate different instructions corresponding to the first input data and the second input data according to the different objects targeted by the different operating bodies or the different parts.

When the instruction generation unit 303 generates different instructions according to the different objects targeted by the different operating bodies or the different parts, the determination unit 302 may determine, based on the first input data and the second input data, the object targeted by the different operating bodies or the different parts; the matching unit 307 then matches the object against a preset object; and when the matching result indicates that the object is the preset object, the instruction generation unit 303 generates, from the first input data and the second input data, the instruction corresponding to the object.
In the embodiments of the present invention, the device may be a handheld device such as a handle, a mouse, a trackball, a mobile phone, or a smart pen, or a wearable device such as a smartwatch, a smart ring, a smart bracelet, or a smart glove. By using a handheld or wearable electronic device as the sensing device, and generating corresponding instructions based on the relative spatial position relationship between different operating bodies, or different parts of the same operating body, and the device, mobility is imparted to the sensing device. Because the relative spatial position between the hand and the sensor in the device can be used directly as gesture input data, free-space interaction between the two hands and the electronic device can be realized in a genuine sense, greatly reducing the user fatigue caused by the fixed-space interaction modes used with VR devices in traditional techniques.

Here, the gestures that interact through relative spatial position relationships may be gestures such as scaling, rotation, and translation performed with both hands at arbitrary positions.
When a user performs a zoom operation on a target object, it is difficult to perceive the size of the target object through direct observation of the three-dimensional object. Therefore, when a user performs a grasping operation on a target object using the prior art, the size and distance the user perceives often do not match the size and distance of the actual object, causing the user to fail to grip the target object. In an embodiment of the present invention, by contrast, the user holds or wears the electronic device, and the corresponding instruction is executed according to the relative spatial position between the hand holding the electronic device and the user's hand not holding it. Because this interaction form treats the user's own body as a spatial input dimension, the user can use any gesture to realize both two-dimensional and three-dimensional interaction, and can thus grab objects at any two-handed scale. This simplifies the user's cognition of three-dimensional space and reduces the burden of memorizing fixed gestures.

In an embodiment of the present invention, because the electronic device held or worn by the user is displaced as the operating body moves, the user's interaction-space position with respect to the target object is displaced as well. Therefore, the coordinate changes between the user's two hands and three-dimensional space need not occupy the user's cognitive load separately. Instead, based on the coordinate-change data established when the user first grabs an object, the user independently establishes the coordinate system, forming the mapping relationship of the user's two hands in three-dimensional space. The gesture data generated by the user's two hands in three-dimensional space is then determined through this mapping relationship, and the corresponding gesture instruction is generated.
It should be noted that when the data processing device provided in the above embodiment performs gesture interaction with a device having two-dimensional data and three-dimensional data, the division into the above program modules is only used as an example. In practical applications, the above processing may be distributed among different program modules as needed; that is, the internal structure of the data processing device may be divided into different program modules to complete all or part of the processing described above. In addition, the data processing device provided in the above embodiment belongs to the same concept as the data processing method embodiment described above; its specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 4 is a second schematic structural diagram of the data processing device in an embodiment of the present invention. As shown in Fig. 4, the device includes: a communication module 401, a spatial positioning module 402, an attitude sensing module 403, a gesture sensing module 404, and a touch sensing module 405.

The spatial positioning module 402, attitude sensing module 403, gesture sensing module 404, and touch sensing module 405 are configured to obtain first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body.

The communication module 401 is configured to determine, based on the first input data and the second input data, the relative position between the different operating bodies, or the different parts, and the data acquisition device;

to generate, based on the relative position, the instruction corresponding to the first input data and the second input data when it is determined that the first input data and the second input data satisfy an instruction generation condition; and

to execute the instruction.
In this embodiment of the present invention, the spatial positioning module 402 is specifically configured to measure the positions of the gripping hand and the non-gripping hand in space, so as to determine, according to the measurement result, first coordinate information of the gripping hand and second coordinate information of the non-gripping hand. Here, the first coordinate information is the first input data and the second coordinate information is the second input data.

The attitude sensing module 403 is specifically configured to measure the coordinates of the gripping hand in three-dimensional space; the attitude presented by the gripping hand in three-dimensional space can be determined according to the measurement result, and here the attitude information corresponding to that attitude may be the first input data.

The gesture sensing module 404 is specifically configured to collect the gesture depth data produced by the non-gripping hand, which contains gesture image data and relative position data between the non-gripping hand and the gripping hand; here, the gesture depth data may be the second input data.

The touch sensing module 405 is specifically configured to measure, when the gripping hand uses the first device to perform operations on the second device on a two-dimensional plane, the touch data generated when the first device contacts the two-dimensional plane. Here, the touch data may be the first input data.

The communication module 401 is specifically configured to generate a corresponding instruction according to the detection results of the spatial positioning module 402, attitude sensing module 403, gesture sensing module 404, and touch sensing module 405, and to execute the instruction.
It should be noted that when the data processing device provided in the above embodiment performs gesture interaction with a device having two-dimensional data and three-dimensional data, the division into the above program modules is only used as an example. In practical applications, the above processing may be distributed among different program modules as needed; that is, the internal structure of the data processing device may be divided into different program modules to complete all or part of the processing described above. In addition, the data processing device provided in the above embodiment belongs to the same concept as the data processing method embodiment described above; its specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 5 is a third schematic structural diagram of the data processing device in an embodiment of the present invention. As shown in Fig. 5, the data processing device 500 may be a handle, a mouse, a trackball, a mobile phone, a smart pen, a smartwatch, a smart ring, a smart bracelet, a smart glove, or the like. The data processing device 500 shown in Fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504, and a user interface 503. The various components in the data processing device 500 are coupled through a bus system 505. It can be understood that the bus system 505 is used to realize connection and communication between these components. In addition to a data bus, the bus system 505 includes a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 505 in Fig. 5.

The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touchpad, a touch screen, or the like.
It can be understood that the memory 502 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM, Read Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), an electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), a ferromagnetic random access memory (FRAM, ferromagnetic random access memory), a flash memory (Flash Memory), a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a random access memory (RAM, Random Access Memory), which serves as an external cache. By way of illustrative but non-restrictive description, many forms of RAM are available, such as a static random access memory (SRAM, Static Random Access Memory), a synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory), a dynamic random access memory (DRAM, Dynamic Random Access Memory), a synchronous dynamic random access memory (SDRAM, Synchronous Dynamic Random Access Memory), a double data rate synchronous dynamic random access memory (DDRSDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), an enhanced synchronous dynamic random access memory (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), a synclink dynamic random access memory (SLDRAM, SyncLink Dynamic Random Access Memory), and a direct rambus random access memory (DRRAM, Direct Rambus Random Access Memory). The memory 502 described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The memory 502 in the embodiments of the present invention is used to store various types of data to support the operation of the data processing device 500. Examples of such data include: any computer program for operating on the data processing device 500, such as an operating system 5021 and application programs 5022. The operating system 5021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 5022 may contain various application programs, such as a media player (Media Player) and a browser (Browser), for realizing various application services. A program implementing the method of the embodiments of the present invention may be contained in the application programs 5022.
The methods disclosed in the above embodiments of the present invention may be applied in, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The above processor 501 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 501 may implement or execute each method, step, and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the data processing device 500 may be implemented by one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), general-purpose processors, controllers, microcontrollers (MCU, Micro Controller Unit), microprocessors (Microprocessor), or other electronic elements, for executing the foregoing methods.
Specifically, when the processor 501 runs the computer program, it executes: obtaining first input data and second input data, where the first input data and the second input data are generated by different operating bodies, or by different parts of the same operating body; determining, based on the first input data and the second input data, the relative position between the different operating bodies, or the different parts, and the data acquisition device; generating, based on the relative position, the instruction corresponding to the first input data and the second input data when it is determined that the first input data and the second input data satisfy an instruction generation condition; and executing the instruction.

When the processor 501 runs the computer program, it also executes: detecting the contact state between the different operating bodies, or the different parts, and the data acquisition device; and obtaining the first input data and the second input data using different types of sensors in the data acquisition device when it is determined, based on the contact state, that at least one of the different operating bodies, or at least one of the different parts, is in contact with the data acquisition device.

When the processor 501 runs the computer program, it also executes: establishing, based on N relative positions between the different operating bodies, or the different parts, and the data acquisition device, the mapping relationship of each relative position in real space and in virtual space, where N is greater than or equal to 1; obtaining, based on the relative position, a first mapping relationship of the relative position in the real space and in the virtual space; matching the first mapping relationship against the mapping relationship of each relative position in real space and in virtual space in the mapping library; and determining that the first input data and the second input data satisfy the instruction generation condition when, according to the matching result, the first mapping relationship is successfully matched with the mapping relationship corresponding to at least one relative position in the mapping library.

When the processor 501 runs the computer program, it also executes: generating different instructions corresponding to the first input data and the second input data according to the different input forms used by the different operating bodies or the different parts; or generating different instructions corresponding to the first input data and the second input data according to the different objects targeted by the different operating bodies or the different parts.

When the processor 501 runs the computer program, it also executes: determining, based on the first input data and the second input data, the object targeted by the different operating bodies or the different parts; matching the object against a preset object; and generating, from the first input data and the second input data, the instruction corresponding to the object when the matching result indicates that the object is the preset object.
In the exemplary embodiment, the embodiment of the invention also provides a kind of computer readable storage medium, for example including The memory 502 of computer program, above-mentioned computer program can be executed by the processor 501 of data processing equipment 500, to complete Step described in preceding method.Computer readable storage medium can be FRAM, ROM, PROM, EPROM, EEPROM, Flash The memories such as Memory, magnetic surface storage, CD or CD-ROM;It is also possible to include one of above-mentioned memory or any group The various equipment closed, such as mobile phone, computer, tablet device, personal digital assistant.
A computer-readable storage medium having a computer program stored thereon, the computer program, when run by a processor, performing: obtaining first input data and second input data, wherein the first input data and the second input data are generated by different operating bodies, or are generated by different parts of a same operating body; determining, based on the first input data and the second input data, a relative position between the different operating bodies, or the different parts, and a data acquisition device; generating, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data; and executing the instruction.
When run by the processor, the computer program further performs: detecting a contact state between the different operating bodies, or the different parts, and the data acquisition device; and obtaining, when it is determined based on the contact state that at least one of the different operating bodies, or at least one of the different parts, is in contact with the data acquisition device, the first input data and the second input data by using different types of sensors in the data acquisition device.
When run by the processor, the computer program further performs: establishing, based on N relative positions between the different operating bodies, or the different parts, and the data acquisition device, a mapping relation of each relative position in a real space and in a virtual space, wherein N is greater than or equal to 1; obtaining, based on the relative position, a first mapping relation of the relative position in the real space and in the virtual space; matching the first mapping relation against the mapping relations of the relative positions in the real space and in the virtual space stored in a mapping library; and determining, when it is determined according to a matching result that the first mapping relation successfully matches a mapping relation corresponding to at least one relative position in the mapping library, that the first input data and the second input data satisfy the instruction generation condition.
When run by the processor, the computer program further performs: generating, according to different input forms adopted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data; or generating, according to different objects targeted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data.
When run by the processor, the computer program further performs: determining, based on the first input data and the second input data, the object targeted by the different operating bodies or the different parts; matching the object against a preset object; and generating, when it is determined according to a matching result that the object is the preset object, an instruction corresponding to the object from the first input data and the second input data.
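The overall flow described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; all names (`InputData`, `process`, the tuple-offset notion of "relative position", and the set of known positions standing in for the instruction generation condition) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class InputData:
    source: str      # which operating body (or part of a body) produced the data
    position: tuple  # sampled coordinates relative to the acquisition device

def relative_position(first, second):
    # Derive the relative position between the two sources as the
    # component-wise offset of their sampled coordinates.
    return tuple(a - b for a, b in zip(first.position, second.position))

def process(first, second, known_positions):
    # Generate the instruction only when the instruction generation
    # condition holds; here the condition is simply that the relative
    # position is one of the known positions.
    rel = relative_position(first, second)
    if rel in known_positions:
        return ("instruction", first.source, second.source, rel)
    return None
```

Under this sketch, two inputs from, say, a stylus and a finger produce an instruction only when their offset matches a known configuration.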
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can be readily conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (10)

1. A data processing method, the method comprising:
obtaining first input data and second input data, wherein the first input data and the second input data are generated by different operating bodies, or are generated by different parts of a same operating body;
determining, based on the first input data and the second input data, a relative position between the different operating bodies, or the different parts, and a data acquisition device;
generating, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data; and
executing the instruction.
2. The method according to claim 1, wherein before the obtaining first input data and second input data, the method further comprises:
detecting a contact state between the different operating bodies, or the different parts, and the data acquisition device;
and correspondingly, the obtaining first input data and second input data comprises:
obtaining, when it is determined based on the contact state that at least one of the different operating bodies, or at least one of the different parts, is in contact with the data acquisition device, the first input data and the second input data by using different types of sensors in the data acquisition device.
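Claim 2 gates acquisition on contact and draws the two inputs from different sensor types. A hypothetical sketch (the sensor-type names `"touch"` and `"motion"`, and the dictionary-based device model, are illustrative assumptions, not from the patent):

```python
# Sampling starts only once some operating body (or part) touches the device,
# and the two inputs come from two different sensor types on that device.
def acquire_inputs(contact_states, sensors):
    """contact_states: operating body (or part) -> bool (touching the device);
    sensors: sensor type -> zero-argument read function."""
    if not any(contact_states.values()):
        return None  # nothing touches the acquisition device yet
    first = sensors["touch"]()    # e.g. a capacitive touch sensor
    second = sensors["motion"]()  # a different sensor type for the second input
    return first, second
```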
3. The method according to claim 1, wherein before the obtaining first input data and second input data, the method comprises:
establishing, based on N relative positions between the different operating bodies, or the different parts, and the data acquisition device, a mapping relation of each relative position in a real space and in a virtual space, wherein N is greater than or equal to 1;
and correspondingly, the determining based on the relative position that the first input data and the second input data satisfy the instruction generation condition comprises:
obtaining, based on the relative position, the mapping relation of the relative position in the real space and in the virtual space from a mapping library, to obtain an obtaining result; and
determining, when the obtaining result indicates that the mapping relation of the relative position in the real space and in the virtual space is successfully obtained from the mapping library, that the first input data and the second input data satisfy the instruction generation condition.
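The mapping-library lookup of claim 3 reduces to a keyed lookup whose success or failure is the "obtaining result". A hypothetical sketch (the concrete positions and virtual-space labels in `mapping_library` are invented for illustration):

```python
# The condition is satisfied exactly when the observed relative position has a
# real-space -> virtual-space mapping established beforehand in the library.
mapping_library = {
    # relative position in real space -> mapped pose in virtual space
    (0, 0, 10): "near-plane cursor",
    (0, 0, 50): "far-plane cursor",
}

def condition_met(rel_pos):
    # The obtaining result is "successful" only for registered positions.
    return mapping_library.get(rel_pos) is not None
```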
4. The method according to claim 1, wherein the generating an instruction corresponding to the first input data and the second input data comprises:
generating, according to different input forms adopted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data;
or generating, according to different objects targeted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data.
5. The method according to claim 4, wherein the generating, according to different objects targeted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data comprises:
determining, based on the first input data and the second input data, the object targeted by the different operating bodies or the different parts;
matching the object against a preset object; and
generating, when it is determined according to a matching result that the object is the preset object, an instruction corresponding to the object from the first input data and the second input data.
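Claims 4–5 make the generated instruction depend on the targeted object, and generate one only when that object matches a preset object. A hypothetical sketch (the preset object names and the dictionary instruction format are illustrative assumptions):

```python
# The same pair of inputs yields different instructions depending on the
# object they target; non-preset objects yield no instruction at all.
PRESET_OBJECTS = {"virtual_keyboard", "virtual_canvas"}

def generate_instruction(first, second, target):
    if target not in PRESET_OBJECTS:  # match against the preset objects
        return None                   # not a preset object: no instruction
    return {"target": target, "inputs": (first, second)}
```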
6. A data processing apparatus, the apparatus comprising:
an obtaining unit, configured to obtain first input data and second input data, wherein the first input data and the second input data are generated by different operating bodies, or are generated by different parts of a same operating body;
a determination unit, configured to determine, based on the first input data and the second input data, a relative position between the different operating bodies, or the different parts, and a data acquisition device;
a generation unit, configured to generate, when it is determined based on the relative position that the first input data and the second input data satisfy an instruction generation condition, an instruction corresponding to the first input data and the second input data; and
an execution unit, configured to execute the instruction.
7. The apparatus according to claim 6, further comprising:
a detection unit, configured to detect a contact state between the different operating bodies, or the different parts, and the data acquisition device;
wherein the obtaining unit is specifically configured to obtain, when it is determined based on the contact state that at least one of the different operating bodies, or at least one of the different parts, is in contact with the data acquisition device, the first input data and the second input data by using different types of sensors in the data acquisition device.
8. The apparatus according to claim 6, further comprising:
an establishing unit, configured to establish, based on N relative positions between the different operating bodies, or the different parts, and the data acquisition device, a mapping relation of each relative position in a real space and in a virtual space, wherein N is greater than or equal to 1;
wherein the obtaining unit is further configured to obtain, based on the relative position, a first mapping relation of the relative position in the real space and in the virtual space;
a matching unit, configured to match the first mapping relation against the mapping relations of the relative positions in a mapping library; and
the determination unit is specifically configured to determine, when it is determined according to a matching result that the first mapping relation successfully matches a mapping relation of a relative position in the mapping library, that the first input data and the second input data satisfy the instruction generation condition.
9. The apparatus according to claim 6, wherein the generation unit is specifically configured to: generate, according to different input forms adopted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data; or generate, according to different objects targeted by the different operating bodies or the different parts, different instructions corresponding to the first input data and the second input data.
10. A data processing apparatus, the apparatus comprising a memory and a processor;
wherein the memory is configured to store a computer program runnable on the processor; and
the processor is configured to perform, when running the computer program, the steps of the method according to any one of claims 1 to 5.
CN201910116185.5A 2019-02-15 2019-02-15 Data processing method and device Active CN109960404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910116185.5A CN109960404B (en) 2019-02-15 2019-02-15 Data processing method and device

Publications (2)

Publication Number Publication Date
CN109960404A true CN109960404A (en) 2019-07-02
CN109960404B CN109960404B (en) 2020-12-18

Family

ID=67023703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910116185.5A Active CN109960404B (en) 2019-02-15 2019-02-15 Data processing method and device

Country Status (1)

Country Link
CN (1) CN109960404B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111399654A (en) * 2020-03-25 2020-07-10 Oppo广东移动通信有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN112328156A (en) * 2020-11-12 2021-02-05 维沃移动通信有限公司 Input device control method and device and electronic device
CN115494971A (en) * 2022-10-19 2022-12-20 未来创建(深圳)科技有限公司 Electronic touch pen and input system thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101772750A (en) * 2007-03-26 2010-07-07 艾登特技术股份公司 Mobile communication device and input device for the same
US20110096072A1 (en) * 2009-10-27 2011-04-28 Samsung Electronics Co., Ltd. Three-dimensional space interface apparatus and method
CN103443746A (en) * 2010-12-22 2013-12-11 Z空间股份有限公司 Three-dimensional tracking of a user control device in a volume
CN104238736A (en) * 2013-06-17 2014-12-24 三星电子株式会社 Device, method, and system to recognize motion using gripped object
CN107783674A * 2016-08-27 2018-03-09 杨博 Augmented reality interaction method and motion-sensing stylus
CN107817911A * 2017-09-13 2018-03-20 杨长明 Terminal control method and control device therefor
CN107820588A * 2015-06-16 2018-03-20 三星电子株式会社 Electronic device including a strap and control method therefor
CN108319369A * 2018-02-01 2018-07-24 网易(杭州)网络有限公司 Driving interaction method and device, storage medium and processor
CN108519817A * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Augmented-reality-based interaction method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN109960404B (en) 2020-12-18

Similar Documents

Publication Publication Date Title
US10210202B2 (en) Recognition of free-form gestures from orientation tracking of a handheld or wearable device
CN210573659U (en) Computer system, head-mounted device, finger device, and electronic device
KR101791366B1 (en) Enhanced virtual touchpad and touchscreen
EP2987063B1 (en) Virtual tools for use with touch-sensitive surfaces
US10317997B2 (en) Selection of optimally positioned sensors in a glove interface object
CN110389658A Systems, apparatuses and methods for providing immersive reality interface modes
KR101318244B1 System and Method for Implementing 3-Dimensional User Interface
KR20160080109A (en) Systems and techniques for user interface control
CN109960404A Data processing method and device
CN110389659A Systems and methods for providing dynamic haptic playback for augmented or virtual reality environments
US5982353A (en) Virtual body modeling apparatus having dual-mode motion processing
US11397478B1 (en) Systems, devices, and methods for physical surface tracking with a stylus device in an AR/VR environment
CN106445118A (en) Virtual reality interaction method and apparatus
Choi et al. 3D hand pose estimation on conventional capacitive touchscreens
Oh et al. FingerTouch: Touch interaction using a fingernail-mounted sensor on a head-mounted display for augmented reality
CN113508355A (en) Virtual reality controller
Bai et al. Asymmetric Bimanual Interaction for Mobile Virtual Reality.
Breslauer et al. Leap motion sensor for natural user interface
CN107633551B Virtual keyboard display method and device
KR102322968B1 (en) a short key instruction device using finger gestures and the short key instruction method using thereof
Varga et al. Survey and investigation of hand motion processing technologies for compliance with shape conceptualization
KR101605740B1 (en) Method for recognizing personalized gestures of smartphone users and Game thereof
CN117251058B (en) Control method of multi-information somatosensory interaction system
Tao et al. Human-Computer Interaction Using Fingertip Based on Kinect
Di Qi et al. Towards Intuitive 3D Interactions in Virtual Reality: A Deep Learning-Based Dual-Hand Gesture Recognition Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant