CN106873783A - Information processing method, electronic equipment and input unit - Google Patents
- Publication number
- CN106873783A (application number CN201710197508.9A)
- Authority
- CN
- China
- Prior art keywords
- input unit
- target area
- virtual scene
- kinematic parameter
- electronic equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Abstract
This application provides an information processing method, electronic equipment and an input unit. When the input unit of the electronic equipment is in a first operating mode, the motion parameters of the input unit are obtained and, according to those parameters, a sub-region of the virtual scene currently output by the electronic equipment is selected as a target region. A predetermined operation can then be performed on the objects in the target region based on an operation instruction of the input unit. This meets the user's need for fine-grained processing of a sub-region of a virtual scene and improves the user's operating experience.
Description
Technical field
The present application relates generally to the technical field of virtual reality, and more particularly to an information processing method, electronic equipment and an input unit.
Background art
Virtual reality (VR) technology is a new human-machine interface technology of the Internet era. VR equipment generates a three-dimensional virtual environment that the user perceives through simulated visual, auditory, tactile and other sensory cues, and with which the user can also interact through movement, expression, voice, gesture, gaze and similar means, producing an immersive experience.
However, it has been found that when a user interacts with the virtual environment simulated by VR equipment, operations such as zooming in or out can only be applied to the virtual environment as a whole; fine-grained processing of individual virtual objects is not possible, which degrades the user's operating experience.
Summary of the invention
In view of this, the present application provides an information processing method, electronic equipment and an input unit, solving the technical problem that existing VR equipment cannot apply fine-grained processing to individual virtual objects.
To solve the above technical problem, the present application provides the following technical solutions:
An information processing method, applied to electronic equipment, the method comprising:
obtaining motion parameters of an input unit that is in a first operating mode;
selecting, according to the motion parameters, a target region in the virtual scene currently output by the electronic equipment, the target region being a sub-region of the virtual scene;
controlling the input unit to perform a predetermined operation on the objects in the target region.
Preferably, selecting, according to the motion parameters, the target region in the virtual scene currently output by the electronic equipment includes:
when the motion parameters are verified to satisfy a preset condition, selecting as the target region the region of the currently output virtual scene that corresponds to the motion parameters.
Preferably, selecting, according to the motion parameters, the target region in the virtual scene currently output by the electronic equipment includes:
when the motion parameters show that the movement trajectory of the input unit forms a closed loop, taking as the target region the region of the currently output virtual scene that lies within the range mapped by the closed loop;
when the motion parameters show that the dwell time of the input unit's touch point reaches a first threshold, taking as the target region the region of the currently output virtual scene mapped by a preset range centred on the touch point;
and/or, when the motion parameters show that the input unit's operation on a predetermined operating point satisfies a preset requirement, taking as the target region a preset region corresponding to that operation.
Preferably, before performing the predetermined operation on the objects in the target region based on the operation instruction of the input unit, the method further includes:
adjusting the display state of the objects in the target region so that, after adjustment, it differs from the display state of the objects in the other regions of the virtual scene.
Preferably, performing the predetermined operation on the objects in the target region based on the operation instruction of the input unit includes:
when the input unit is switched to a second operating mode, adjusting, based on the operation instruction of the input unit, attributes of the objects in the target region of the virtual scene, the attributes including display state and/or display position.
An information processing method, applied to an input unit, the method comprising:
detecting a first operation on the input unit, and controlling the input unit to enter a first operating mode;
obtaining, in the first operating mode, motion parameters of the input unit;
determining, according to the motion parameters, a target region corresponding to the motion parameters in the virtual scene associated with the input unit, the target region being a sub-region of the virtual scene;
controlling the input unit to perform a predetermined operation on the objects in the target region.
Preferably, obtaining the motion parameters of the input unit includes:
obtaining the data collected by a sensor and sending the data to the electronic equipment;
receiving the movement trajectory of the input unit that the electronic equipment identifies from the data.
Preferably, before controlling the input unit to perform the predetermined operation on the objects in the target region, the method further includes:
detecting a second operation on the input unit, and controlling the input unit to switch to a second operating mode;
when the second operating mode indicates that the objects in the target region are in an editable state, controlling the input unit to perform the predetermined operation on the objects in the target region includes:
detecting an operation instruction of the input unit and using the operation instruction to edit the objects in the target region.
Electronic equipment, comprising:
a display, configured to output a virtual scene;
a communication module, configured to obtain motion parameters of an input unit of the electronic equipment, the input unit being in a first operating mode;
a processor, configured to select, according to the motion parameters, a target region in the virtual scene and, based on an operation instruction of the input unit, to control the input unit to perform a predetermined operation on the objects in the target region, the target region being a sub-region of the virtual scene.
An input unit, comprising:
a controller, configured to control the input unit to enter a first operating mode when a first operation on the input unit is detected;
a sensor, configured to obtain motion parameters of the input unit in the first operating mode;
a processor, configured to determine, according to the motion parameters, a target region corresponding to the motion parameters in the virtual scene associated with the input unit, the input unit performing a predetermined operation on the objects in the target region, the target region being a sub-region of that virtual scene.
As can be seen, compared with the prior art, the present application provides an information processing method, electronic equipment and an input unit. When the input unit of the electronic equipment is in the first operating mode, the motion parameters of the input unit are obtained and, according to those parameters, a sub-region of the virtual scene currently output by the electronic equipment is selected as the target region. A predetermined operation can then be performed on the objects in the target region based on an operation instruction of the input unit, meeting the user's need for fine-grained processing of a sub-region of the virtual scene and improving the user's operating experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only embodiments of the invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of an embodiment of an information processing method provided by the present application;
Fig. 2 is a schematic diagram of selecting a target region of a virtual scene provided by the present application;
Fig. 3 is a schematic diagram of another way of selecting a target region of a virtual scene provided by the present application;
Fig. 4 is a flow chart of a preferred embodiment of an information processing method provided by the present application;
Fig. 5 is a flow chart of another embodiment of an information processing method provided by the present application;
Fig. 6 is a partial flow chart of another embodiment of an information processing method provided by the present application;
Fig. 7 is a schematic structural diagram of an embodiment of electronic equipment provided by the present application;
Fig. 8 is a schematic structural diagram of another embodiment of electronic equipment provided by the present application;
Fig. 9 is a schematic structural diagram of an embodiment of an input unit provided by the present application;
Fig. 10 is a schematic structural diagram of an embodiment of an information processing system provided by the present application.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
The present application provides an information processing method, electronic equipment and an input unit. When a first operation on the input unit is detected and the input unit is controlled to enter the first operating mode, the motion parameters of the input unit are obtained and a sub-region of the virtual scene currently output by the electronic equipment is selected accordingly as the target region. The input unit is then controlled to perform a predetermined operation on the objects in the target region, meeting the user's need for fine-grained processing of a sub-region of the virtual scene and improving the user's operating experience.
To make the above objects, features and advantages of the invention clearer and easier to understand, the invention is described in further detail below with reference to the accompanying drawings and specific implementations.
Fig. 1 is a flow chart of an embodiment of an information processing method provided by the present application. The method can be applied to electronic equipment, i.e. equipment incorporating virtual reality technology; the present application does not limit the concrete structure of the electronic equipment. In practical applications, the method may comprise the following steps:
Step S11: Obtain the motion parameters of the input unit of the electronic equipment, the input unit being in the first operating mode.
In this embodiment, for the virtual scene output by the electronic equipment, when the user needs to interact with the virtual scene — for example to change the viewing angle, to zoom the scene in or out, or to manage it at a finer granularity and so improve the user experience — the user can perform a first operation on the input unit so that the input unit enters the first operating mode. Afterwards, the motion parameters of the input unit can be determined from the data collected by the sensors on the input unit.
More specifically, when the input unit is an external input device such as a handle, a data glove or a joystick, corresponding operation buttons can be provided on the input unit according to its operating characteristics. Thus, when a sub-region of the current virtual scene needs to be processed, the user can press the corresponding operation button so that the input unit enters the first operating mode, whereupon the user can process the sub-region of the current virtual scene through the input unit. The concrete processing procedure is described below and is not detailed again here.
It should be noted that the present application does not limit the structure of the input unit or the way it is controlled to enter the first operating mode; neither is limited to the implementations enumerated above.
In addition, in the normal state the input unit operates on the virtual scene as a whole; that is, the operation instructions output through the above operation buttons apply to the entire virtual scene. Once the input unit enters the first operating mode, it is allowed to select its operating region — a sub-region of the current virtual scene is selected as the operating region of the input unit — so that subsequent operation instructions output through the operation buttons are applied to the virtual objects in that sub-region, meeting the user's requirements. The first operating mode can therefore be understood as a mode in which the input unit is allowed to select its operating region in the current virtual scene; the present application does not specifically limit this.
Optionally, the input unit may also be a body part of the user wearing the electronic equipment. By detecting the body part, its current pose is recognised, and the current pose together with a preset correspondence is used to select the operating region of the current virtual scene, among other things. The present application does not limit how changes in the pose of the body part are detected: the current pose may be recognised by invisible-light techniques, or from the data collected by various sensors arranged on the body part, and so on; these are not detailed one by one here.
Based on the above analysis, the motion parameters may include the pose changes, movement trajectory and/or timing information of the input unit, among other things. The present application does not limit their concrete content, which can be determined according to actual needs and factors such as the concrete structure of the input unit.
Step S12: Select, according to the motion parameters, the target region in the virtual scene currently output by the electronic equipment.
In this embodiment, the selected target region may be any sub-region of the virtual scene currently presented to the user's field of view by the electronic equipment; which sub-region it is can be determined according to the user's actual needs. Moreover, in practical applications the target region can be adjusted as the user's needs change; it is not fixed.
Optionally, in practical applications, after the motion parameters of the input unit are received, it can be verified whether they satisfy a preset condition; when they do, the region of the virtual scene currently output by the electronic equipment that corresponds to the motion parameters can be selected as the target region.
The preset condition can be determined based on the concrete content of the received motion parameters of the input unit, and may differ for different motion parameters. The target region can be determined by, but is not limited to, the several ways set out in the examples below.
Mode one:
When the received motion parameters of the input unit include position-change information or a movement trajectory of the input unit, it can be verified that the motion parameters show the movement trajectory of the input unit forming a closed loop, and the region of the virtual scene currently output by the electronic equipment that lies within the range mapped by the closed loop is taken as the target region.
More specifically, in practical applications, after the input unit enters the first operating mode, the user can delineate the target region — the virtual objects in the current virtual scene that need processing — by moving the input unit. Taking a handle as the input unit as an example: holding the handle, after pressing a select button the user can draw a selection frame — rectangular, circular or of another shape, which the present application does not limit — over the mapped plane of the virtual objects requiring refined processing. As shown in Fig. 2, when the user needs to process the stool 21 of the current virtual scene, the user can draw a selection frame 22 enclosing the stool 21, indicating that the region where the stool 21 is located is to be selected as the target region. The selection of other regions is similar and is not illustrated one by one here. Likewise, selecting the target region with other input units proceeds much as with the handle and is not detailed again.
In practical applications, the selection frame the user draws with the input unit — the closed loop formed by the movement of the input unit — can be displayed directly in the current virtual scene, as shown in Fig. 2, so that the user can directly judge whether the selected region is correct; but this is not a limitation.
Optionally, for this way of determining the target region, the position coordinates in the received motion parameters of the input unit can be used to determine the coordinate range of the closed loop formed by the user moving the input unit. The coordinates of each virtual object in the virtual scene currently output by the electronic equipment are then compared with that coordinate range, and according to the comparison result the region where the virtual objects within the coordinate range are located is selected as the target region.
It should be noted that the present application does not limit the detailed process of determining the target region to Mode one. Besides the above way, the movement trajectory of the input unit can also be recognised from the data collected by sensors arranged on the input unit, and the corresponding target region in the current virtual scene determined from it.
Mode two:
When the received motion parameters of the input unit include the position at which the touch point of the input unit dwells and the time the touch point stays at that position, then, when the motion parameters are verified to show that the dwell time of the touch point reaches a first threshold, the region of the virtual scene currently output by the electronic equipment that is mapped by a preset range centred on the touch point can be taken as the target region.
More specifically, in practical applications, after the input unit enters the first operating mode, the input unit can be moved so that its touch point reaches the region of the current virtual scene the user needs to process. Still taking Fig. 2 as an example, the touch point of the input unit can be moved to the stool 21 and kept there for a certain time, so that the processor of the input unit can determine that the user needs to process the stool 21 in the current virtual scene.
The touch point of the input unit can be presented in the current virtual scene, so that the user intuitively knows which virtual object or objects the input unit is about to process and can adjust in time if the selected target region is wrong; the present application does not specifically limit this.
In addition, the first threshold and the preset range centred on a given point can be determined according to actual needs, and their concrete values are adjustable. Optionally, the preset range can be determined from the structural features of the virtual object at the position of the input unit's touch point. Different dwell times may also be preset to correspond to different preset ranges: when the user needs a larger target region, the touch point of the input unit can stay at the centre of that region for a longer time; conversely, the dwell time of the touch point can be shortened accordingly, and so on. The present application does not limit the way the preset range is determined.
Optionally, in each of the above embodiments, for either the closed loop formed in Mode one or the preset range determined in Mode two, the virtual scene can first be mapped onto the plane where the closed loop or preset range lies, after which the region formed by the virtual objects located within the closed loop or preset range is determined as the target region; but this is not a limitation.
For the input unit in Mode two, an external input device such as the handle, data glove or joystick mentioned above can still be used. A body part of the user can of course also serve as the input unit, the target region in the current virtual scene being selected by detecting the pose information of that body part. Specifically, when the current pose of the user's body part is a preset pose, it indicates that the virtual scene output by the electronic equipment enters a selection state, and the user can select the virtual objects that need processing.
Still taking the virtual scene shown in Fig. 2 as an example, when the user's hand 23 is detected in the pose shown in Fig. 2, it indicates that the current virtual scene enters the selectable state. Afterwards, the movement trajectory of the forefinger of the user's hand can be detected, or the position of the forefinger and whether its dwell time at that position reaches the first threshold. If it does, it indicates that the user needs to process the stool 24 in the virtual scene — that is, the region where the stool 24 is located is the target region. It should be noted that the user's body part is not limited to the hand in this example and can also be another body part, which the present application does not limit; the process of selecting a target region with another body part is similar and is not detailed one by one here.
As for detecting the pose information of the user's body part, invisible light can be used to detect the position and actions of the body part, or the pose of the body part can be recognised from the data collected by sensors arranged on the body part, and so on; the present application does not specifically limit this.
Mode three:
The present application can also control a predetermined operating point to be output in the virtual scene, for example a predetermined operation button, though it is not confined to this. In this case, the received motion parameters of the input unit can include the input unit's operation on the predetermined operating point. In this embodiment, when the motion parameters show that the input unit's operation on the predetermined operating point satisfies a preset requirement, a preset region corresponding to that operation can be taken as the target region; the present application does not specifically limit this.
Optionally, in practical applications, in a virtual scene such as that shown in Fig. 3, the user can rotate through a certain angle about the predetermined operating point A according to actual needs, so that the region where the virtual objects within the range swept by that angle are located is taken as the target region. In this way, when the user needs to process the chair 31 in the virtual scene, a rotation through a certain angle centred on the predetermined operating point A, as shown in Fig. 3, can sweep the rotation angle across the chair 31, taking it as the target region. Other virtual objects can be selected in the same way, but the selection is not limited to this.
As another embodiment of the present application, the position of the predetermined operating point can also be moved according to the target region to be selected in the current virtual scene, after which the target region can still be selected in the manner described above. The predetermined operating point can of course also be moved close to the virtual objects to be selected and, after the selection state is entered, moved a further distance, the virtual objects corresponding to that distance being taken as the target region. As shown in Fig. 3, after the predetermined operating point is moved to B and the selection state is entered, moving the predetermined operating point from position B to position C makes the region where the virtual object 32 corresponding to that distance is located the target region.
It should be noted that the way the present application selects the target region in the current virtual scene through operations on the predetermined operating point is not limited to the several ways enumerated above, which are not exhaustively listed here.
Step S13: Perform a predetermined operation on the objects in the target region based on an operation instruction of the input unit.
The predetermined operation can be a zoom-in or zoom-out operation, a delete operation, an operation adjusting the display state (such as brightness, colour or transparency), and so on; correspondingly, the operation instruction obtained from the input unit can be a zoom-in or zoom-out instruction, a delete instruction, a display-state adjustment instruction, and so on. The present application does not limit the content of the operation instruction, which can be determined according to actual needs.
In practical applications, after the target region of the current virtual scene is selected, when the input unit is an external input device, the select button can be released so that the input unit switches to the second operating mode — for example exiting the local selection state, or entering an editable state. The different operation buttons provided on the input unit can then be used: the user presses the corresponding operation button according to actual needs, the input unit generates the operation instruction corresponding to that button and sends it to the electronic equipment, and the electronic equipment uses the operation instruction to carry out the corresponding processing of the target objects, meeting the user's needs.
Of course, the present application can also determine the corresponding operation instruction from information such as different movement trajectories of the input unit, and so process the objects in the target region. Alternatively, a preset motion pose of the user's body part can be detected and the operation instruction corresponding to that pose obtained, again processing the objects in the target region; in this case, different motion poses can be preset to correspond to different operation instructions — that is, correspondences between the various motion poses and the various operation instructions are established — but this is not a limitation.
As another embodiment of the present application, after the target region to be processed in the current virtual scene is determined, the electronic equipment can be triggered to output a corresponding operation panel; the user can then process the objects in the target region according to the prompts of the operation panel, meeting the user's needs.
Optionally, in order to let the user determine the objects in the target area more intuitively, after the target area in the current virtual scene has been selected, i.e., before step S13, the display state of the objects in the target area may also be adjusted so that the adjusted display state of the objects in the target area differs from the display state of the objects in the other regions of the virtual scene, but this is not limiting.
In summary, in the present embodiment, after the input unit enters the first working mode, the kinematic parameters of the input unit can be obtained so that a subregion of the virtual scene currently output by the electronic device is correspondingly selected as the target area, and, based on the operation instruction of the input unit, a predetermined operation on the objects in the target area is achieved. This meets the user's demand for refined processing of a local region of the virtual scene, improves the user's operating experience, and is conducive to the market expansion of the electronic device.
Based on the foregoing description, Fig. 4 shows a flowchart of a preferred embodiment of the information processing method provided by the present application. The method is applied to an electronic device incorporating virtual reality technology and may specifically include the following steps:
Step S41: Obtain the kinematic parameters of the input unit in the first working mode.
In practical applications, the electronic device may wirelessly obtain the kinematic parameters processed by the input unit, may directly read the kinematic parameters collected by the input unit, or may directly detect the kinematic parameters of the input unit; the present application does not specifically limit this.
Step S42: Using the position coordinates in the kinematic parameters, determine the coordinate range of the closed loop formed by the movement trajectory of the input unit.
The position coordinates may be collected by a corresponding position sensor, or the position coordinates of the input unit may be detected by a positioning system; the present application does not limit this.
Step S43: Compare the position coordinates of each virtual object in the virtual scene currently output by the electronic device with the coordinate range, and select the region where the virtual objects within the coordinate range are situated as the target area.
In the present embodiment, after the electronic device outputs the virtual scene, the position coordinates of the virtual objects of the virtual scene are usually fixed; the position coordinates may be three-dimensional coordinates, and the present application does not limit the comparison process in step S43.
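The application leaves the comparison in step S43 unspecified. A minimal sketch, assuming the closed loop is projected onto a plane and treated as a polygon and using a standard ray-casting point-in-polygon test (all function names and scene data below are illustrative, not taken from the application), might be:

```python
def point_in_loop(pt, loop):
    """Ray-casting test: is 2D point pt inside the closed polygon 'loop'?"""
    x, y = pt
    inside = False
    n = len(loop)
    for i in range(n):
        x1, y1 = loop[i]
        x2, y2 = loop[(i + 1) % n]
        # Toggle on each edge crossed by a horizontal ray cast to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def select_target_objects(objects, loop):
    """Return names of virtual objects whose position falls inside the loop."""
    return [name for name, pos in objects.items() if point_in_loop(pos, loop)]

if __name__ == "__main__":
    # A square loop traced by the input unit, and three scene objects.
    loop = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
    scene = {"cube": (1.0, 1.0), "lamp": (3.5, 2.0), "tree": (6.0, 1.0)}
    print(select_target_objects(scene, loop))  # ['cube', 'lamp']
```

In a real implementation the same test would be applied per object against the loop's coordinate range, with the third coordinate handled according to the scene's projection.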
Step S44: Adjust the display state of the objects in the target area so that the adjusted display state of the objects in the target area differs from the display state of the objects in the other regions of the virtual scene.
In practical applications, display attributes such as the background color or brightness of the virtual objects in the target area may be adjusted; the present application does not limit this, as long as those objects can be distinguished from the virtual objects in the other regions.
Optionally, after the target area in the current virtual scene has been determined, it may also be distinguished from the other regions by other means, for example by a voice prompt that announces the position of the target area, so that the user can clearly know its specific position. It can be seen that the present application does not limit the way in which the target area is distinguished from the other regions of the current virtual scene.
Step S45: When the input unit switches to the second working mode, obtain the current operation instruction of the input unit.
For the way the working mode of the input unit is switched, reference may be made to the description of the corresponding part of the above embodiment, which is not repeated here.
Optionally, the obtained current operation instruction may include a zoom-in instruction, a zoom-out instruction, a delete instruction, a display-state adjustment instruction, etc., and may specifically be determined according to the user's processing requirements for the virtual objects in the target area; the present application does not limit the specific content of the current operation instruction.
Step S46: Using the current operation instruction, perform the processing corresponding to the current operation instruction on the objects in the target area.
As described above, if the current operation instruction is a zoom-in or zoom-out instruction, the virtual objects in the target area can be enlarged or reduced based on it, i.e., the display area of those virtual objects is correspondingly enlarged or reduced, thereby meeting the user's requirements. If the current operation instruction is a delete instruction, the virtual objects in the target area can be deleted directly. Similarly, if the current operation instruction is a display-state adjustment instruction, the display state of the virtual objects in the target area can be adjusted according to the specific display content to be changed, for example their color or brightness; the present application does not limit this.
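The handling in step S46 amounts to dispatching on the instruction type. A sketch under assumed instruction names and an assumed per-object display-state model (neither is specified by the application) could be:

```python
def apply_instruction(instruction, target_objects, factor=2.0):
    """Apply one operation instruction to every object in the target area.

    Each object is a dict with 'scale', 'color', and 'brightness' keys;
    the instruction names are illustrative, not taken from the application.
    """
    if instruction == "zoom_in":
        for obj in target_objects:
            obj["scale"] *= factor          # enlarge the display area
    elif instruction == "zoom_out":
        for obj in target_objects:
            obj["scale"] /= factor          # shrink the display area
    elif instruction == "delete":
        target_objects.clear()              # remove the selected objects
    elif instruction.startswith("set_brightness:"):
        level = float(instruction.split(":", 1)[1])
        for obj in target_objects:
            obj["brightness"] = level       # display-state adjustment
    else:
        raise ValueError(f"unknown operation instruction: {instruction}")
    return target_objects

if __name__ == "__main__":
    area = [{"scale": 1.0, "color": "red", "brightness": 0.5}]
    apply_instruction("zoom_in", area)
    print(area[0]["scale"])        # 2.0
    apply_instruction("set_brightness:0.9", area)
    print(area[0]["brightness"])   # 0.9
```

The point is only that processing applies to the selected objects, not to the whole scene; the concrete instruction set is left open by the application.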
It can be seen that, with this information adjustment method, the user can directly select one or more virtual objects according to the operation requirements for each virtual object in the current virtual scene, and process those virtual objects according to the user's own requirements, thereby meeting the user's needs. Compared with the prior art, in which the same processing can only be applied to the whole virtual scene, the present application improves the flexibility and enjoyment of processing a virtual scene and greatly improves the user experience.
Fig. 5 is a flowchart of another embodiment of the information processing method provided by the present application. The method may be applied to an input unit, such as a handle, a data glove, a joystick, or another external input device (the present application does not limit it). In practical applications, the method may include the following steps:
Step S51: Detect a first operation directed at the input unit, and control the input unit to be in the first working mode.
In practical applications, the user may directly press the select button of the input unit so that the input unit enters the first working mode, such as a local selection mode, in which the virtual objects to be processed in the current virtual scene can be selected by means of the input unit.
Of course, the switching of the working mode of the input unit may also be achieved by detecting, for example, the movement trajectory or posture of the input unit; the present application does not specifically limit this.
Step S52: In the first working mode, obtain the kinematic parameters of the input unit.
As described above, after the input unit is controlled to enter the first working mode, the user can hold the input unit and perform certain operations, such as making its movement trajectory form a closed loop, rotating it through a certain angle, moving it a certain distance, or letting it stay at a certain position for a certain time, and the corresponding kinematic parameters are obtained. It can be seen that the present application does not limit the specific content included in the kinematic parameters, which may be determined according to factors such as the user's operating habits and the operating characteristics of the input unit.
Optionally, in practical applications, data may be collected by the sensors on the input unit, and, based on the obtained data, the movement trajectory of the input unit may be recognized to constitute the kinematic parameters, but this is not limiting.
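Recognizing from sampled sensor positions that a trajectory has closed on itself might look like the following sketch; the sampling model and closure tolerance are assumptions, not values from the application:

```python
def forms_closed_loop(samples, tolerance=0.2, min_samples=4):
    """Decide whether a sampled 2D trajectory returns close to its start.

    'samples' is a list of (x, y) positions read from the position sensor;
    the closure tolerance is an illustrative assumption.
    """
    if len(samples) < min_samples:
        return False
    (x0, y0), (xn, yn) = samples[0], samples[-1]
    gap = ((xn - x0) ** 2 + (yn - y0) ** 2) ** 0.5
    return gap <= tolerance

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.05, 0.05)]
    stroke = [(0, 0), (1, 0), (2, 0)]
    print(forms_closed_loop(square))  # True
    print(forms_closed_loop(stroke))  # False
```

A production system would also filter sensor noise and resample the trajectory, which is outside the scope of this sketch.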
Step S53: According to the kinematic parameters, determine the target area corresponding to the kinematic parameters from the virtual scene corresponding to the input unit.
In the present embodiment, when it is verified that the kinematic parameters satisfy a preset condition, the region of the virtual scene corresponding to the input unit that corresponds to the kinematic parameters may be selected as the target area; the preset condition may be determined according to the user's requirements and the specific content included in the kinematic parameters.
Specifically, when the kinematic parameters show that the movement trajectory of the input unit forms a closed loop, the region of the virtual scene corresponding to the input unit that lies within the mapping range of the closed loop may be taken as the target area.
When the kinematic parameters show that the residence time of the touch point of the input unit reaches a first threshold, the region of the virtual scene corresponding to the input unit onto which a preset range centered on the touch point is mapped is taken as the target area.
When the kinematic parameters show that the operation of the input unit with respect to a preset operation point meets a preset requirement, a preset region corresponding to that operation of the input unit with respect to the preset operation point is taken as the target area. In this manner, the input unit may be rotated through a certain angle about the preset operation point, so that the virtual objects covered by the swept angle form the target area; the input unit may also be moved a certain distance, with the region covered by that distance in a preset direction taken as the target area, etc., but this is not limiting.
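The second of the ways above, selecting a preset range around a dwelling touch point once the residence time reaches the first threshold, could be sketched as follows; the threshold, radius, and all names are assumed values for illustration only:

```python
def dwell_target_region(touch_point, residence_time,
                        first_threshold=1.5, preset_radius=2.0):
    """Return a circular target region centered on the touch point once the
    residence time reaches the first threshold, else None.

    The threshold (seconds) and radius (scene units) are assumed values.
    """
    if residence_time < first_threshold:
        return None
    cx, cy = touch_point
    return {"center": (cx, cy), "radius": preset_radius}

def objects_in_region(objects, region):
    """List the objects whose position lies inside the circular region."""
    cx, cy = region["center"]
    r = region["radius"]
    return [name for name, (x, y) in objects.items()
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]

if __name__ == "__main__":
    region = dwell_target_region((5.0, 5.0), residence_time=2.0)
    scene = {"chair": (5.5, 5.5), "door": (9.0, 9.0)}
    print(objects_in_region(scene, region))  # ['chair']
    print(dwell_target_region((5.0, 5.0), residence_time=0.5))  # None
```

The shape of the preset range (circle, square, cone) is left open by the application; a circle is used here only for brevity.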
It should be noted that the way of determining the target area in the virtual scene corresponding to the input unit is not limited to the several ways enumerated above; moreover, for the specific implementation of those ways, reference may be made to the description of the corresponding part of the above embodiment, which is not detailed one by one here.
The target area determined above may be a subregion of the virtual scene corresponding to the input unit. It can be seen that the present application thus achieves the selection of a local part of the virtual scene by the above method, which facilitates the user's processing of that subregion and improves the flexibility with which the user operates on the virtual scene.
Step S54: Control the input unit to perform a predetermined operation on the objects in the target area.
With reference to the description of the corresponding part of the above embodiment, the predetermined operation may include a zoom-in or zoom-out operation, a delete operation, a display-state adjustment operation, etc.; the present application does not specifically limit this.
In summary, the present embodiment can, based on the kinematic parameters of the input unit, select a subregion of the virtual scene corresponding to the input unit as the target area, so that the predetermined operation on the objects in the target area is achieved by controlling the input unit. This meets the user's demand for refined processing of a subregion of the virtual scene and improves the user's operating experience.
Optionally, on the basis of the above embodiment, the information processing method of the present application may also include the steps shown in Fig. 6; specifically, it may also include:
Step S61: Detect a second operation directed at the input unit, and control the input unit to switch to the second working mode.
As described above, when the user has entered the first working mode by pressing the select button of the input unit, the select button may then be released so that the input unit enters the second working mode, for example an editable mode and/or exiting the local selection mode; it can be seen that the second working mode differs from the first working mode.
Of course, according to actual needs, the input unit may be controlled to enter the working mode corresponding to another operation button by pressing that button; the present application does not limit this.
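The press-and-release switching described above amounts to a small state machine. A minimal sketch, with mode and event names that are illustrative assumptions rather than terms from the application, might be:

```python
class InputUnitModes:
    """Minimal working-mode state machine for the input unit.

    Pressing the select button enters the first working mode (local
    selection); releasing it switches to the second working mode
    (editable). Mode and event names are illustrative assumptions.
    """
    def __init__(self):
        self.mode = "idle"

    def on_event(self, event):
        if event == "select_pressed":
            self.mode = "local_selection"   # first working mode
        elif event == "select_released" and self.mode == "local_selection":
            self.mode = "editable"          # second working mode
        return self.mode

if __name__ == "__main__":
    unit = InputUnitModes()
    print(unit.on_event("select_pressed"))   # local_selection
    print(unit.on_event("select_released"))  # editable
```

Additional operation buttons would simply add more event-to-mode transitions, as the paragraph above suggests.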
Step S62: When the second working mode indicates that the objects in the target area are in an editable state, detect an operation instruction directed at the input unit.
In practical applications, after the editing state of the virtual objects in the target area has been entered, the user can select among the operation buttons on the input unit so as to output the corresponding operation instruction, such as a zoom-in instruction, a zoom-out instruction, a delete instruction, or a display-state adjustment instruction; the present application does not limit this.
Of course, the corresponding operation instruction may also be obtained by detecting information such as the movement trajectory or posture of the input unit; the present application does not limit the way in which the above operation instruction is obtained, and reference may be made to the description of the corresponding part of the above embodiment.
Step S63: Using the operation instruction, edit the objects in the target area.
In summary, with the present application, after the input unit has been used to select the local virtual objects to be processed in the current virtual scene, the user can, according to the current demand, output the corresponding operation instruction through the input unit, so as to achieve refined processing of the virtual objects in the selected target area, meeting the user's needs and greatly improving the user experience.
Fig. 7 is a schematic structural diagram of an embodiment of the electronic device provided by the present application. The electronic device may include:
A display 71 for outputting a virtual scene.
In practical applications, the electronic device can use virtual reality technology to output the virtual scene through the display 71, giving the user an immersive sensation; the specific working process is not detailed here.
A communication module 72 for obtaining the kinematic parameters of the input unit of the electronic device, the input unit being in the first working mode.
In the present embodiment, when the user needs to process some of the virtual objects in the virtual scene currently output by the electronic device, the input unit may be operated so that it enters the first working mode, such as a local selection mode. The user can then operate the input unit according to the virtual objects to be selected, for example moving or rotating the input unit, and the corresponding kinematic parameters of the input unit are obtained; for the specific obtaining process, reference may be made to the description of the corresponding part of the above method embodiment, which is not repeated here.
The way of obtaining the kinematic parameters of the input unit is not limited to the ways enumerated above; information such as the movement trajectory, position, and posture of the input unit may also be recognized from the data collected by the sensors on the input unit, which is not detailed here.
Based on the foregoing description, the communication module 72 of the present application may be a wireless communication module, such as a WIFI module, an RF (Radio Frequency) module, or a Bluetooth module. Of course, according to structural needs, the communication module 72 may also be a wired module, as shown in Fig. 8; in that case the input unit may be arranged on the electronic device (but is not limited thereto), and data transfer is achieved directly through a circuit connection. The present application does not limit the structure of the communication module 72.
Optionally, in practical applications, when the communication module 72 is a wireless communication module, the input unit may, after entering the first working mode and collecting the corresponding kinematic parameters, actively establish a communication connection with the electronic device so as to send the kinematic parameters to it; alternatively, the electronic device may detect the kinematic parameters of the input unit in real time, etc. The present application does not specifically limit the communication connection and data transfer mode between the input unit and the electronic device.
A processor 73 for selecting the target area in the virtual scene according to the kinematic parameters and, based on the operation instruction of the input unit, controlling the input unit to perform a predetermined operation on the objects in the target area.
It should be noted that, in this embodiment, the selected target area is typically a subregion of the virtual scene currently output by the display 71, and it can include each virtual object the user currently needs to process; thus, as the user's actual needs change, the content included in the selected target area can change accordingly.
Optionally, in practical applications, in order to select the target area in the current virtual scene, the processor 73 verifies whether the kinematic parameters satisfy a preset condition; if they do, the region of the virtual scene currently output by the electronic device that corresponds to the kinematic parameters can be selected as the target area.
The above preset condition may be determined based on the specific content of the received kinematic parameters of the input unit; the present application does not specifically limit this, and the processor 73 may select the target region according to the three ways, way one to way three, described in the corresponding part of the above method embodiment, but is not limited thereto.
Optionally, for way one above, the processor 73 may specifically use the position coordinates in the kinematic parameters to determine the coordinate range forming the closed loop, compare the position coordinates of each virtual object in the virtual scene currently output by the electronic device with that coordinate range, and select the region where the virtual objects within the coordinate range are situated as the target area.
In this case, the electronic device may also include a position sensor or positioning system to detect position information of the input unit, such as the above position coordinates; alternatively, after the communication module 72 obtains the data collected by the sensors on the input unit, the processor 73 may, based on that data, identify the movement trajectory of the input unit and then complete the subsequent processing, etc. The present application does not specifically limit this.
As another embodiment of the present application, in order to let the user intuitively learn the target area selected in the current virtual scene, the processor 73 may also be used to adjust the display state of the objects in the target area, such as their background color, brightness, or size, so that the adjusted display state of the objects in the target area differs from the display state of the objects in the other regions of the virtual scene.
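Such a display-state adjustment could be sketched as follows; the highlighted attributes, values, and names are illustrative assumptions, since the application leaves the concrete adjustment open:

```python
def highlight_target_area(scene_objects, target_names,
                          highlight_color="yellow", brightness_boost=0.3):
    """Adjust display state so target-area objects differ from the rest.

    'scene_objects' maps object names to display-state dicts; the chosen
    attributes (color, brightness) are illustrative assumptions.
    """
    for name, state in scene_objects.items():
        if name in target_names:
            state["color"] = highlight_color
            state["brightness"] = min(1.0, state["brightness"] + brightness_boost)
    return scene_objects

if __name__ == "__main__":
    scene = {"cube": {"color": "gray", "brightness": 0.5},
             "tree": {"color": "green", "brightness": 0.5}}
    highlight_target_area(scene, {"cube"})
    print(scene["cube"]["color"])   # yellow
    print(scene["tree"]["color"])   # green (unchanged)
```

Any adjustment that visually separates the selection from the rest of the scene would serve the same purpose.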
Of course, the present application may announce, through a voice module, the position of the selected target area and/or the virtual objects it includes, so as to promptly inform the user of the currently selected objects to be processed; when the user finds a selection error, the selected objects can be adjusted in time, and the specific adjustment can still be achieved by the information processing manner provided by the present application, which is not detailed here.
In addition, when it is determined that the input unit has switched to the second working mode, such as an editable mode or exiting the local selection mode, the processor 73 may also obtain the current operation instruction of the input unit and use it to perform, on the objects in the target area, the processing corresponding to the current operation instruction.
For example, when the above second working mode indicates that the objects in the target area are in an editable state, the obtained current operation instruction may include a zoom-in or zoom-out instruction, a delete instruction, a display-state adjustment instruction, etc.; the present application does not specifically limit this, and for the corresponding processing reference may be made to the description of the corresponding part of the above method embodiment, which is not repeated here.
In summary, in the present embodiment, after the electronic device obtains the kinematic parameters of the input unit in the first working mode, it selects, according to those parameters, a subregion of the virtual scene currently output by the electronic device as the target area; then, based on the operation instruction of the input unit, it controls the input unit to achieve the predetermined operation on the objects in the target area, meeting the user's demand for refined processing of a subregion of the virtual scene and improving the user's operating experience.
Fig. 9 is a schematic structural diagram of an embodiment of the input unit provided by the present application. In practical applications, the input unit may be a device such as a handle, a joystick, or a data glove; the present application does not specifically limit this. In the present embodiment, the input unit may include:
A controller 91 for detecting a first operation directed at the input unit and controlling the input unit to be in the first working mode.
With reference to the description of the corresponding part of the above method embodiment, when the user wishes to process some of the virtual objects in the current virtual scene to meet the user's own demand, the select button on the input unit may be pressed; the controller 91 of the input unit can then determine the current operation directed at the input unit by detecting the output signal of each select button, so as to control the input unit to perform the corresponding operation.
Optionally, when a plurality of operation buttons are provided on the input unit, in order to detect in time which button the user has operated, sensors in one-to-one correspondence with the operation buttons may be provided, such as position sensors, bend sensors, or angular displacement sensors. These sensors sense the user's triggering of each operation button, so that the controller 91 determines, from the signals they detect, the operation directed at the input unit, but this is not limiting.
Optionally, in practical applications, the working mode into which each of the different operations performed on the input unit brings it may be preset; specifically, correspondences between the different operations and the respective working modes may be established, and the present application does not limit this.
Still taking the example in which the user presses the select button of the input unit, the controller 91 can then control the input unit to enter the local selection mode, that is, allow the user to select the local virtual objects to be processed in the virtual scene corresponding to the input unit, laying the foundation for the subsequent processing of those objects.
Of course, if what the user presses is another operation button of the input unit, the controller 91 can control the input unit to enter the working mode corresponding to that button; alternatively, after parameters such as the current posture of the input unit have been determined, the input unit may be controlled to enter the working mode corresponding to those parameters directly. The present application does not limit the specific content of the above first working mode or the way in which the input unit is triggered to enter it.
A sensor 92 for obtaining the kinematic parameters of the input unit in the first working mode.
After it is determined that the input unit has entered the first working mode, actions such as moving or rotating can be completed by holding the input unit, so as to select a local region in the virtual scene corresponding to the input unit. For this purpose, the present embodiment can detect data such as the position and motion of the input unit through the corresponding sensors, thereby determining the kinematic parameters of the input unit.
It can be seen that the sensor 92 may include a position sensor, an angular displacement sensor, a bend sensor, a flexible sensor, or a photodetector, i.e., the current position or motion of the input unit may be computed using principles such as light reflection or refraction. The present application does not limit the specific content included in the sensor 92, which may be determined according to actual requirements and is not detailed one by one here.
A processor 93 for determining, according to the kinematic parameters, the target area corresponding to the kinematic parameters from the virtual scene corresponding to the input unit, and for performing, for the input unit, a predetermined operation on the objects in the target area.
The target area may be a subregion of the virtual scene corresponding to the input unit, that is, the region of the virtual scene that the user needs to process; it can be adjusted as the user's current processing demand changes, which is to say that the target area is not fixed.
Optionally, when the target area in the virtual scene corresponding to the input unit needs to be determined, the processor 93 can, upon verifying that the obtained kinematic parameters of the input unit satisfy a preset condition, directly select the region in the virtual scene corresponding to the kinematic parameters as the target area; specifically, this can be achieved in the following ways, but is not limited thereto.
In practical applications, the processor 93 can act when the kinematic parameters show that the movement trajectory of the input unit forms a closed loop. As described in the corresponding part of the above method embodiment, the user holds the input unit and draws a closed loop, such as the frame of any of various figures, around the region of the virtual scene to be processed, so that the region enclosed within the frame serves as the target area; in other words, the region of the virtual scene corresponding to the input unit that lies within the mapping range of the closed loop is taken as the target area.
Optionally, the processor 93 may also, when the kinematic parameters show that the residence time of the touch point of the input unit reaches the first threshold, take as the target area the region of the virtual scene corresponding to the input unit onto which the preset range centered on the touch point is mapped.
For this way, the user can use the hand-held input unit to point at the center position of the region to be processed in the virtual scene, so that the region within the preset range around that center serves as the target area. Of course, in this manner the input unit may also stop at several edge positions of the region to be processed in the virtual scene, so that the region delineated by the border through these stopping positions serves as the target area, etc.; the present application does not limit this.
In addition, the processor 93 may also, when the kinematic parameters show that the operation of the input unit with respect to a preset operation point meets a preset requirement, take as the target area a preset region corresponding to that operation of the input unit with respect to the preset operation point.
For this way, the user can rotate the hand-held input unit through a certain angle in its corresponding virtual scene, so that the region within the spatial range swept by the rotated angle serves as the target area. That is to say, after the user determines the region to be processed in the virtual scene, the input unit can be controlled to rotate from one side of the region, as a starting point, to the other side of the region, as shown in Fig. 3 with the input unit as the preset operation point A, but this is not limiting; reference may be made to the description of the other ways in the corresponding part of the above method embodiment, which is not repeated here.
As another embodiment of the present application, after the target region in the virtual scene corresponding to the input unit has been selected, the above controller 91 may also be used to detect a second operation directed at the input unit and control the input unit to switch to the second working mode.
The second working mode may be exiting the local selection mode and/or an editable mode; the present application does not specifically limit this, and it generally corresponds to the above first working mode.
Optionally, when the second working mode indicates that the objects in the target area are in an editable state, the processor 93 may also use the detected operation instruction directed at the input unit to edit the objects in the target area, for example zooming in or out on them, deleting them directly, or adjusting their display state. This can be determined according to the specific content of the detected operation instruction, and the operation instruction itself may be determined according to the user's processing demand for the objects in the target area; the present application does not limit this.
In summary, the present embodiment can, based on the kinematic parameters of the input unit, select a subregion of the virtual scene corresponding to the input unit as the target area, so that the predetermined operation on the objects in the target area is achieved by controlling the input unit, meeting the user's demand for refined processing of a subregion of the virtual scene and improving the user's operating experience.
Fig. 10 is a schematic structural diagram of an embodiment of the information processing system provided by the present application. The system may include an electronic device 101 and an input unit 102.
It should be noted that the present application does not limit the numbers of electronic devices 101 and input units 102 included in the information processing system, which may be determined according to actual needs.
Moreover, for the structure and functions of the electronic device 101, reference may be made to the structure and functions of the electronic device described in the above apparatus embodiment, which are not repeated here; similarly, for the structure and functions of the input unit 102, reference may be made to the description of the above input unit embodiment, which is not detailed here.
Optionally, when the information processing system includes a plurality of input units, these input units may have different structures; the present application does not limit this.
In addition, the above input unit 102 may be arranged on the electronic device 101, as shown in Fig. 8 above, but is not limited thereto.
With reference to the foregoing description, the present application detects a first operation directed at the input unit and correspondingly controls the input unit to enter a first working mode. By obtaining the kinematic parameter of the input unit, a sub-region of the virtual scene currently output by the electronic device, i.e. the target area, is selected accordingly; based on an operation instruction of the input unit, the input unit is then controlled to perform a predetermined operation on the objects in the target area. This meets the user's demand for refined processing of sub-regions of the virtual scene and improves the user's operating experience.
Finally, it should be noted that in the above embodiments, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, or system. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, or system that includes that element.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may refer to one another. Since the electronic device, input unit, and system disclosed in the embodiments correspond to the methods disclosed in the embodiments, their descriptions are relatively brief; for related details, refer to the description of the method part.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An information processing method, characterized by comprising:
obtaining a kinematic parameter of an input unit of an electronic device, wherein the input unit is in a first working mode;
selecting, according to the kinematic parameter, a target area in a virtual scene currently output by the electronic device, the target area being a sub-region of the virtual scene;
performing, based on an operation instruction of the input unit, a predetermined operation on objects in the target area.
2. The method according to claim 1, characterized in that selecting, according to the kinematic parameter, the target area in the virtual scene currently output by the electronic device comprises:
verifying that the kinematic parameter satisfies a preset condition, and selecting, as the target area, the region corresponding to the kinematic parameter in the virtual scene currently output by the electronic device.
3. The method according to claim 1, characterized in that selecting, according to the kinematic parameter, the target area in the virtual scene currently output by the electronic device comprises:
when the kinematic parameter indicates that the movement trajectory of the input unit forms a closed loop, taking the region within the mapping range of the closed loop in the virtual scene currently output by the electronic device as the target area; or,
when the kinematic parameter indicates that the dwell time of a touch point of the input unit reaches a first threshold, taking the region in the virtual scene currently output by the electronic device that is mapped from a preset range centered on the touch point as the target area; or,
when the kinematic parameter indicates that an operation of the input unit on a preset operation point satisfies a preset requirement, taking a preset region corresponding to the operation of the input unit on the preset operation point as the target area.
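The selection strategies recited in claim 3 above can be sketched as simple geometric checks. This is an illustrative sketch only: the tolerance, dwell threshold, bounding-box mapping, and all function names are assumptions, not the patent's code.

```python
# Hypothetical sketches of two claim-3 strategies: (a) a closed-loop
# movement trajectory selects the region inside the loop's mapping range;
# (b) a touch point held past a first threshold selects a preset range
# centered on that point. Threshold and tolerance values are assumed.

DWELL_THRESHOLD = 1.0  # "first threshold", in seconds (assumed value)

def trajectory_is_closed(points, tol=0.05):
    # A trajectory is treated as a closed loop if it ends near its start.
    (x0, y0), (x1, y1) = points[0], points[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 <= tol

def area_from_trajectory(points):
    # Map the closed loop onto the scene as its bounding box.
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

def area_from_dwell(touch_point, dwell_time, radius=0.1):
    # Preset range centered on a touch point held past the first threshold.
    if dwell_time < DWELL_THRESHOLD:
        return None  # dwell too short: no target area selected
    x, y = touch_point
    return (x - radius, y - radius, x + radius, y + radius)

loop = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.3), (0.0, 0.3), (0.01, 0.01)]
box = area_from_trajectory(loop)
```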
4. The method according to claim 1, characterized in that before performing, based on the operation instruction of the input unit, the predetermined operation on the objects in the target area, the method further comprises:
adjusting the display state of the objects in the target area so that the adjusted display state of the objects in the target area differs from the display state of objects in other regions of the virtual scene.
5. The method according to any one of claims 1-4, characterized in that performing, based on the operation instruction of the input unit, the predetermined operation on the objects in the target area comprises:
when the input unit is switched to a second working mode, adjusting, based on the operation instruction of the input unit, attributes of the objects of the virtual scene in the target area, the attributes including display state and/or display position.
6. An information processing method, characterized by comprising:
detecting a first operation directed at an input unit, and controlling the input unit to enter a first working mode;
obtaining, in the first working mode, a kinematic parameter of the input unit;
determining, according to the kinematic parameter, a target area corresponding to the kinematic parameter from a virtual scene corresponding to the input unit, the target area being a sub-region of the virtual scene;
controlling the input unit to perform a predetermined operation on objects in the target area.
7. The method according to claim 6, characterized in that obtaining the kinematic parameter of the input unit comprises:
obtaining data collected by a sensor, and sending the data to an electronic device;
receiving the movement trajectory of the input unit identified by the electronic device based on the data.
8. The method according to claim 6 or 7, characterized in that before controlling the input unit to perform the predetermined operation on the objects in the target area, the method further comprises:
detecting a second operation directed at the input unit, and controlling the input unit to switch to a second working mode;
when the second working mode indicates that the objects in the target area are in an editable state, controlling the input unit to perform the predetermined operation on the objects in the target area comprises:
detecting an operation instruction directed at the input unit, and using the operation instruction to edit the objects in the target area.
9. An electronic device, characterized by comprising:
a display, configured to output a virtual scene;
a communication module, configured to obtain a kinematic parameter of an input unit of the electronic device, wherein the input unit is in a first working mode;
a processor, configured to select, according to the kinematic parameter, a target area in the virtual scene, and to control, based on an operation instruction of the input unit, the input unit to perform a predetermined operation on objects in the target area, wherein the target area is a sub-region of the virtual scene.
10. An input unit, characterized by comprising:
a controller, configured to control, upon detecting a first operation directed at the input unit, the input unit to enter a first working mode;
a sensor, configured to obtain, in the first working mode, a kinematic parameter of the input unit;
a processor, configured to determine, according to the kinematic parameter, a target area corresponding to the kinematic parameter from a virtual scene corresponding to the input unit, and to perform, via the input unit, a predetermined operation on objects in the target area, wherein the target area is a sub-region of the virtual scene corresponding to the input unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710197508.9A CN106873783A (en) | 2017-03-29 | 2017-03-29 | Information processing method, electronic equipment and input unit |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710197508.9A CN106873783A (en) | 2017-03-29 | 2017-03-29 | Information processing method, electronic equipment and input unit |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106873783A true CN106873783A (en) | 2017-06-20 |
Family
ID=59159612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710197508.9A Pending CN106873783A (en) | 2017-03-29 | 2017-03-29 | Information processing method, electronic equipment and input unit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106873783A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102023706A (en) * | 2009-09-15 | 2011-04-20 | Palo Alto Research Center Incorporated | System for interacting with objects in a virtual environment |
CN105009039A (en) * | 2012-11-30 | 2015-10-28 | Microsoft Technology Licensing LLC | Direct hologram manipulation using IMU |
CN105009034A (en) * | 2013-03-08 | 2015-10-28 | Sony Corporation | Information processing apparatus, information processing method, and program |
CN106200942A (en) * | 2016-06-30 | 2016-12-07 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
CN106249882A (en) * | 2016-07-26 | 2016-12-21 | Huawei Technologies Co., Ltd. | Gesture control method and apparatus applied to VR devices |
CN106249879A (en) * | 2016-07-19 | 2016-12-21 | Shenzhen Gionee Communication Equipment Co., Ltd. | Display method of a virtual reality image and terminal |
CN106415444A (en) * | 2014-01-23 | 2017-02-15 | Microsoft Technology Licensing LLC | Gaze swipe selection |
- 2017-03-29: application CN201710197508.9A filed in China (CN); legal status: Pending
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109983424A (en) * | 2017-06-23 | 2019-07-05 | Tencent Technology (Shenzhen) Company Limited | Object selection method and apparatus in a virtual reality scene, and virtual reality device |
US10888771B2 (en) | 2017-06-23 | 2021-01-12 | Tencent Technology (Shenzhen) Company Limited | Method and device for object pointing in virtual reality (VR) scene, and VR apparatus |
US11307677B2 (en) | 2017-06-23 | 2022-04-19 | Tencent Technology (Shenzhen) Company Limited | Method and device for object pointing in virtual reality (VR) scene using a gamepad, and VR apparatus |
CN107908281A (en) * | 2017-11-06 | 2018-04-13 | Beijing Xiaomi Mobile Software Co., Ltd. | Virtual reality interaction method, device and computer-readable storage medium |
CN110478902A (en) * | 2019-08-20 | 2019-11-22 | NetEase (Hangzhou) Network Co., Ltd. | Game operation method and device |
CN112650390A (en) * | 2020-12-22 | 2021-04-13 | iFLYTEK Co., Ltd. | Input method, related device and input system |
CN113110887A (en) * | 2021-03-31 | 2021-07-13 | Lenovo (Beijing) Co., Ltd. | Information processing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106873783A (en) | Information processing method, electronic equipment and input unit | |
KR101522991B1 (en) | Operation Input Apparatus, Operation Input Method, and Program | |
KR20240009999A (en) | Beacons for localization and content delivery to wearable devices | |
KR101533319B1 (en) | Remote control apparatus and method using camera centric virtual touch | |
JP5558000B2 (en) | Method for controlling control point position in command area and method for controlling apparatus | |
JP5167523B2 (en) | Operation input device, operation determination method, and program | |
JP5515067B2 (en) | Operation input device, operation determination method, and program | |
KR101224351B1 (en) | Method for locating an object associated with a device to be controlled and a method for controlling the device | |
US20130204408A1 (en) | System for controlling home automation system using body movements | |
KR20110008313A (en) | Image recognizing device, operation judging method, and program | |
CN102880304A (en) | Character inputting method and device for portable device | |
US9405403B2 (en) | Control apparatus, operation controlling method and non-transitory computer-readable storage medium | |
CN104349195A (en) | Control method of multipurpose remote controller of intelligent TV and control system thereof | |
CN106468917B | Remote presentation interaction method and system for touchable live real-time video images |
CN109800045A | A display method and terminal |
CN106020454A (en) | Smart terminal touch screen operation method and system based on eye control technology | |
CN106663365A (en) | Method of obtaining gesture zone definition data for a control system based on user input | |
CN103152467A (en) | Hand-held electronic device and remote control method | |
KR101654311B1 (en) | User motion perception method and apparatus | |
US10013802B2 (en) | Virtual fitting system and virtual fitting method | |
CN106325480A (en) | Line-of-sight tracing-based mouse control device and method | |
CN104142736B (en) | Video monitoring equipment control method and device | |
KR20120136719A (en) | The method of pointing and controlling objects on screen at long range using 3d positions of eyes and hands | |
CN108897477A | An operation control method and terminal device |
KR101542671B1 (en) | Method and apparatus for space touch |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||