CN106201213A - Control method and terminal for a virtual reality focus - Google Patents
- Publication number
- CN106201213A CN106201213A CN201610571853.XA CN201610571853A CN106201213A CN 106201213 A CN106201213 A CN 106201213A CN 201610571853 A CN201610571853 A CN 201610571853A CN 106201213 A CN106201213 A CN 106201213A
- Authority
- CN
- China
- Prior art keywords
- focus
- information
- area
- touch
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments provide a control method and terminal for a virtual reality focus. The method includes: obtaining first touch data input through a touch panel; determining whether the touch operation corresponding to the first touch data is a preset focus control instruction; if the touch operation corresponding to the first touch data is a preset focus control instruction, determining information of a focus active area in a virtual reality view according to the first touch data, and moving the focus in the virtual reality view to the focus active area; and obtaining fine-tuning information corresponding to the focus, and adjusting the position of the focus according to the fine-tuning information. A terminal according to an embodiment of the present invention can accurately position the focus in a virtual reality view.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a control method and terminal for a virtual reality focus.
Background
Virtual reality (VR) uses a computer to generate a simulated environment: a system simulation of interactive three-dimensional dynamic views and entity behavior, fused from multi-source information, that immerses the user in that environment. A user can place a terminal with a display screen, such as a smartphone or tablet computer, into virtual reality glasses to watch 3D video, tour virtual scenic spots, and so on.
Current VR glasses mainly control the VR focus by manual operation of a touch panel, moving the focus to select among the options of the dynamic view and thereby control the VR dynamic view. The VR focus is used for positioning within the dynamic view.
However, when the VR dynamic view is controlled through a manual touch panel, it is easy to swipe too far or not far enough, so the VR focus cannot be accurately positioned.
Summary of the invention
Embodiments of the present invention provide a control method and terminal for a virtual reality focus, capable of accurately positioning the focus in a virtual reality view.
In a first aspect, an embodiment of the present invention provides a control method for a virtual reality focus, the method including:
obtaining first touch data input through a touch panel;
determining whether the touch operation corresponding to the first touch data is a preset focus control instruction;
if the touch operation corresponding to the first touch data is a preset focus control instruction, determining information of a focus active area in a virtual reality view according to the first touch data, and moving the focus in the virtual reality view to the focus active area;
obtaining fine-tuning information corresponding to the focus, and adjusting the position of the focus according to the fine-tuning information.
In another aspect, an embodiment of the present invention provides a terminal, the terminal including:
an acquiring unit, configured to obtain first touch data input through a touch panel;
a determining unit, configured to determine whether the touch operation corresponding to the first touch data is a preset focus control instruction;
a control unit, configured to, if the touch operation corresponding to the first touch data is a preset focus control instruction, determine information of a focus active area in a virtual reality view according to the first touch data, and move the focus in the virtual reality view to the focus active area;
an adjustment unit, configured to obtain fine-tuning information corresponding to the focus, and adjust the position of the focus according to the fine-tuning information.
In the embodiments of the present invention, the terminal obtains first touch data input through a touch panel and, when determining that the touch operation corresponding to the first touch data is a preset focus control instruction, determines information of a focus active area according to the first touch data and moves the focus in the virtual reality view to the focus active area; it then obtains fine-tuning information corresponding to the focus and adjusts the position of the focus according to the fine-tuning information. The virtual reality focus can thus be accurately positioned.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Evidently, the drawings described below show some embodiments of the present invention; a person of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a control method for a virtual reality focus provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a control method for a virtual reality focus provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of a touch panel and a virtual reality view according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a virtual reality focus according to an embodiment of the present invention;
Fig. 5 is a schematic block diagram of a terminal provided by an embodiment of the present invention;
Fig. 6 is a schematic block diagram of a terminal provided by another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Evidently, the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be understood that the terms "include" and "comprise", when used in this specification and the appended claims, indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this description of the present invention is for the purpose of describing specific embodiments only and is not intended to limit the present invention. As used in the description of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the present invention and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined that" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined that" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having a touch-sensitive surface (for example, a touch panel display and/or a touchpad). It should further be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch panel display and/or a touchpad).
In the discussion below, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse, and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conference application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application, and/or a video-player application.
The various applications executable on the terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a control method for a virtual reality focus provided by an embodiment of the present invention. In this embodiment, the execution subject of the control method is a terminal, and the terminal is a virtual reality display terminal; the virtual reality display terminal may be, but is not limited to, virtual reality glasses. The virtual reality display terminal has a touch panel; the surface of the touch panel has a granular texture perceptible to the user, but this texture is not presented in the virtual reality view. The control method for a virtual reality focus shown in Fig. 1 may include the following steps:
S101: Obtain first touch data input through a touch panel.
When the user needs to position the virtual reality (VR) focus of the terminal, the user inputs first touch data through the touch panel of the terminal. This VR focus is used to indicate the user's selection from a tab or menu in the virtual reality view.
The terminal obtains the first touch data input by the user through the touch panel.
S102: Determine whether the touch operation corresponding to the first touch data is a preset focus control instruction.
The terminal determines the touch operation corresponding to the obtained first touch data and judges whether this touch operation is a preset focus control instruction. The touch operation may include, but is not limited to, a click operation, a slide operation, and the like; a click operation may be a single click or multiple clicks.
The preset focus control instruction may be a preset click operation, or a preset slide operation (for example, the start position of the slide track matches a preset start position), but is not limited to these; it may be another touch operation, configured according to the actual situation without limitation here.
When the terminal determines that the touch operation corresponding to the first touch data is a preset focus control instruction, step S103 is performed; otherwise no action is taken, the flow ends, and the method returns to step S101.
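The check in steps S101-S102 can be sketched in code. This is a minimal illustration, not the patent's implementation: the gesture names, the click-count rule, the preset start position, and the tolerance value are all assumptions made for the example.

```python
# Sketch of deciding whether a touch operation matches a preset focus
# control instruction. PRESET_START and TOLERANCE are assumed values.

PRESET_START = (40.0, 40.0)   # assumed preset start position of the slide track
TOLERANCE = 10.0              # assumed matching tolerance

def is_focus_control_instruction(touch_data):
    """touch_data: dict with 'type' ('click' or 'slide'); for a slide,
    'track' is a list of (x, y) points on the touch panel."""
    if touch_data["type"] == "click":
        # A preset click operation may also serve as the instruction;
        # here a double (or multiple) click is assumed to be the preset.
        return touch_data.get("clicks", 1) >= 2
    if touch_data["type"] == "slide":
        # Match when the slide track's start position is near the preset start.
        x0, y0 = touch_data["track"][0]
        px, py = PRESET_START
        return abs(x0 - px) <= TOLERANCE and abs(y0 - py) <= TOLERANCE
    return False

print(is_focus_control_instruction({"type": "slide", "track": [(45, 38), (80, 80)]}))  # True
print(is_focus_control_instruction({"type": "click", "clicks": 1}))                    # False
```

If the check fails, the flow simply returns to waiting for the next touch data, as described above.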
S103: If the touch operation corresponding to the first touch data is a preset focus control instruction, determine information of the focus active area in the virtual reality view according to the first touch data, and move the focus in the virtual reality view to the focus active area.
When the terminal confirms that the touch operation corresponding to the first touch data is a preset focus control instruction, it determines the information of the focus active area in the virtual reality view according to the first touch data and moves the focus in the virtual reality view to the focus active area corresponding to the first touch data. The information of the focus active area includes the area and position information of the focus active area.
The terminal moves the focus to the focus active area corresponding to the first touch data; the focus position changes momentarily, moving from the origin position of the virtual reality view to the focus active area, within which the focus can then move. The origin position of the virtual reality view may be the center of the virtual reality view or the center of the touch panel, without limitation here.
It can be understood that the terminal may move the focus in the virtual reality view to the center of the focus active area, or to any position within the focus active area.
S104: Obtain fine-tuning information corresponding to the focus, and adjust the position of the focus according to the fine-tuning information.
After moving the focus in the virtual reality view to the focus active area, the terminal obtains the fine-tuning information corresponding to this focus and further adjusts the position of the focus according to it, so as to accurately position the focus in the virtual reality view and thereby indicate, via the focus, the user's selection from the menu bar or tab contained in the virtual reality view.
The fine-tuning information corresponding to the focus in the virtual reality view may be input by the user in a contact or contactless manner, without limitation. The contactless manner may include inputting the fine-tuning information corresponding to the focus through head or eye movement, thereby adjusting the position of the focus.
It can be understood that if, in step S103, the terminal has already moved the focus of the virtual reality view to the target position within the focus active area, the fine-tuning information is zero and the terminal does not need to further adjust the position of the focus.
With the above solution, the terminal obtains first touch data input through the touch panel and, when determining that the touch operation corresponding to the first touch data is a preset focus control instruction, determines the information of the focus active area in the virtual reality view according to the first touch data and moves the focus in the virtual reality view to the focus active area; it then obtains the fine-tuning information corresponding to the focus and adjusts the position of the focus accordingly. The virtual reality focus can thus be accurately positioned, making it convenient for the user to select from the menu bar or tab contained in the virtual reality view via the focus, and thereby control the virtual reality view image.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a control method for a virtual reality focus provided by another embodiment of the present invention. In this embodiment, the execution subject of the control method is a terminal, and the terminal is a virtual reality display terminal; the virtual reality display terminal may be, but is not limited to, virtual reality glasses. The virtual reality display terminal has a touch panel; the surface of the touch panel has a granular texture perceptible to the user, but this texture is not presented in the virtual reality view. The control method for a virtual reality focus shown in Fig. 2 may include the following steps:
S201: Obtain first touch data input through a touch panel.
When the user needs to position the virtual reality (VR) focus of the terminal, the user inputs first touch data through the touch panel of the terminal. This VR focus is used to indicate the user's selection from a tab or menu in the virtual reality view.
The terminal obtains the first touch data input by the user through the touch panel.
S202: Determine whether the touch operation corresponding to the first touch data is a preset focus control instruction.
The terminal determines the touch operation corresponding to the obtained first touch data and judges whether this touch operation is a preset focus control instruction. The touch operation may include, but is not limited to, a click operation, a slide operation, and the like; a click operation may be a single click or multiple clicks.
The preset focus control instruction may be a preset click operation, or a preset slide operation (for example, the start position of the slide track matches a preset start position), but is not limited to these; it may be another touch operation, configured according to the actual situation without limitation here.
When the terminal determines that the touch operation corresponding to the first touch data is a preset focus control instruction, step S203 is performed; otherwise no action is taken, the flow ends, and the method returns to step S201.
S203: If the touch operation corresponding to the first touch data is a preset focus control instruction, determine the area and first position information of the first touch area corresponding to the first touch data.
For example, when the terminal determines that the touch operation corresponding to the first touch data is a preset focus control instruction, it determines, according to the position information of the first touch data, the area of the first touch area corresponding to the first touch data and the first position information of the first touch area.
S204: Determine the area and second position information corresponding to the focus active area according to the area and first position information of the first touch area.
According to the area of the first touch area and the first position information of the first touch area, the terminal calculates the area corresponding to the focus active area in the virtual reality view and the second position information of the focus active area, and moves the focus to the focus active area according to the area and second position information corresponding to the focus active area.
The area of the first touch area corresponding to the first touch data corresponds one-to-one with the area of the focus active area, and the first position information of the first touch area corresponds one-to-one with the second position information of the focus active area.
In this embodiment, from the area and first position information of the first touch area corresponding to the first touch data, the terminal calculates the size and position information corresponding to the focus active area in the virtual reality view, performs coarse positioning of the focus active area in the virtual reality view, and moves the focus in the virtual reality view to this focus active area.
Further, step S204 may include: determining an area conversion coefficient according to the area of the touch panel and the virtual area of the virtual reality view; determining the area corresponding to the focus active area according to the area conversion coefficient and the area of the first touch area; and determining the second position information corresponding to the focus active area according to the first position information of the first touch area.
For example, referring to Fig. 3, Fig. 3 is a schematic diagram of a touch panel and a virtual reality view according to an embodiment of the present invention, where Fig. 3-1 is a schematic diagram of the touch panel and Fig. 3-2 is a schematic diagram of the virtual reality view.
As shown in Fig. 3, the area S1 of the touch panel is the area of the region enclosed by the four points A, B, C, and D, and the virtual area S2 of the virtual reality view is the area of the region enclosed by the four points A', B', C', and D'. The origin (center position) of the touch panel is O, and the origin (center position) of the virtual reality view is O'.
The first touch area corresponding to the first touch data input on the touch panel is E, and the focus active area of the virtual reality view is E'.
When the terminal determines that the touch operation corresponding to the first touch data is a preset focus control instruction, and after determining, according to the position information of the first touch data, the area and first position information of the first touch area, it determines the area conversion coefficient K from the area S1 of the touch panel and the virtual area S2 of the virtual reality view, where K = S2/S1.
According to the area conversion coefficient K and the area SE of the first touch area, the terminal determines the area SE' corresponding to the focus active area, i.e., SE' = K × SE.
The terminal determines the second position information corresponding to the focus active area according to the formula LOD/LOE = LO'D'/LO'E', where LOD is the distance from the touch panel origin O to point D, LOE is the distance from the touch panel origin O to point E, LO'D' is the distance from the origin O' of the virtual reality view to point D', and LO'E' is the distance from the origin O' of the virtual reality view to point E'.
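The mapping above (K = S2/S1, SE' = K × SE, and the proportional distance relation) can be sketched numerically. The function and all sample values below are illustrative assumptions; only the formulas come from the description.

```python
# Sketch of the coarse mapping in step S204: touch area and position on the
# panel are converted to the focus active area and position in the view.

def map_touch_to_view(s1, s2, s_e, touch_pos, d_touch, d_view):
    """s1: touch panel area S1; s2: virtual view area S2; s_e: first touch
    area SE. touch_pos: (x, y) of E relative to the panel origin O.
    d_touch: distance LOD from O to corner D; d_view: distance LO'D' from
    O' to corner D'."""
    k = s2 / s1                 # area conversion coefficient K = S2/S1
    s_e_view = k * s_e          # focus active area SE' = K * SE
    # LOD/LOE = LO'D'/LO'E' means distances from the origin scale by
    # d_view/d_touch, so E' lies along the same direction as E at a
    # proportionally scaled distance.
    scale = d_view / d_touch
    view_pos = (touch_pos[0] * scale, touch_pos[1] * scale)
    return s_e_view, view_pos

# Example with assumed dimensions: a 6000-unit panel mapped to a
# 24000-unit view whose corner distance is twice the panel's.
area, pos = map_touch_to_view(s1=6000.0, s2=24000.0, s_e=25.0,
                              touch_pos=(30.0, 20.0), d_touch=50.0, d_view=100.0)
print(area, pos)   # 100.0 (60.0, 40.0)
```

With K = 4, a 25-unit touch area maps to a 100-unit focus active area, and the touch position is carried to the view along the same direction from the origin.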
S205: Move the focus to the focus active area according to the area and second position information corresponding to the focus active area.
When the terminal has determined the size and second position information corresponding to the focus active area in the virtual reality view, it moves the focus of the virtual reality view to this focus active area.
Referring to Fig. 4, Fig. 4 is a schematic diagram of a focus according to an embodiment of the present invention. As shown in Fig. 4, the terminal moves the focus from the origin of the virtual reality view to the focus active area E' corresponding to the first touch data.
It can be understood that the terminal may move the focus in the virtual reality view to the center of the focus active area, or to any position within the focus active area. The origin of the virtual reality view is its center position O'.
The terminal moves the focus to the focus active area corresponding to the first touch data; the focus position changes momentarily, moving from the origin position (center O') of the virtual reality view to the focus active area, within which the focus can then move. The origin position of the virtual reality view may be the center of the virtual reality view or the center position of the touch panel, without limitation here.
S206: Obtain fine-tuning information corresponding to the focus, and adjust the position of the focus according to the fine-tuning information.
After moving the focus to the focus active area, the terminal obtains the fine-tuning information corresponding to this focus and further adjusts the position of the focus according to it, so as to accurately position the focus in the virtual reality view and thereby indicate, via the focus, the user's selection from the menu bar or tab contained in the virtual reality view.
The fine-tuning information corresponding to the focus may be input by the user in a contact or contactless manner, without limitation. The contactless manner may include inputting the fine-tuning information corresponding to the focus through head or eye movement, thereby adjusting the position of the focus.
Further, step S206 may include: obtaining fine-tuning information corresponding to second touch data input through the touch panel, and adjusting the position of the focus according to the fine-tuning information.
When the user inputs second touch data in the first touch area corresponding to the first touch data, so as to adjust the focus position in the virtual reality view through the second touch data, the terminal obtains the fine-tuning information corresponding to the second touch data input by the user and adjusts the position of the focus according to the obtained fine-tuning information.
For example, if the fine-tuning information corresponding to the second touch data obtained by the terminal is to adjust the focus from its current position to a target position, the terminal obtains the target position information corresponding to the second touch data and moves the focus to that target position. The target position corresponds to the position of the option the user needs to select.
Terminal obtains the second touch data that user inputs in the first touch area that the first touch data is corresponding,
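As an illustration only, the second-touch fine-tuning could be sketched as follows, assuming the second touch is reported as a normalized position inside the first touch area; the function name and coordinate convention are hypothetical, not taken from the patent:

```python
def fine_tune_focus(second_touch, active_area):
    """Map a second touch to a target focus position inside the active area.

    second_touch: (tx, ty) normalized to 0..1 within the first touch area.
    active_area:  (ax, ay, aw, ah) of the focus active area in the field of view.
    """
    tx, ty = second_touch
    ax, ay, aw, ah = active_area
    # Interpret the normalized touch position proportionally inside the area.
    return (ax + tx * aw, ay + ty * ah)
```

A touch in the middle of the first touch area thus lands the focus in the middle of the focus active area.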
Further, step S206 may include: obtaining fine-tuning information through contactless input, and adjusting the position of the focus according to that fine-tuning information.
When the user rotates his or her head to change the orientation of the terminal and thereby adjust the focus in the virtual reality field of view, the terminal obtains the movement trajectory corresponding to the head rotation, determines the target position information from that trajectory, and adjusts the focus in the virtual reality field of view from its current position to the target position.
The target position corresponds to the position of the option the user needs to select. The terminal may obtain the angle through which the user's head has rotated and determine the corresponding target position information from that angle.
Further, when the user's head is still and the focus position in the virtual reality field of view is adjusted by rotating an eye, the terminal may obtain the rotation trajectory of the user's eyeball and determine the target position information from that trajectory.
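One possible sketch of mapping a head-rotation angle to a target position, under the assumption of a linear angle-to-position mapping and a hypothetical field-of-view span; none of these constants or names come from the patent:

```python
def head_rotation_to_target(yaw_deg, pitch_deg, view_size, fov_deg=(90.0, 60.0)):
    """Convert head-rotation angles into a target position in the field of view.

    Zero rotation maps to the center of the view; each angle is mapped
    linearly onto its axis and the result is clamped to the view bounds.
    """
    view_w, view_h = view_size
    fov_x, fov_y = fov_deg
    x = view_w / 2 + (yaw_deg / fov_x) * view_w
    y = view_h / 2 - (pitch_deg / fov_y) * view_h
    # Clamp so the target never leaves the field of view.
    return (min(max(x, 0.0), view_w), min(max(y, 0.0), view_h))
```

An eyeball-rotation trajectory could feed the same mapping, with gaze angles in place of head angles.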
It can be understood that if, in step S205, the terminal has already moved the focus of the virtual reality field of view to the target position within the focus active area, the fine-tuning information is zero and the terminal does not need to adjust the position of the focus further.
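The zero-refinement check described above might be sketched as follows; the names and the data format of the fine-tuning information are hypothetical, since the patent does not define them:

```python
def apply_refinement(focus_pos, refinement):
    """Apply fine-tuning to a coarse focus position.

    If the coarse move already reached the target, the refinement is zero
    (or absent) and the position is returned unchanged.
    """
    if refinement is None or refinement == (0, 0):
        return focus_pos
    dx, dy = refinement
    return (focus_pos[0] + dx, focus_pos[1] + dy)
```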
In the above scheme, the terminal obtains first touch data input through the touch panel and, when it determines that the touch operation corresponding to the first touch data is a preset focus control instruction, determines the information of the focus active area in the virtual reality field of view from the first touch data and moves the focus in the virtual reality field of view to the focus active area. It then obtains the fine-tuning information corresponding to the focus and adjusts the position of the focus accordingly. The virtual reality focus can thus be positioned precisely, making it convenient for the user to make selections from the menu bars or tabs contained in the virtual reality field of view through the focus, and thereby control the virtual reality image.
The terminal performs coarse positioning of the focus in the virtual reality field of view using the first touch data, and then fine-tunes the focus position using the second touch data or a contactless input, moving the focus to the target position. The focus in the virtual reality field of view can therefore be positioned quickly and accurately, improving both positioning speed and accuracy.
Referring to Fig. 5, Fig. 5 is a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or tablet computer, but is not limited to these; it may also be another kind of terminal, which is not limited here. The modules included in the terminal 300 of this embodiment are configured to perform the steps of the embodiment corresponding to Fig. 1; for details, refer to Fig. 1 and the related description of that embodiment, which is not repeated here. The terminal of this embodiment includes: an acquiring unit 510, a determining unit 520, a control unit 530 and an adjustment unit 540.
The acquiring unit 510 is configured to obtain first touch data input through the touch panel. For example, the acquiring unit 510 obtains the first touch data input through the touch panel and sends it to the determining unit 520.
The determining unit 520 is configured to receive the first touch data sent by the acquiring unit 510 and determine whether the touch operation corresponding to the first touch data is a preset focus control instruction. The determining unit 520 sends the determination result to the control unit 530.
The control unit 530 is configured to receive the determination result sent by the determining unit 520. If the result is that the touch operation corresponding to the first touch data is a preset focus control instruction, the control unit 530 determines the information of the focus active area in the virtual reality field of view from the first touch data and moves the focus in the virtual reality field of view to the focus active area.
After moving the focus in the virtual reality field of view to the focus active area, the control unit 530 sends a notification to the adjustment unit 540.
The adjustment unit 540 is configured to receive the notification sent by the control unit 530, obtain the fine-tuning information corresponding to the focus, and adjust the position of the focus according to that fine-tuning information.
In the above scheme, the terminal obtains first touch data input through the touch panel and, when it determines that the corresponding touch operation is a preset focus control instruction, determines the information of the focus active area in the virtual reality field of view from the first touch data and moves the focus in the virtual reality field of view to that area. It then obtains the fine-tuning information corresponding to the focus and adjusts the position of the focus accordingly. The virtual reality focus can thus be positioned precisely, making it convenient for the user to make selections from the menu bars or tabs contained in the virtual reality field of view through the focus, and thereby control the virtual reality image.
Continuing with Fig. 5, in another embodiment the modules included in the terminal 300 are configured to perform the steps of the embodiment corresponding to Fig. 2; for details, refer to Fig. 2 and the related description of that embodiment, which is not repeated here. Specifically:
The acquiring unit 510 is configured to obtain first touch data input through the touch panel and send it to the determining unit 520.
The determining unit 520 is configured to receive the first touch data sent by the acquiring unit 510, determine whether the touch operation corresponding to it is a preset focus control instruction, and send the determination result to the control unit 530.
The control unit 530 is configured to receive the determination result sent by the determining unit 520. If the result is that the touch operation corresponding to the first touch data is a preset focus control instruction, the control unit 530 determines the information of the focus active area in the virtual reality field of view from the first touch data and moves the focus in the virtual reality field of view to the focus active area.
Further, the control unit 530 is specifically configured to: determine the area and first position information of the first touch area corresponding to the first touch data; determine the area and second position information corresponding to the focus active area from the area and first position information of the first touch area; and move the focus to the focus active area according to the area and second position information corresponding to the focus active area.
Further, the control unit 530 is specifically configured to: determine an area conversion coefficient from the area of the touch panel and the virtual area of the virtual reality field of view; determine the area corresponding to the focus active area from the area conversion coefficient and the area of the first touch area; and determine the second position information corresponding to the focus active area from the first position information of the first touch area.
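The area-conversion step just described can be sketched as follows. This is a minimal illustration under assumed rectangular coordinates; the patent does not give the exact formula, so the helper name, the proportional position mapping, and the square-root distribution of the area ratio are all assumptions:

```python
def map_touch_to_focus_area(touch_rect, panel_size, view_size):
    """Map a first touch area on the panel to a focus active area in the
    virtual reality field of view, using an area conversion coefficient."""
    panel_w, panel_h = panel_size
    view_w, view_h = view_size
    # Area conversion coefficient: ratio of the view's virtual area
    # to the touch panel's area.
    coeff = (view_w * view_h) / (panel_w * panel_h)
    x, y, w, h = touch_rect  # first position information + touch-area size
    # Distribute the area ratio evenly over both axes.
    scale = coeff ** 0.5
    area_w, area_h = w * scale, h * scale
    # Second position information: map the first position proportionally.
    area_x = x / panel_w * view_w
    area_y = y / panel_h * view_h
    return (area_x, area_y, area_w, area_h)
```

With this sketch, a touch region covering a given fraction of the panel yields an active area covering the same fraction of the field of view, at the proportionally corresponding position.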
After moving the focus in the virtual reality field of view to the focus active area, the control unit 530 sends a notification to the adjustment unit 540.
The adjustment unit 540 is configured to receive the notification sent by the control unit 530, obtain the fine-tuning information corresponding to the focus, and adjust the position of the focus according to that fine-tuning information.
Further, the adjustment unit 540 is specifically configured to obtain the fine-tuning information corresponding to second touch data input through the touch panel and adjust the position of the focus according to that fine-tuning information.
Further, the adjustment unit 540 is specifically configured to obtain fine-tuning information entered through contactless input and adjust the position of the focus according to that fine-tuning information.
In the above scheme, the terminal obtains first touch data input through the touch panel and, when it determines that the corresponding touch operation is a preset focus control instruction, determines the information of the focus active area in the virtual reality field of view from the first touch data and moves the focus in the virtual reality field of view to that area. It then obtains the fine-tuning information corresponding to the focus and adjusts the position of the focus accordingly. The virtual reality focus can thus be positioned precisely, making it convenient for the user to make selections from the menu bars or tabs contained in the virtual reality field of view through the focus, and thereby control the virtual reality image.
The terminal performs coarse positioning of the focus in the virtual reality field of view using the first touch data, and then fine-tunes the focus position in the virtual reality field of view using the second touch data or a contactless input, moving the focus to the target position. The focus in the virtual reality field of view can therefore be positioned quickly and accurately, improving both positioning speed and accuracy.
Referring to Fig. 6, Fig. 6 is a schematic block diagram of a terminal provided by another embodiment of the present invention. The terminal 400 in this embodiment as shown may include: one or more processors 610; one or more input devices 620; one or more output devices 630; and a memory 640. The processor 610, input device 620, output device 630 and memory 640 are connected through a bus 650.
The memory 640 is configured to store program instructions.
The processor 610 performs the following operations according to the program instructions stored in the memory 640:
The processor 610 is configured to obtain first touch data input through the touch panel.
The processor 610 is further configured to determine whether the touch operation corresponding to the first touch data is a preset focus control instruction.
The processor 610 is further configured to, if the touch operation corresponding to the first touch data is a preset focus control instruction, determine the information of the focus active area in the virtual reality field of view from the first touch data and move the focus in the virtual reality field of view to the focus active area.
The processor 610 is further configured to obtain the fine-tuning information corresponding to the focus and adjust the position of the focus according to that fine-tuning information.
Further, the processor 610 is specifically configured to: determine the area and first position information of the first touch area corresponding to the first touch data; determine the area and second position information corresponding to the focus active area from the area and first position information of the first touch area; and move the focus to the focus active area according to the area and second position information corresponding to the focus active area.
Further, the processor 610 is specifically configured to: determine an area conversion coefficient from the area of the touch panel and the virtual area of the virtual reality field of view; determine the area corresponding to the focus active area from the area conversion coefficient and the area of the first touch area; and determine the second position information corresponding to the focus active area from the first position information of the first touch area.
Further, the processor 610 is specifically configured to obtain the fine-tuning information corresponding to second touch data input through the touch panel and adjust the position of the focus according to that fine-tuning information.
Further, the processor 610 is specifically configured to obtain fine-tuning information entered through contactless input and adjust the position of the focus according to that fine-tuning information.
In the above scheme, the terminal obtains the face information corresponding to a spot photograph to be shared; obtains the contact information corresponding to the face information according to a preset correspondence between face feature data and contact information; and sends the spot photograph according to the contact information corresponding to the face information. Since the terminal does not need to manually select the contact information of the recipients, the time spent searching for contacts can be saved and photograph-sharing efficiency is effectively improved; and because the terminal can transmit a photograph captured in real time according to its corresponding contact information, the time spent selecting the photographs to be shared is also saved, further improving sharing efficiency.
Before taking a photograph, when contact information corresponding to some face feature data contained in the preview image cannot be obtained, the terminal can prompt the user to enter the missing contact information and establish the correspondence between that contact information and the face feature data lacking it in the preview image, saving it in a local database. This ensures that contact information matching the face feature data in the captured photograph can always be obtained, so that the user can share a photograph captured in real time with all the contacts corresponding to that spot photograph. It prevents the situation in which, when sharing a photograph, some face feature data in the spot photograph lacks corresponding contact information, saves the user the time of entering the missing contact information, and further improves sharing efficiency.
It should be appreciated that, in embodiments of the present invention, the processor 610 may be a central processing unit (Central Processing Unit, CPU); the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 620 may include a trackpad, a fingerprint sensor (for collecting fingerprint information of the user and fingerprint direction information), a microphone, and the like; the output device 630 may include a display (such as an LCD), a speaker, and the like.
The memory 640 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610. A part of the memory 640 may also include a non-volatile random access memory. For example, the memory 640 may also store information on the device type.
In a specific implementation, the processor 610, input device 620 and output device 630 described in the embodiment of the present invention can perform the implementations described in the first and second embodiments of the control method of the virtual reality focus provided by the embodiment of the present invention, and can also perform the implementation of the terminal described by the embodiment of the present invention, which is not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the terminal and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division of the units is only a division by logical function, and there may be other ways of dividing them in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The steps in the methods of the embodiments of the present invention may be reordered, merged and deleted according to actual needs.
The units in the terminal of the embodiments of the present invention may be merged, divided and deleted according to actual needs.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or any other medium that can store program code.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the invention, and these modifications or replacements shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A control method of a virtual reality focus, characterized in that the method comprises:
obtaining first touch data input through a touch panel;
determining whether a touch operation corresponding to the first touch data is a preset focus control instruction;
if the touch operation corresponding to the first touch data is the preset focus control instruction, determining, according to the first touch data, information of a focus active area in a virtual reality field of view, and moving a focus in the virtual reality field of view to the focus active area;
obtaining fine-tuning information corresponding to the focus, and adjusting a position of the focus according to the fine-tuning information.
2. The method according to claim 1, characterized in that the determining, according to the first touch data, information of the focus active area in the virtual reality field of view if the touch operation corresponding to the first touch data is the preset focus control instruction, and the moving of the focus in the virtual reality field of view to the focus active area comprise:
determining an area and first position information of a first touch area corresponding to the first touch data;
determining, according to the area and first position information of the first touch area, an area and second position information corresponding to the focus active area;
moving the focus to the focus active area according to the area and second position information corresponding to the focus active area.
3. The method according to claim 2, characterized in that the determining, according to the area and first position information of the first touch area, the area and second position information corresponding to the focus active area comprises:
determining an area conversion coefficient according to the area of the touch panel and a virtual area of the virtual reality field of view;
determining the area corresponding to the focus active area according to the area conversion coefficient and the area of the first touch area;
determining the second position information corresponding to the focus active area according to the first position information of the first touch area.
4. The method according to any one of claims 1 to 3, characterized in that the obtaining of the fine-tuning information corresponding to the focus and the adjusting of the position of the focus according to the fine-tuning information comprise:
obtaining fine-tuning information corresponding to second touch data input through the touch panel, and adjusting the position of the focus according to the fine-tuning information.
5. The method according to any one of claims 1 to 3, characterized in that the obtaining of the fine-tuning information corresponding to the focus and the adjusting of the position of the focus according to the fine-tuning information comprise:
obtaining fine-tuning information through contactless input, and adjusting the position of the focus according to the fine-tuning information.
6. A terminal, characterized in that the terminal comprises:
an acquiring unit, configured to obtain first touch data input through a touch panel;
a determining unit, configured to determine whether a touch operation corresponding to the first touch data is a preset focus control instruction;
a control unit, configured to, if the touch operation corresponding to the first touch data is the preset focus control instruction, determine, according to the first touch data, information of a focus active area in a virtual reality field of view, and move a focus in the virtual reality field of view to the focus active area;
an adjustment unit, configured to obtain fine-tuning information corresponding to the focus, and adjust a position of the focus according to the fine-tuning information.
7. The terminal according to claim 6, characterized in that the control unit is specifically configured to: determine an area and first position information of a first touch area corresponding to the first touch data; determine, according to the area and first position information of the first touch area, an area and second position information corresponding to the focus active area; and move the focus to the focus active area according to the area and second position information corresponding to the focus active area.
8. The terminal according to claim 7, characterized in that the control unit is specifically configured to: determine an area conversion coefficient according to the area of the touch panel and a virtual area of the virtual reality field of view; determine the area corresponding to the focus active area according to the area conversion coefficient and the area of the first touch area; and determine the second position information corresponding to the focus active area according to the first position information of the first touch area.
9. The terminal according to any one of claims 6 to 8, characterized in that the adjustment unit is specifically configured to obtain fine-tuning information corresponding to second touch data input through the touch panel, and adjust the position of the focus according to the fine-tuning information.
10. The terminal according to any one of claims 6 to 8, characterized in that the adjustment unit is specifically configured to obtain fine-tuning information entered through contactless input, and adjust the position of the focus according to the fine-tuning information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610571853.XA CN106201213A (en) | 2016-07-19 | 2016-07-19 | The control method of a kind of virtual reality focus and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610571853.XA CN106201213A (en) | 2016-07-19 | 2016-07-19 | The control method of a kind of virtual reality focus and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106201213A true CN106201213A (en) | 2016-12-07 |
Family
ID=57493507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610571853.XA Withdrawn CN106201213A (en) | Control method and terminal for a virtual reality focus | 2016-07-19 | 2016-07-19 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106201213A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112068757A (en) * | 2020-08-03 | 2020-12-11 | 北京理工大学 | Target selection method and system for virtual reality |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103733115A (en) * | 2011-06-30 | 2014-04-16 | Google Inc. | Wearable computer with curved display and navigation tool |
CN104471521A (en) * | 2012-05-09 | 2015-03-25 | Apple Inc. | Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object |
US20150235418A1 (en) * | 2013-11-27 | 2015-08-20 | Magic Leap, Inc. | Determining user accommodation to display an image at a desired focal distance using freeform optics |
CN104914985A (en) * | 2014-03-13 | 2015-09-16 | ALi Corporation | Gesture control method and system and video stream processing device |
CN105068653A (en) * | 2015-07-22 | 2015-11-18 | Shenzhen Duoxinduo Technology Co., Ltd. | Method and apparatus for determining touch event in virtual space |
CN105075254A (en) * | 2013-03-28 | 2015-11-18 | Sony Corporation | Image processing device and method, and program |
Similar Documents
Publication | Title
---|---
AU2022228121B2 | User interfaces for simulated depth effects
US10379733B2 | Causing display of a three dimensional graphical user interface with dynamic selectability of items
EP2939095B1 | Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics
EP4300430A2 | Device, method, and graphical user interface for composing CGR files
US11250604B2 | Device, method, and graphical user interface for presenting CGR files
WO2020068374A1 | Audio assisted enrollment
US11604580B2 | Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
CN106210521A | Photographing method and terminal
US11363071B2 | User interfaces for managing a local network
WO2022055753A1 | User interfaces for indicating distance
WO2020245647A1 | User interface for managing input techniques
CN106249879A | Display method and terminal for virtual reality images
CN106201222A | Display method and terminal for a virtual reality interface
CN106227752A | Photo sharing method and terminal
CN106201213A | Control method and terminal for a virtual reality focus
KR20180088859A | Method for changing graphics processing resolution according to a scenario
CN106155346A | Method and apparatus for generating emoticon text
CN106231190A | Imaging method and terminal based on simultaneously enabled front and rear cameras
CN109388244B | Gravity center adjusting method and device for terminal equipment and terminal equipment
CN116027962A | Virtual key setting method, device, equipment and computer storage medium
WO2022260860A1 | User interfaces for managing passwords
CN106227396A | Method and terminal for displaying information
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20161207 |